diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Al Amin Accounting Software Crack Keygen The Ultimate Guide for Windows Users.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Al Amin Accounting Software Crack Keygen The Ultimate Guide for Windows Users.md deleted file mode 100644 index 8e45a570a40870b0a884fbe03e920afb8f6388e7..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Al Amin Accounting Software Crack Keygen The Ultimate Guide for Windows Users.md +++ /dev/null @@ -1,152 +0,0 @@ - -

Al-Amin Accounting Software: A Comprehensive Solution for Your Business Needs

-

If you are looking for reliable, efficient, and user-friendly accounting software for your business, you might want to consider Al-Amin Accounting Software. Developed by SyrianSoft, a leading software company in the Middle East that has been building accounting solutions since 1992, it is designed to meet the needs of small, medium, and large businesses across a wide range of sectors and industries. It offers a range of features and benefits that can help you manage your business operations more effectively and efficiently.

-

al amin accounting software crack keygen


Download ⚙⚙⚙ https://byltly.com/2uKwf4



-

In this article, we will explore the features and benefits of Al-Amin Accounting Software, how to download and install it on your computer, how to crack and activate it (and why you shouldn't), and some alternatives to consider. By the end of this article, you will have a better understanding of what Al-Amin Accounting Software can do for your business and how to get started with it.

-

Features and Benefits of Al-Amin Accounting Software

-

Al-Amin Accounting Software is a comprehensive solution that covers various aspects of your business management. It has four main modules: accounting and financial management, inventory and warehouse management, sales and customer relationship management, and human resources and payroll management. Each module has its own features and benefits that can help you streamline your business processes and improve your productivity and profitability. Here are some of the key features and benefits of each module:

-

Accounting and financial management

-

This module helps you manage your accounts, invoices, payments, budgets, etc. with ease and accuracy. Some of the features and benefits of this module are:

- -

Inventory and warehouse management

-

This module helps you track your stock, purchases, sales, transfers, etc. with ease and accuracy. Some of the features and benefits of this module are:

-

al amin accounting software activation code
-al amin accounting software license key generator
-al amin accounting software serial number free download
-al amin accounting software full version cracked
-al amin accounting software patch file
-al amin accounting software registration key
-al amin accounting software unlock code
-al amin accounting software crack keygen torrent
-al amin accounting software crack keygen online
-al amin accounting software crack keygen download
-al amin accounting software crack keygen 2021
-al amin accounting software crack keygen 2022
-al amin accounting software crack keygen 2023
-al amin accounting software crack keygen latest version
-al amin accounting software crack keygen updated version
-al amin accounting software crack keygen for windows
-al amin accounting software crack keygen for mac
-al amin accounting software crack keygen for linux
-al amin accounting software crack keygen for android
-al amin accounting software crack keygen for ios
-how to crack al amin accounting software
-how to get al amin accounting software for free
-how to install al amin accounting software cracked version
-how to use al amin accounting software without license key
-how to bypass al amin accounting software activation
-is it safe to use al amin accounting software crack keygen
-is it legal to use al amin accounting software crack keygen
-is it ethical to use al amin accounting software crack keygen
-what are the benefits of using al amin accounting software crack keygen
-what are the risks of using al amin accounting software crack keygen
-what are the alternatives to using al amin accounting software crack keygen
-where to find al amin accounting software crack keygen
-where to download al amin accounting software crack keygen
-where to buy al amin accounting software crack keygen
-where to sell al amin accounting software crack keygen
-who uses al amin accounting software crack keygen
-who makes al amin accounting software crack keygen
-who sells al amin accounting software crack keygen
-why use al amin accounting software crack keygen
-why not use al amin accounting software crack keygen
-best way to use al amin accounting software crack keygen
-best place to get al amin accounting software crack keygen
-best source of al amin accounting software crack keygen
-best method to generate al amin accounting software crack keygen
-best tool for creating al amin accounting software crack keygen
-easiest way to use al amin accounting software crack keygen
-easiest place to get al amin accounting software crack keygen
-easiest source of al amin accounting software crack keygen
-easiest method to generate al amin accounting software crack keygen

- -

Sales and customer relationship management

-

This module helps you manage your sales orders, quotations, contracts, customers, etc. with ease and efficiency. Some of the features and benefits of this module are:

- -

Human resources and payroll management

-

This module helps you manage your employees, salaries, deductions, leaves, etc. with ease and compliance. Some of the features and benefits of this module are:

- -

How to Download and Install Al-Ameen Accounting Software

-

If you are interested in trying out Al-Ameen Accounting Software for yourself or for your business, you can download it from the official website of SyrianSoft. Here are the steps to download and install Al-Ameen Accounting Software on your computer:

-

System requirements

-

Before downloading Al-Ameen Accounting Software, make sure that your computer meets the minimum or recommended specifications for running the software.

-

According to the developer's website, the minimum and recommended system requirements for Al-Ameen Accounting Software are as follows:

| Software | Minimum | Recommended |
| --- | --- | --- |
| Microsoft SQL Server | 2012 | 2012 or higher |
| Microsoft .NET Framework | 4.5.2 | 4.5.2 or higher |
| Visual C++ Redistributable for Visual Studio | 2015 | 2015 or higher |
| Sentinel Protection Key | Required | Required |
| Internet Explorer | 11 | 11 or higher |
| Platform Update (Windows 7 SP1 and Windows Server 2008 R2 SP1) | Required | Required |

| Hardware | Minimum | Recommended |
| --- | --- | --- |
| Processor | 1 GHz | 2 GHz or higher |
| Memory | 2 GB | 4 GB or higher |
| Hard Disk (Free Space) | 500 MB | 1 GB or higher |

Download links

-

To download Al-Ameen Accounting Software, you need to visit the official website of SyrianSoft and register for an account. After logging in, you can access the download page and choose the version that suits your needs. The latest version of Al-Ameen Accounting Software is 9.0 (900.11), released on May 18, 2017. The download package consists of two files: Release_Notes.pdf and V_9_900_16_11.exe. The total size of the package is about 255 MB.

-

Installation steps

-

To install Al-Ameen Accounting Software on your computer, you need to follow these steps:

-
    -
  1. Download the two files from the download page and save them in one folder on your hard disk.
  2. Click the file V_9_900_16_11.exe and an extraction window will appear. Click the Extract button and wait for the extraction process to finish.
  3. A new file, Ameen.exe, will appear in the same folder where you saved the downloaded files. Click this file and the installation wizard will start on your computer.
  4. Follow the instructions on the screen to complete the installation process. You may need to restart your computer after the installation.
  5. After restarting your computer, you can launch Al-Ameen Accounting Software from the Start menu or from the desktop shortcut.
-

How to Crack and Activate Al-Ameen Accounting Software

-

If you are wondering how to crack and activate Al-Ameen Accounting Software, we have some bad news for you: it is not possible, and even if it were, it would be illegal and unethical. Here are some reasons why you should not try to crack and activate Al-Ameen Accounting Software:

-

Disclaimer

-

Al-Ameen Accounting Software is a licensed software that requires a valid protection key to run. The protection key is a hardware device that plugs into your computer's USB port and verifies your license with the developer's server. Without the protection key, Al-Ameen Accounting Software will run as a demo version with limited functionality and data entry. Cracking and activating Al-Ameen Accounting Software means bypassing the protection key and using a fake license to run the full version of the software. This is a violation of the terms and conditions of use of Al-Ameen Accounting Software and an infringement of the intellectual property rights of SyrianSoft. By cracking and activating Al-Ameen Accounting Software, you are committing a crime that can result in legal action against you.

-

Risks and consequences

-

Even if you manage to find a way to crack and activate Al-Ameen Accounting Software, you are exposing yourself to various risks and consequences that can harm your computer, your data, and your business. Some of these risks and consequences are:

- -

Alternatives

-

If you are looking for alternatives to cracking and activating Al-Ameen Accounting Software, you have some options that are legal and ethical. Some of these options are:

- -

Conclusion

-

In conclusion, Al-Ameen Accounting Software is a comprehensive solution for your business needs that offers various features and benefits to help you manage your accounting, inventory, sales, and payroll processes more effectively and efficiently. It is easy to download and install on your computer, but it requires a valid protection key to run. Cracking and activating Al-Ameen Accounting Software is not possible, and even if it were, it would be illegal and unethical. You should avoid doing so and look for legal and ethical alternatives instead. We hope this article has given you a clear overview of what Al-Ameen Accounting Software can do for your business and how to get started with it. If you have any questions or comments, please feel free to contact us. We would love to hear from you.

-

Frequently Asked Questions

-

Here are some frequently asked questions about Al-Ameen Accounting Software:

-
    -
  1. What is the price of Al-Ameen Accounting Software?
    The price of Al-Ameen Accounting Software depends on the number of users, modules, and features you need. You can contact SyrianSoft or its authorized dealers for a quotation.
  2. How can I get support for Al-Ameen Accounting Software?
    You can get support for Al-Ameen Accounting Software by contacting SyrianSoft or its authorized dealers via phone, email, or online chat. You can also visit their website for online help, tutorials, and FAQs.
  3. Can I use Al-Ameen Accounting Software on multiple computers?
    Yes, you can use Al-Ameen Accounting Software on multiple computers as long as they are connected to the same network. You will need one protection key per computer, however.
  4. Can I customize Al-Ameen Accounting Software according to my needs?
    Yes, you can customize Al-Ameen Accounting Software according to your needs by using its built-in tools such as the report designer, form designer, and label designer. You can also request custom development services from SyrianSoft or its authorized dealers if you need more advanced customization.
  5. Can I integrate Al-Ameen Accounting Software with other software?
    Yes, you can integrate Al-Ameen Accounting Software with other software by using its built-in tools such as data import/export, data synchronization, and web services. You can also request integration services from SyrianSoft or its authorized dealers if you need more complex integration.
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EASEUS Partition Master 6.0.1 Server Edition Portable 64 Bit.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EASEUS Partition Master 6.0.1 Server Edition Portable 64 Bit.md deleted file mode 100644 index 0fdc3ba505ed3d1239bf0df9d3cdef664455af1e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EASEUS Partition Master 6.0.1 Server Edition Portable 64 Bit.md +++ /dev/null @@ -1,119 +0,0 @@ - -

EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit

-

EASEUS Partition Master is a powerful and easy-to-use partition software that allows you to create, resize, move, merge, split, clone, recover, convert, and manage disk partitions on Windows servers and PCs. It supports various file systems such as FAT32, NTFS, EXT2/EXT3/EXT4, ReFS, exFAT, etc. It also supports MBR and GPT disk styles, dynamic disks and volumes, RAID arrays, SSDs and HDDs, USB drives and memory cards.

-

In this article, we will introduce EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, which is a special version of EASEUS Partition Master that can run directly from a USB flash drive or CD/DVD without installation. We will also show you how to use it to perform some common partition operations on your server or PC.

-

EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit


Download Zip ⇒⇒⇒ https://byltly.com/2uKxp0



-

What is EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit is a portable version of EASEUS Partition Master 6.0.1 Server Edition that can run on any Windows server or PC with a 64-bit processor without installation or activation. It has all the features of EASEUS Partition Master 6.0.1 Server Edition, which include:

- -

EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit is compatible with Windows Server 2003/2008/2012/2016/2019 and Windows XP/Vista/7/8/10 (64-bit only). It supports up to 32 disks and unlimited hard disk capacity.

-

Why use EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit has some advantages over other partition software:

- -

How to use EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

To use EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:

-
    -
  1. Download EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit from the official website or some third-party sources. The file size is about 40 MB.
  2. Extract the downloaded file to a USB flash drive or CD/DVD. You can use any compression software such as WinRAR or 7-Zip to do this.
  3. Connect the USB flash drive or CD/DVD to the server or PC that you want to manage the disk partitions on.
  4. Run the EPM.exe file from the USB flash drive or CD/DVD. You will see the main interface of EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit.
  5. Select the disk or partition that you want to operate on from the disk map or the list on the left panel.
  6. Right-click on the disk or partition and choose the desired operation from the context menu. You can also use the toolbar buttons or the menu bar options to access the operations.
  7. Follow the instructions on the screen to complete the operation. You may need to confirm some actions or restart your system depending on the operation.
-

Some common partition operations with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit

-

In this section, we will show you how to perform some common partition operations with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, such as resizing, cloning, merging, splitting, converting, and recovering partitions.

-

How to resize a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

To resize a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:

-
    -
  1. Select the partition that you want to resize from the disk map or the list on the left panel.
  2. Right-click on the partition and choose Resize/Move from the context menu.
  3. In the pop-up window, drag the left or right border of the partition to adjust its size. You can also enter the exact size in MB in the boxes below.
  4. Click OK to confirm the changes. You will see a pending operation on the bottom panel.
  5. Click Apply on the toolbar to execute the operation. You may need to restart your system if you are resizing a system partition.
-

How to clone a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

To clone a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:

-
    -
  1. Select the disk or partition that you want to clone from the disk map or the list on the left panel.
  2. Right-click on the disk or partition and choose Clone from the context menu.
  3. In the pop-up window, select the destination disk or partition that you want to clone to. Make sure it has enough space to hold all the data from the source disk or partition.
  4. Click Next to continue. You can choose to clone the disk or partition sector by sector or adjust the partition layout on the destination disk or partition.
  5. Click Proceed to start the cloning process. You may need to restart your system if you are cloning a system disk or partition.
-

How to merge partitions with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

To merge partitions with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:

-

-
    -
  1. Select one of the partitions that you want to merge from the disk map or the list on the left panel.
  2. Right-click on the partition and choose Merge from the context menu.
  3. In the pop-up window, select another partition that you want to merge with the first one. The two partitions must be adjacent and have the same file system.
  4. Click OK to confirm the changes. You will see a pending operation on the bottom panel.
  5. Click Apply on the toolbar to execute the operation. You may need to restart your system if you are merging a system partition.
-

How to split a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

To split a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:

-
    -
  1. Select the partition that you want to split from the disk map or the list on the left panel.
  2. Right-click on the partition and choose Split from the context menu.
  3. In the pop-up window, drag the slider or enter the size in MB to specify how much space you want to allocate for the new partition.
  4. Click OK to confirm the changes. You will see a pending operation on the bottom panel.
  5. Click Apply on the toolbar to execute the operation. You may need to restart your system if you are splitting a system partition.
-

How to convert a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

To convert a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:

-
    -
  1. Select the disk or partition that you want to convert from the disk map or the list on the left panel.
  2. Right-click on the disk or partition and choose Convert from the context menu.
  3. In the pop-up window, choose whether you want to convert a disk from MBR to GPT or vice versa, or convert a partition from one file system to another.
  4. Click OK to confirm the changes. You will see a pending operation on the bottom panel.
  5. Click Apply on the toolbar to execute the operation. You may need to restart your system if you are converting a system disk or partition.
-

How to recover a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

To recover a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:

-
    -
  1. Select an unallocated space or a damaged disk that contains the deleted or lost partition from the disk map or the list on the left panel.
  2. Right-click on the unallocated space or the damaged disk and choose Partition Recovery from the context menu.
  3. In the pop-up window, choose whether you want to perform a quick scan or a deep scan to search for the deleted or lost partition. A quick scan is faster but may not find all the partitions, while a deep scan is slower but more thorough.
  4. Click Next to start the scanning process. You will see a list of found partitions on the right panel.
  5. Select the partition that you want to recover and click Proceed to recover it. You can also preview the files on the partition before recovering it.
  6. Click Apply on the toolbar to execute the operation. You may need to restart your system if you are recovering a system partition.
-

Conclusion

-

EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit is a powerful and easy-to-use partition software that can run directly from a USB flash drive or CD/DVD without installation. It can help you create, resize, move, merge, split, clone, recover, convert, and manage disk partitions on Windows servers and PCs. It supports various file systems, disk styles, dynamic disks and volumes, RAID arrays, SSDs and HDDs, USB drives and memory cards. It is fast, reliable, versatile, and cost-effective. It is a great tool for disk partition management and maintenance.

-

FAQs

-

Q: How can I get EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

A: You can get EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit for free from the official website or some third-party sources. You can also download it from this link:

-

Q: What are the system requirements for EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

A: EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit requires a Windows server or PC with a 64-bit processor, at least 512 MB of RAM, and at least 100 MB of free disk space.

-

Q: What are the limitations of EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

A: EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit has some limitations compared to other versions of EASEUS Partition Master, such as:

- -

Q: How can I update EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?

-

A: EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit does not support automatic updates. You need to download the latest version from the official website or some third-party sources and replace the old version on your USB flash drive or CD/DVD.

-

Q: How can I contact EASEUS for technical support or feedback?

-

A: You can contact EASEUS by email at support@easeus.com or by phone at +1-800-570-4634 (toll-free in US and Canada) or +86-28-85432479 (international). You can also visit their website for more information and resources.

b2dd77e56b
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Full Version The Ultimate Solution for Creating and Presenting Worship Media.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Full Version The Ultimate Solution for Creating and Presenting Worship Media.md deleted file mode 100644 index fa4ae2cfd50e03dce5fcad2aed38f86bb82312bb..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Full Version The Ultimate Solution for Creating and Presenting Worship Media.md +++ /dev/null @@ -1,22 +0,0 @@ - -

How to Download and Install EasyWorship 7 Full Version for Free

-

EasyWorship 7 is powerful and easy-to-use worship presentation software that allows you to create and present worship slides, lyrics, videos, scriptures, and more. With EasyWorship 7, you can design and customize your own media library, schedule and manage your services, and control your presentation from any device. EasyWorship 7 is a great tool for churches, ministries, and worship teams who want to enhance their worship experience and engage their audience.

-

easyworship 7 full version


Downloadhttps://byltly.com/2uKzqy



-

However, EasyWorship 7 is not free software. You need to purchase a license to use it legally and access all its features. The official price of EasyWorship 7 is $499 for the full version and $199 for the upgrade version. This may be too expensive for some users who want to try out the software or use it for personal or non-commercial purposes.

-

Fortunately, there is a way to download and install EasyWorship 7 full version for free and use it without paying anything. In this article, we will show you how to do that step by step. But before we proceed, we want to warn you that downloading and using cracked software is illegal and risky. You may face legal consequences, malware infections, data loss, or other problems if you choose to do so. We do not condone or encourage piracy in any way. This article is for educational purposes only.

-

What is EasyWorship 7 Full Version?

-

A full version of a program is a complete, unlocked edition that has all the features and functions of the original software. It usually requires a license key or activation code to be used legally and properly.

-

-

EasyWorship 7 full version is a complete and unlocked version of EasyWorship 7 that has all the features and functions of the original software. It does not require a license key or activation code to use it. It also has some additional features or functions that are not available in the official release. For example, some users claim that the full version has more themes, backgrounds, fonts, and transitions than the original one.

-

However, using EasyWorship 7 full version also has some drawbacks and risks. For one thing, it is illegal and violates the terms and conditions of Softouch Development Inc., the developer of EasyWorship. You may face legal actions or penalties if you are caught using it. For another thing, it is unsafe and unreliable. You may download malware or viruses along with the full version that can harm your computer or steal your data. You may also experience errors, crashes, bugs, or compatibility issues that can affect your work quality and efficiency.

-

How to Download and Install EasyWorship 7 Full Version for Free?

-

If you still want to download and install EasyWorship 7 full version for free despite the risks and consequences, here are the steps you need to follow:

-
    -
  1. Go to a website that offers EasyWorship 7 full version for free download. There are many websites that claim to provide this service, but not all of them are trustworthy or legitimate. Some of them may contain malware or viruses that can infect your computer or redirect you to other unwanted sites. To avoid this, you should do some research and check the reviews and ratings of the website before downloading anything from it.
  2. Select the download link or button and wait for the download process to start. Depending on the size of the file and your internet speed, this may take some time. You may also need to complete some surveys or offers before you can access the download link.
  3. Once the download is complete, locate the file on your computer and extract it using a file extractor program such as WinRAR or 7-Zip. You should see a folder containing the setup file and the crack file.
  4. Run the setup file and follow the instructions to install EasyWorship 7 on your computer. You may need to enter some information such as your name, email address, or country during the installation process.
  5. After the installation is done, do not run or open EasyWorship 7 yet. Instead, go to the folder where you extracted the crack file and copy it.
  6. Paste the crack file into the installation directory of EasyWorship 7. This is usually located at C

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Chess Titans Free _HOT_ Download Full Version For Pc.md b/spaces/1gistliPinn/ChatGPT4/Examples/Chess Titans Free _HOT_ Download Full Version For Pc.md deleted file mode 100644 index 97c1a7d013112dc50a0a42cfbd6516aff23563d8..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Chess Titans Free _HOT_ Download Full Version For Pc.md +++ /dev/null @@ -1,48 +0,0 @@ -

    Chess Titans Free Download Full Version For Pc


    DOWNLOAD ——— https://imgfil.com/2uxYVg



    -
You can play chess against the computer and see your progress. There is also a friendly ranking system to see who is the best player of the tournament. With a single click you can take a snapshot, add new pieces or save the game.

Easy to play, easy to learn

Simple three-dimensional graphics, to keep it as clear and easy to learn as possible. Simply drag and drop your pieces into the game to play. Want to play chess with the computer? You can even set the computer to play for you.

A traditional look

Choose your colors and set the background and playing pieces. You can even change the background and use hex colors. The game is classic in its look, but there is a lot of detail.

Play against the computer

Play against the computer in a friendly competition. You can choose the level of difficulty or play a friend's game. The computer knows the standard moves and pieces, so you don't have to tell it. Create your own board or play against the computer in a three-dimensional board.

Chess Titans for Windows lets you play three different board sizes, with three levels of difficulty. It also comes with eight unique game boards to choose from. It is also a friendly competition between friends, as there are 10,000 different boards available.

The new version of Chess Titans has been completely redesigned. New chess engines are used to play the game: HyperChess and Chess King. The game is better than ever, and has a completely new user interface.

Use the 10,000 boards available

Play a friend's game or play against the computer

Create your own board or play against the computer

Controls:

- Move your pieces: left and right arrow keys
- Drag a piece to a new square: W
- Drag a piece to open the piece menu: A
- Drag a piece to select a piece: S
- Switch a piece with another piece: B
- Take a snapshot: Ctrl+F
- List the pieces on the board: Space bar
- Save the game: Ctrl+S

Chess Titans for Windows is a classic chess game, but with a twist. After starting the game, you can play with or against the computer. You can choose the type of game, board size and level of difficulty. There are 10 4fefd39f24
    -
    -
    -

    diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dominos Pizza - Food Delivery APK A Must-Have App for Pizza Lovers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dominos Pizza - Food Delivery APK A Must-Have App for Pizza Lovers.md deleted file mode 100644 index 8660baa869f9261225cc52fd5dffcafd964cc238..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dominos Pizza - Food Delivery APK A Must-Have App for Pizza Lovers.md +++ /dev/null @@ -1,113 +0,0 @@ -
    -

    Domino's APK: How to Order Pizza Online with Ease

    -

    Do you love pizza? Do you want to order it online from the comfort of your home or office? Do you want to enjoy delicious pizza at affordable prices and fast delivery? If you answered yes to any of these questions, then you need to download Domino's APK on your Android device.

    -

    What is Domino's APK?

    -

    A brief introduction to the app and its features

    -

    Domino's APK is the official app of Domino's Pizza, one of the most popular pizza chains in the world. With this app, you can order pizza online from your nearest Domino's outlet and get it delivered to your doorstep in no time. You can also customize your pizza with your choice of crust, toppings, cheese, and sauces. You can also order other items from the menu, such as pasta, sandwiches, salads, desserts, drinks, and more.

    -

    dominos apk


    DOWNLOAD > https://urlin.us/2uSUQg



    -

    How to download and install the app on your device

    -

    Downloading and installing Domino's APK is very easy and simple. All you have to do is follow these steps:

    -
      -
  1. Search for Domino's Pizza (or "pizza delivery") in the Google Play Store on your Android device and tap Install.
  2. Wait for the app to download and install on your device.
  3. Open the app and grant the necessary permissions for location, camera, storage, etc.
  4. You are ready to order pizza online with Domino's APK.
    -

    How to Use Domino's APK to Order Pizza Online

    -

    How to create an account and log in

    -

    To use Domino's APK, you need to create an account and log in with your email address or phone number. You can also sign up with your Facebook or Google account. Creating an account will help you save your preferences, address, payment details, and order history. You can also earn rewards points for every order you place with Domino's APK.

    -

    How to browse the menu and customize your order

    -

    Once you log in, you can browse the menu by tapping on the categories or using the search bar. You can also filter the menu by price, popularity, or ratings. You can tap on any item you like and see its details, such as ingredients, calories, price, etc. You can also customize your order by adding or removing toppings, cheese, sauces, etc. You can also choose the size and quantity of your order.

    -

    How to apply coupons and offers

    -

    Domino's APK offers various coupons and offers that can help you save money on your order. You can find them on the home screen or under the deals section. You can also enter a coupon code manually if you have one. To apply a coupon or offer, simply select it and add it to your cart. You will see the discounted price on your checkout screen.

    -

    How to track your order and enjoy contactless delivery

    -

    After placing your order, you can track its status and progress on the app or on the website. You can also call the store or the delivery person if you have any queries or issues. Domino's APK also offers contactless delivery, which means you can get your order delivered without any physical contact with the delivery person. You can choose this option on the app or on the website and pay online. You can also instruct the delivery person to leave your order at a safe place, such as your doorstep, lobby, or gate.

    -

    Why Choose Domino's APK for Pizza Delivery?

    -

    The benefits of ordering from Domino's

    -

    There are many reasons why you should choose Domino's APK for pizza delivery. Here are some of them:

    - -

    The customer reviews and ratings of the app

    -

    Domino's APK has received positive feedback and ratings from its users. The app has a 4.5-star rating on the Google Play Store and a 4.7-star rating on the Apple App Store. Here are some of the reviews from the users:

    -

    dominos pizza app download
    -dominos online ordering apk
    -dominos app for android free
    -dominos pizza delivery apk
    -dominos app latest version
    -dominos apk mod
    -dominos app coupon code
    -dominos pizza tracker apk
    -dominos app not working
    -dominos apk old version
    -dominos app rewards
    -dominos pizza maker apk
    -dominos app deals
    -dominos apk mirror
    -dominos app review
    -dominos pizza menu apk
    -dominos app login
    -dominos apk pure
    -dominos app gift card
    -dominos pizza game apk
    -dominos app offers
    -dominos apk file
    -dominos app feedback
    -dominos pizza coupons apk
    -dominos app update
    -dominos apk for pc
    -dominos app contact number
    -dominos pizza maker game apk
    -dominos app promo code
    -dominos apk hack
    -dominos app customer service
    -dominos pizza online apk
    -dominos app payment options
    -dominos pizza simulator apk
    -dominos app referral code
    -dominos apk uptodown
    -dominos app store
    -dominos pizza order tracker apk
    -dominos app discount code
    -dominos apk cracked
    -dominos app support
    -dominos pizza maker simulator apk
    -dominos app free pizza points
    -dominos apk android 4.4.2
    -dominos app faq
    -domino's pizza food delivery apk
    -domino's app order history
    -domino's pizza maker 3d cooking game apk

    -
    -

    "I love this app. It's easy to use and I can order pizza anytime I want. The delivery is fast and the pizza is always hot and delicious. I also like the coupons and offers that they have. I highly recommend this app to anyone who loves pizza."

    -

    "This app is awesome. It has everything I need to order pizza online. I can customize my pizza, apply coupons, track my order, and enjoy contactless delivery. The app is also very secure and reliable. I have never had any issues with it."

    -

    "This app is amazing. It saves me time and money when I order pizza online. The app is very simple and intuitive to use. I can also earn rewards points for every order and get free pizza and other perks. The app is a must-have for pizza lovers."

    -
    -

    The comparison with other pizza delivery apps

    -

    Domino's APK is not the only pizza delivery app available in the market. There are other apps that offer similar services, such as Pizza Hut, Papa John's, Little Caesars, etc. However, Domino's APK stands out from the rest in terms of quality, speed, convenience, and value. Here is a table that compares Domino's APK with other pizza delivery apps:

    - - - - - - -
    Pizza Delivery AppMenu VarietyDelivery TimeCustomer SatisfactionLoyalty Program
    Domino's APKHigh30 minutes or less100% guaranteePiece of the Pie Rewards
    Pizza HutMedium40 minutes or moreNo guaranteeHut Rewards
    Papa John'sLow45 minutes or moreNo guaranteePapa Rewards
    Little CaesarsLowNo delivery optionNo guaranteeNo loyalty program
    -

    Conclusion

    -

    To sum up, Domino's APK is the best pizza delivery app that you can use to order pizza online with ease. It has a wide range of pizzas and other items to choose from, fast and fresh delivery, 100% satisfaction guarantee, and a rewarding loyalty program. It also has a user-friendly and convenient app that makes ordering pizza online a breeze. So, what are you waiting for? Download Domino's APK today and enjoy delicious pizza at your doorstep.

    -

    FAQs

    -

    Q1. Is Domino's APK safe and secure?

    -

    A1. Yes, Domino's APK is safe and secure to use. It uses encryption and other security measures to protect your personal and payment information. It also complies with all the privacy policies and regulations.

Q2. What are the payment options available on Domino's APK?

    A2. Domino's APK offers various payment options for your convenience. You can pay online with your credit card, debit card, net banking, UPI, or wallet. You can also pay cash on delivery or use a gift card or voucher.

    -

    Q3. How can I contact Domino's customer service?

    -

    A3. Domino's customer service is always ready to help you with any queries or issues you may have. You can contact them by calling the toll-free number 1800-103-6888 or by emailing them at guestcaredominos@jublfood.com. You can also chat with them on the app or on the website.

    -

    Q4. What are the minimum requirements for Domino's APK?

    -

A4. Domino's APK requires an Android device running version 4.4 or later, with at least 50 MB of free storage space. It also requires an internet connection and GPS access to function properly.

    -

    Q5. Can I order from Domino's APK in other countries?

    -

    A5. No, Domino's APK is only available for ordering pizza online in India. If you are in another country, you can use the website or the app of the local Domino's franchise to order pizza online.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Frozen City Mod APK 1.0.6 for Android - Free Purchase.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Frozen City Mod APK 1.0.6 for Android - Free Purchase.md deleted file mode 100644 index 6fcd411db4bf92d58cc6831434682cf6c5d87ce1..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Frozen City Mod APK 1.0.6 for Android - Free Purchase.md +++ /dev/null @@ -1,87 +0,0 @@ -
    -

    Frozen City Mod APK 1.0.6: A Survival Game in a Post-Apocalyptic World

    -

    Do you love survival games that challenge your skills and creativity? Do you want to experience a thrilling adventure in a frozen city where zombies and mutants roam? If yes, then you should try Frozen City mod APK 1.0.6, a modified version of the original game that gives you unlimited resources, free purchase, and no ads. In this article, we will tell you everything you need to know about this amazing game and how to download and install it on your Android device.

    -

    frozen city mod apk 1.0 6


    Download File ✦✦✦ https://urlin.us/2uST8c



    -

    Introduction

    -

    What is Frozen City?

    -

Frozen City is a survival game developed by Century Games Pte Ltd, where you have to build your shelter, scavenge for resources, craft weapons and tools, and fight against zombies and mutants in a post-apocalyptic world. The game is set in a city that has been frozen by a mysterious disaster, and you are one of the few survivors fighting to stay alive. You can explore the city, find other survivors, join clans, trade items, and complete quests. The game has a realistic physics system, dynamic weather, a day-night cycle, and stunning graphics.

    -

    What is a mod APK?

    -

    A mod APK is a modified version of an original APK (Android Package Kit) file, which is the format used to distribute and install applications on Android devices. A mod APK can have extra features, unlocked items, unlimited resources, or other advantages that are not available in the original version of the game or app. A mod APK can be created by anyone who has the skills and tools to modify the original APK file.

    -

    Why download Frozen City mod APK 1.0.6?

    -

    If you are a fan of Frozen City, you might want to download Frozen City mod APK 1.0.6 because it offers some benefits that can enhance your gaming experience. For example, you can enjoy free purchase, which means you can buy anything in the game without spending real money. You can also have unlimited resources, such as wood, metal, food, water, and energy, which are essential for building your shelter and crafting items. Moreover, you can play the game without any annoying ads that can interrupt your gameplay or consume your data.

    -

    frozen city mod apk 1.0 6 download
    -frozen city mod apk 1.0 6 unlimited money
    -frozen city mod apk 1.0 6 latest version
    -frozen city mod apk 1.0 6 free purchase
    -frozen city mod apk 1.0 6 android
    -frozen city mod apk 1.0 6 hack
    -frozen city mod apk 1.0 6 offline
    -frozen city mod apk 1.0 6 gameplay
    -frozen city mod apk 1.0 6 review
    -frozen city mod apk 1.0 6 update
    -frozen city mod apk 1.0 6 cheats
    -frozen city mod apk 1.0 6 no root
    -frozen city mod apk 1.0 6 obb
    -frozen city mod apk 1.0 6 online
    -frozen city mod apk 1.0 6 features
    -frozen city mod apk 1.0 6 tips
    -frozen city mod apk 1.0 6 guide
    -frozen city mod apk 1.0 6 tutorial
    -frozen city mod apk 1.0 6 install
    -frozen city mod apk 1.0 6 requirements
    -frozen city mod apk 1.0 6 size
    -frozen city mod apk 1.0 6 screenshots
    -frozen city mod apk 1.0 6 trailer
    -frozen city mod apk 1.0 6 video
    -frozen city mod apk 1.0 6 link
    -frozen city mod apk 1.0 6 mirror
    -frozen city mod apk 1.0 6 alternative
    -frozen city mod apk 1.0 6 happymod
    -frozen city mod apk 1.0 6 rexdl
    -frozen city mod apk 1.0 6 apkpure
    -frozen city mod apk 1.0 6 apkmody
    -frozen city mod apk 1.0 6 revdl
    -frozen city mod apk 1.0 6 an1
    -frozen city mod apk 1.0 6 andropalace
    -frozen city mod apk 1.0 6 mob.org
    -frozen city mod apk 1.0 6 androidrepublica
    -frozen city mod apk 1.0 6 blackmod.net
    -frozen city mod apk 1.0 6 platinmods.com
    -frozen city mod apk 1.0 6 androidoyun.club
    -frozen city mod apk

    -

    Features of Frozen City mod APK 1.0.6

    -

    Free purchase

    -

    With Frozen City mod APK 1.0.6, you can buy anything in the game for free, such as weapons, armor, vehicles, furniture, decorations, and more. You don't need to worry about running out of money or gems, as you can have unlimited amounts of them with this mod.

    -

    Unlimited resources

    -

    Another feature of Frozen City mod APK 1.0.6 is that it gives you unlimited resources that you need to survive in the frozen city. You can have unlimited wood, metal, food, water, and energy with this mod, which means you don't need to scavenge for them or wait for them to regenerate. You can use them to build your shelter, craft items, cook food, and power your devices.

    -

    No ads

    -

    Frozen City mod APK 1.0.6 also removes all the ads that can appear in the game from time to time. Ads can be annoying and distracting when you are playing a survival game that requires your attention and data. With Frozen City mod APK 1.0.6, you can enjoy the game without any interruptions or distractions.

    -

    High-quality graphics and sound

    -

    Frozen City mod APK 1.0.6 does not compromise the quality of the graphics and sound of the game. In fact, it enhances them by making them more realistic and immersive. You can admire the details of the frozen city, the weather effects, the lighting, and the shadows. You can also hear the sounds of the zombies, the mutants, the weapons, and the environment. Frozen City mod APK 1.0.6 will make you feel like you are in a real post-apocalyptic world.

    -

    How to download and install Frozen City mod APK 1.0.6

    -

    Step 1: Enable unknown sources

    -

    Before you can download and install Frozen City mod APK 1.0.6, you need to enable unknown sources on your Android device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.

    -

    Step 2: Download the mod APK file

    -

    Next, you need to download the mod APK file of Frozen City from a reliable source. You can use this link to download it: [Frozen City mod APK 1.0.6]. Make sure you have enough storage space on your device before downloading it.

    -

    Step 3: Install the mod APK file

    -

    After downloading the mod APK file, you need to install it on your device. To do this, locate the file in your downloads folder or wherever you saved it, and tap on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for it to finish.

    -

    Step 4: Enjoy the game

    -

    Once the installation is done, you can launch the game from your app drawer or home screen. You will see a new icon with the name Frozen City mod APK 1.0.6. Tap on it and enjoy the game with all its features.

    -

    Conclusion

    -

    Frozen City mod APK 1.0.6 is a great way to enjoy a survival game in a frozen city where zombies and mutants are your enemies. You can have free purchase, unlimited resources, no ads, and high-quality graphics and sound with this mod. You can also explore the city, find other survivors, join clans, trade items, and complete quests with this mod. If you want to download and install Frozen City mod APK 1.0.6 on your Android device, just follow the steps we have provided in this article.

    -

    FAQs

    -

    Here are some frequently asked questions about Frozen City mod APK 1.0.6:

    -
      -
    • Is Frozen City mod APK 1.0.6 safe to use?
    • -

      Yes, Frozen City mod APK 1.0.6 is safe to use as long as you download it from a trusted source and scan it with an antivirus before installing it.

      -
    • Does Frozen City mod APK 1.0.6 require root access?
    • -

      No, Frozen City mod APK 1.0.6 does not require root access to work on your device.

      -
    • Can I play Frozen City mod APK 1.0.6 online with other players?
    • -

      Yes, you can play Frozen City mod APK 1.0.6 online with other players who have the same version of the game.

      -
    • Can I update Frozen City mod APK 1.0.6 when a new version is released?
    • -

      No, you cannot update Frozen City mod APK 1.0.6 when a new version is released because it will overwrite the mod features and restore the original version of the game.

      -
    • Can I uninstall Frozen City mod APK 1.0.6 if I don't like it?
    • -

      Yes, you can uninstall Frozen City mod APK 1.0.6 if you don't like it or if it causes any problems on your device.

      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Clash Royale Bluestacks Play the Best Strategy Game on Your PC for Free.md b/spaces/1phancelerku/anime-remove-background/Clash Royale Bluestacks Play the Best Strategy Game on Your PC for Free.md deleted file mode 100644 index 9a48139ea75e5bb20b76f7b1da4ad47813cd958e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Clash Royale Bluestacks Play the Best Strategy Game on Your PC for Free.md +++ /dev/null @@ -1,117 +0,0 @@ - -

    How to Download and Play Clash Royale on Bluestacks

    -

    Clash Royale is one of the most popular and addictive mobile games in the world. It is a real-time strategy game where you collect cards, build decks, and battle other players online. You can join clans, chat with friends, unlock new cards, and earn chests full of rewards. But what if you want to play Clash Royale on a bigger screen, with better graphics, faster performance, and more control? That's where Bluestacks comes in.

    -

    Bluestacks is the best mobile gaming platform for PC and Mac. It lets you play thousands of Android games on your computer, with full keyboard and mouse support, custom settings, and advanced features. You can also stream your gameplay to Facebook or Twitch, record your screen, take screenshots, and more. With Bluestacks, you can enjoy playing Clash Royale on your PC or Mac like never before.

    -

    download clash royale bluestacks


    Download Ziphttps://jinyurl.com/2uNS92



    -

    In this article, we will show you how to download and install Bluestacks on your PC or Mac, and how to play Clash Royale on it. Follow these simple steps and get ready to clash!

    -

    Step 1: Download and install Bluestacks on your PC or Mac

    -

    The first thing you need to do is to download Bluestacks from its official website. You can choose from different versions of Bluestacks, depending on your operating system and Android preference. For example, you can download Bluestacks 5 for Windows 10 with Android 11, or Bluestacks 5 Nougat 64-bit for Mac. Make sure your PC or Mac meets the minimum system requirements for Bluestacks before downloading it.

    -

    Once you have downloaded the Bluestacks installer, run it and follow the instructions to install it on your PC or Mac. You can choose the default location for installation or change it to a different drive. The installation process may take a few minutes, depending on your internet speed and computer performance.

    -

    Step 2: Launch Bluestacks and sign in with your Google account

    -

    After installing Bluestacks, launch it from your desktop or start menu. You will see a window like this:

    -

    How to play Clash Royale PC with Bluestacks Emulator
    -Clash Royale Bluestacks script for automating moves
    -Download Clash Royale PC for Windows & Mac (May 2023)
    -How to get Clash Royale (and other supercell games) on Bluestacks
    -Clash Royale Bluestacks settings for optimal performance
    -Clash Royale Bluestacks vs other Android emulators
    -How to install APK pure on Bluestacks for Clash Royale
    -Clash Royale Bluestacks stream feature for Facebook and Twitch
    -How to update Clash Royale on Bluestacks
    -Clash Royale Bluestacks keyboard controls and shortcuts
    -How to fix Clash Royale Bluestacks black screen issue
    -How to transfer Clash Royale account from Bluestacks to phone
    -How to play Clash Royale on Bluestacks offline mode
    -How to use cheat engine on Clash Royale Bluestacks
    -How to record Clash Royale gameplay on Bluestacks
    -How to change language on Clash Royale Bluestacks
    -How to play Clash Royale on Bluestacks with friends
    -How to sync Clash Royale progress between Bluestacks and Google Play
    -How to uninstall Clash Royale from Bluestacks
    -How to download Clash Royale mod apk on Bluestacks
    -How to play Clash Royale on Bluestacks without lag
    -How to run multiple instances of Clash Royale on Bluestacks
    -How to enter and exit shooting mode in Clash Royale Bluestacks
    -How to create and run a script for Clash Royale Bluestacks
    -How to customize CPU, RAM and resolution for Clash Royale Bluestacks
    -How to download and install BlueStacks 3.0 for Clash Royale PC
    -How to use BlueStacks macro recorder for Clash Royale PC
    -How to play Clash Royale on BlueStacks with mouse and keyboard
    -How to use BlueStacks multi-instance manager for Clash Royale PC
    -How to enable high FPS mode for Clash Royale on BlueStacks
    -How to use BlueStacks smart controls for Clash Royale PC
    -How to use BlueStacks gamepad support for Clash Royale PC
    -How to use BlueStacks eco mode for Clash Royale PC
    -How to use BlueStacks farm mode for Clash Royale PC
    -How to use BlueStacks sync feature for Clash Royale PC
    -How to use BlueStacks app center for Clash Royale PC
    -How to use BlueStacks app player settings for Clash Royale PC
    -How to use BlueStacks cloud connect for Clash Royale PC
    -How to use BlueStacks media manager for Clash Royale PC
    -How to use BlueStacks screenshot tool for Clash Royale PC
    -How to use BlueStacks location tool for Clash Royale PC
    -How to use BlueStacks shake tool for Clash Royale PC
    -How to use BlueStacks rotate tool for Clash Royale PC
    -How to use BlueStacks zoom tool for Clash Royale PC
    -How to use BlueStacks notification center for Clash Royale PC
    -How to use BlueStacks help center for Clash Royale PC
    -How to use BlueStacks feedback tool for Clash Royale PC
    -How to use BlueStacks reward center for Clash Royale PC

    - Bluestacks home screen -

    Here, you need to sign in with your Google account to access the Google Play Store and other Google services. If you don't have a Google account yet, you can create one here. Signing in with your Google account will also sync your game progress and purchases across devices.

    -

    Step 3: Search for Clash Royale in the Google Play Store and install it

    -

    Now that you have signed in with your Google account, you can search for Clash Royale in the Google Play Store app on Bluestacks. You can find the app icon on the home screen or in the app center. Click on it to open it.

    -

    In the Google Play Store app, type "Clash Royale" in the search bar and hit enter. You will see a list of results like this:

    - Clash Royale search results -

    Click on the first result that says "Clash Royale" by Supercell. This will take you to the game's page in the Google Play Store. Here, you can see more information about the game, such as its description, screenshots, reviews, ratings, etc.

    -

    To install Clash Royale on Bluestacks, click on the green "Install" button. This will start downloading and installing the game on your PC or Mac. The process may take a few minutes, depending on your internet speed.

    -

    Step 4: Enjoy playing Clash Royale on your PC or Mac with Bluestacks

    -

    Congratulations! You have successfully installed Clash Royale on Bluestacks. Now you can enjoy playing the game on your PC or Mac with a bigger screen, better graphics, faster performance, and more control. You can also use the Bluestacks features to enhance your gaming experience, such as:

    -
      -
    • Customize your keyboard and mouse controls to suit your play style. You can use the game guide to see the default controls or change them as you wish.
    • -
    • Use the multi-instance feature to play multiple accounts of Clash Royale at the same time. You can also switch between different instances easily with the multi-instance manager.
    • -
    • Use the macro feature to record and execute complex actions with a single keystroke. You can also edit and share your macros with other players.
    • -
    • Use the eco mode to reduce CPU and RAM usage and improve battery life. You can also enable or disable notifications, sound, and background apps.
    • -
    -

    With Bluestacks, you can take your Clash Royale gameplay to the next level. You can also explore other games in the Bluestacks app center, such as Clash of Clans, Brawl Stars, PUBG Mobile, and more.

    -

    Conclusion

    -

    In this article, we have shown you how to download and play Clash Royale on Bluestacks, the best mobile gaming platform for PC and Mac. We have also explained the benefits of playing Clash Royale on Bluestacks and how to use its features to enhance your gaming experience. We hope you found this article helpful and informative.

    -

If you are a fan of Clash Royale or any other mobile game, we highly recommend trying out Bluestacks. It is free, easy, and fun to use. You can download it from the official Bluestacks website and start playing your favorite games on your PC or Mac today.

    -

    Thank you for reading this article. If you have any questions or feedback, please leave them in the comments section below. We would love to hear from you. Happy clashing!

    -

    FAQs

    -

    Q: Is Bluestacks safe to use?

    -

    A: Yes, Bluestacks is safe to use. It is a legitimate software that has been downloaded by millions of users worldwide. It does not contain any malware, viruses, or spyware. It also does not access or modify any of your personal data or files.

    -

    Q: Is Bluestacks free to use?

    -

A: Yes, Bluestacks is free to use. You can download and install it on your PC or Mac without paying anything. You can also play any game on it without any limitations or restrictions. However, some games may include in-app purchases that cost real money, and some may show ads.

    -

    Q: How do I update Clash Royale on Bluestacks?

    -

    A: To update Clash Royale on Bluestacks, you need to follow these steps:

    -
      -
    1. Open the Google Play Store app on Bluestacks.
    2. -
    3. Click on the menu icon (three horizontal lines) on the top left corner.
    4. -
    5. Select "My apps & games" from the menu.
    6. -
    7. Find Clash Royale in the list of installed apps and click on "Update".
    8. -
    9. Wait for the update to finish and launch the game.
    10. -
    -

    Q: How do I transfer my Clash Royale account from my phone to Bluestacks?

    -

    A: To transfer your Clash Royale account from your phone to Bluestacks, you need to follow these steps:

    -
      -
    1. On your phone, open Clash Royale and go to the settings menu (gear icon).
    2. -
    3. Select "Link a device" and then "This is the old device".
    4. -
    5. Select "I want to link to another device" and then "Android device".
    6. -
    7. You will see a code that is valid for two minutes.
    8. -
    9. On Bluestacks, open Clash Royale and go to the settings menu (gear icon).
    10. -
    11. Select "Link a device" and then "This is the new device".
    12. -
    13. Enter the code from your phone and confirm.
    14. -
    15. Your account will be transferred to Bluestacks.
    16. -
    -

    Q: How do I contact Bluestacks support?

    -

    A: If you have any issues or problems with Bluestacks, you can contact their support team by following these steps:

    -
      -
    1. Open Bluestacks and click on the menu icon (three horizontal lines) on the top right corner.
    2. Select "Help and Support" from the menu.
    3. -
    4. You will see a list of topics and articles that may help you solve your issue.
    5. -
    6. If you still need assistance, click on the "Report a Problem" button at the bottom of the page.
    7. -
    8. Fill out the form with your name, email, description of the problem, and any attachments.
    9. -
    10. Click on the "Submit" button and wait for a response from the Bluestacks support team.
    11. -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Downloader How to Boost Your Download Speeds and Manage Your Files.md b/spaces/1phancelerku/anime-remove-background/Download Downloader How to Boost Your Download Speeds and Manage Your Files.md deleted file mode 100644 index f986a6aa849d33608e5c824006656739b8638f2f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Downloader How to Boost Your Download Speeds and Manage Your Files.md +++ /dev/null @@ -1,92 +0,0 @@ - -

    Download Downloader: What Is It and Why Do You Need It?

    -

    If you frequently download files from the internet, you know how frustrating it can be to deal with slow speeds, broken links, timeouts, and other issues. That's why you need a download manager, also known as a download downloader. A download manager is a software tool that helps you manage your downloads more efficiently and effectively. It can boost your download speed, resume interrupted downloads, organize your files, convert formats, and more. In this article, we will show you how to choose the best download manager for your needs, review the top 5 free download managers of 2023, and give you some tips on how to use them effectively.

    -

    download downloader


    Download Filehttps://jinyurl.com/2uNUGu



    -

    How to Choose the Best Download Manager for Your Needs

    -

    There are many download managers available on the market, but not all of them are created equal. Some may have more features than others, some may be more compatible with your device or browser, some may be more secure or user-friendly. Here are some factors to consider when selecting a download manager:

    -
      -
• Speed: One of the main reasons to use a download manager is to increase your download speed. A good download manager should be able to accelerate your downloads by using multiple connections, splitting files into smaller chunks, and optimizing your bandwidth (a minimal sketch of this range-request technique appears right after this list).
    • -
    • Features: Another reason to use a download manager is to access more features than your browser's default downloader. A good download manager should be able to support various file types, protocols, and sources, such as HTTP, FTP, BitTorrent, YouTube, etc. It should also be able to preview files before downloading them, resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, convert formats if needed, and integrate with your browser or antivirus software.
    • -
    • Compatibility: A good download manager should be compatible with your device and browser. It should be able to run smoothly on your operating system (Windows, Mac OS X, Linux), whether it's desktop or mobile. It should also be able to work with your preferred browser (Chrome, Firefox, Edge), whether it's through an extension or a standalone app.
    • -
    • Security: A good download manager should be secure and reliable. It should be able to scan files for viruses or malware before downloading them. It should also be able to protect your privacy by encrypting your data or using proxy servers if needed.
    • -
    -
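
The speed boost described in the list above comes from an ordinary HTTP feature: the client asks the server how big the file is, requests several byte ranges at the same time, and then stitches the pieces back together. The snippet below is only a minimal Python sketch of that idea (it assumes the third-party requests library is installed, uses a placeholder URL, and expects the server to honour Range requests); real download managers add retries, integrity checks, and bandwidth throttling on top.

```python
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/big-file.zip"   # placeholder URL, not a real download link
PARTS = 4                                  # number of parallel connections

def fetch_range(byte_range):
    start, end = byte_range
    # Ask the server for just this slice of the file.
    resp = requests.get(URL, headers={"Range": f"bytes={start}-{end}"}, timeout=30)
    resp.raise_for_status()
    return start, resp.content

def download_in_parts(path="big-file.zip"):
    # Find the total size without downloading the body.
    size = int(requests.head(URL, timeout=30).headers["Content-Length"])
    step = size // PARTS
    ranges = [(i * step, size - 1 if i == PARTS - 1 else (i + 1) * step - 1)
              for i in range(PARTS)]
    # Fetch all slices concurrently, then write them back in their original order.
    with ThreadPoolExecutor(max_workers=PARTS) as pool:
        pieces = dict(pool.map(fetch_range, ranges))
    with open(path, "wb") as f:
        for start, _ in ranges:
            f.write(pieces[start])

if __name__ == "__main__":
    download_in_parts()
```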

    The Top 5 Free Download Managers of 2023

    -

    Download Accelerator Plus

    -

Download Accelerator Plus (DAP) is one of the most popular download managers on the market. It has over 300 million users worldwide and boasts impressive speeds up to 400% faster than regular downloads. It also has a built-in media file previewer that lets you watch videos or listen to music before downloading them. DAP supports various protocols and sources, such as HTTP, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and antivirus software for seamless downloading. DAP is free to use, but you can upgrade to a premium version for more features and benefits.

    -

    Ninja Download Manager

    -

    Ninja Download Manager (NDM) is a powerful and well-designed download manager for media files. It has a sleek and intuitive interface that lets you manage your downloads easily and efficiently. NDM can accelerate your downloads by using multiple connections and smart logic. It can also resume broken downloads, schedule downloads for later times, organize downloads into categories, and convert formats if needed. NDM supports various protocols and sources, such as HTTP, HTTPS, FTP, YouTube, etc. It also integrates with your browser and clipboard for convenient downloading. NDM is free to use, but you can upgrade to a pro version for more features and benefits.

    -

    download manager
    -download accelerator
    -download speed booster
    -download video from youtube
    -download music from spotify
    -download files from google drive
    -download games for pc
    -download ebooks for free
    -download pdf converter
    -download antivirus software
    -download resume templates
    -download fonts for word
    -download wallpapers for desktop
    -download subtitles for movies
    -download podcasts for offline listening
    -download instagram stories
    -download tiktok videos
    -download netflix shows
    -download whatsapp status
    -download zoom app
    -download windows 10 iso
    -download android emulator
    -download chrome browser
    -download firefox browser
    -download opera browser
    -download tor browser
    -download vpn for pc
    -download torrent client
    -download utorrent downloader
    -download bittorrent downloader
    -download magnet link downloader
    -download youtube downloader hd
    -download youtube downloader mp3
    -download youtube downloader mp4
    -download facebook video downloader
    -download twitter video downloader
    -download vimeo video downloader
    -download dailymotion video downloader
    -download soundcloud music downloader
    -download bandcamp music downloader
    -download spotify music downloader
    -download amazon music downloader
    -download apple music downloader
    -download deezer music downloader
    -download tidal music downloader
    -download audiomack music downloader
    -download mixcloud music downloader
    -download internet archive downloader

    -

    Free Download Manager

    -

    Free Download Manager (FDM) is a versatile and user-friendly download manager with BitTorrent support. It has a simple and clean interface that lets you manage your downloads easily and efficiently. FDM can accelerate your downloads by using multiple connections and splitting files into smaller chunks. It can also resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, and convert formats if needed. FDM supports various protocols and sources, such as HTTP, HTTPS, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and antivirus software for seamless downloading. FDM is free and open-source, but you can donate to support the developers.

    -

    JDownloader

    -

    JDownloader is a feature-rich and customizable download manager with remote control. It has a complex and advanced interface that lets you manage your downloads in detail and with flexibility. JDownloader can accelerate your downloads by using multiple connections and splitting files into smaller chunks. It can also resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, and convert formats if needed. JDownloader supports various protocols and sources, such as HTTP, HTTPS, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and clipboard for convenient downloading. JDownloader is free and open-source, but you can buy a premium account for more features and benefits.

    -

    Internet Download Manager

    -

    Internet Download Manager (IDM) is a fast and reliable download manager with browser integration. It has a simple and classic interface that lets you manage your downloads easily and efficiently. IDM can accelerate your downloads by using multiple connections and dynamic file segmentation. It can also resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, and convert formats if needed. IDM supports various protocols and sources, such as HTTP, HTTPS, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and antivirus software for seamless downloading. IDM is not free to use, but you can try it for 30 days before buying it.

    -

    How to Use a Download Manager Effectively

    -

    Now that you have learned about the best download managers of 2023, you may wonder how to use them effectively to optimize your download experience. Here are some tips and tricks on how to do that:

    -
      -
    • Schedule your downloads: If you have a lot of files to download or if you want to save bandwidth or battery life, you can schedule your downloads for later times when you are not using your device or when the internet connection is better.
    • -
    • Organize your downloads: If you have a lot of files to download or if you want to find them easily later on, you can organize your downloads into folders or categories based on their type, source, date, etc.
    • -
• Resume your downloads: If your download is interrupted by an error or a power outage, or if you pause it for some reason, you can resume it from where it left off without losing any data or time (see the sketch after this list).
    • -
    • Convert your downloads: If your download is in a format that is not compatible with your device or player or if you want to reduce its size or quality, you can convert it to another format that suits your needs.
    • -
    -
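
The resume tip above relies on the same HTTP Range mechanism: check how many bytes are already on disk and ask the server to send only the rest. Here is a rough Python sketch of the idea, assuming the third-party requests library, a server that honours Range requests, and an intact partial file (the URL in the example is a placeholder):

```python
import os
import requests

def resume_download(url, path):
    # Start from however many bytes are already on disk.
    have = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {"Range": f"bytes={have}-"} if have else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        # Append to the existing partial file instead of overwriting it.
        with open(path, "ab") as f:
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                f.write(chunk)

# Example call with a placeholder URL:
# resume_download("https://example.com/big-file.zip", "big-file.zip")
```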

    Conclusion

    -

    A download manager is a software tool that helps you manage your downloads more efficiently and effectively. It can boost your download speed, resume interrupted downloads, organize your files, convert formats, and more. In this article, we have shown you how to choose the best download manager for your needs, reviewed the top 5 free download managers of 2023, and given you some tips on how to use them effectively. We hope you have found this article helpful and informative. If you want to try out a download manager for yourself, you can download one of the options we have mentioned above or search for other alternatives online. You will be amazed by how much easier and faster your download experience will be with a download manager. Happy downloading!

    -

    FAQs

    -

    Here are some frequently asked questions about download managers:

    -
      -
    1. What is the difference between a download manager and a torrent client?
A download manager is a software tool that helps you download files from various sources and protocols, such as HTTP, FTP, YouTube, etc. A torrent client is a software tool that downloads files via BitTorrent, a peer-to-peer protocol in which a network of users shares the file among themselves.
    2. -
    3. Are download managers safe to use?
      Download managers are generally safe to use, as long as you download them from reputable sources and scan them for viruses or malware before installing them. However, you should also be careful about the files you download with them, as some of them may contain harmful or illegal content. Always check the file name, size, type, and source before downloading it.
    4. -
    5. Do download managers work with all browsers?
      Most download managers work with all major browsers, such as Chrome, Firefox, Edge, etc. However, some of them may require an extension or a plugin to integrate with your browser. You can check the compatibility of your download manager with your browser on its official website or in its settings.
    6. -
    7. Do download managers use more bandwidth or data?
      Download managers may use more bandwidth or data than regular downloads, as they use multiple connections and split files into smaller chunks to accelerate your downloads. However, this also depends on your internet speed, file size, and source. You can limit the bandwidth or data usage of your download manager in its settings if needed.
    8. -
    9. How can I uninstall a download manager?
      You can uninstall a download manager like any other software on your device. You can go to your control panel or settings and look for the option to uninstall programs or apps. You can then select your download manager and follow the instructions to remove it from your device.
    10. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Table No. 21 Full Movie in 720p HD Quality from Filmyzilla.md b/spaces/1phancelerku/anime-remove-background/Download Table No. 21 Full Movie in 720p HD Quality from Filmyzilla.md deleted file mode 100644 index cd180cc76ad8b0eea5c88330d2ccc8a9b383b640..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Table No. 21 Full Movie in 720p HD Quality from Filmyzilla.md +++ /dev/null @@ -1,337 +0,0 @@ -
    -

    Table No. 21 Full Movie Download Filmyzilla 720p: A Thrilling and Illegal Adventure

    -

    If you are looking for a movie that will keep you on the edge of your seat, you might be tempted to download Table No. 21 full movie from Filmyzilla, a website that offers free downloads of pirated movies and shows. But before you do that, you should know what you are getting into and why it is not a good idea.

    -

    table no 21 full movie download filmyzilla 720p


    Download Ziphttps://jinyurl.com/2uNTML



    -

    What is Table No. 21?

    -

    Table No. 21 is a 2013 Hindi thriller movie starring Paresh Rawal, Rajeev Khandelwal and Tina Desai. It is named after Article 21 of the Indian Constitution, which talks about the protection of life and personal liberty. The movie touches upon the pertinent social issue of ragging, or bullying in college campuses.

    -

    A brief summary of the plot

    -

    The movie follows Vivaan and Siya, a married couple who struggle to make ends meet. They win a trip to Fiji in a lucky draw, where they meet Mr. Khan, a mysterious and charming man who invites them to participate in a live game show called Table No. 21. He tells them that the winner of the game will get a whopping amount of ₹210 million as prize money. The rules are simple: they have to answer eight personal questions truthfully and complete a task related to each question. However, as the game progresses, the questions and tasks become increasingly horrific and reveal dark secrets from their past. They soon realize that they are trapped in a deadly game of survival with no escape.

    -

    The cast and crew of the movie

    -

    The movie is directed by Aditya Datt and produced by Eros International. The screenplay is written by Shantanu Ray Chhibber and Sheershak Anand, based on their own story. The music is composed by Gajendra Verma, Neeraj Shridhar and Sachin Gupta.

    -

    table no 21 full movie online free watch hd
    -table no 21 hindi movie download 720p filmywap
    -table no 21 thriller film streaming on zee5
    -table no 21 paresh rawal movie download mp4
    -table no 21 rajeev khandelwal movie watch online
    -table no 21 full movie free download in hindi
    -table no 21 2013 movie download 480p filmyzilla
    -table no 21 adventure movie online on jiocinema
    -table no 21 tina desai movie download torrent
    -table no 21 full movie hd quality download
    -table no 21 hindi thriller film watch online free
    -table no 21 movie download link filmyzilla
    -table no 21 full movie online with english subtitles
    -table no 21 aditya datt movie download pagalworld
    -table no 21 full movie streaming on netflix
    -table no 21 hindi movie watch online hd quality
    -table no 21 movie download in hindi 720p filmyhit
    -table no 21 full movie online on youtube
    -table no 21 paresh rawal thriller film download
    -table no 21 rajeev khandelwal movie online free
    -table no 21 full movie download filmyzilla hd
    -table no 21 hindi movie online on amazon prime video
    -table no 21 tina desai movie watch online hd
    -table no 21 full movie download in hindi filmyzilla
    -table no 21 adventure thriller film online free
    -table no 21 movie download filmyzilla mp4 hd
    -table no 21 full movie online on hotstar
    -table no 21 hindi movie download filmyzilla.com
    -table no 21 paresh rawal movie online hd quality
    -table no 21 rajeev khandelwal thriller film download
    -table no 21 full movie watch online free filmyzilla
    -table no 21 hindi movie streaming on mx player
    -table no 21 tina desai adventure film download
    -table no 21 full movie download in hindi mp4moviez
    -table no 21 thriller film watch online hd quality
    -table no 21 movie download filmyzilla in hindi hd
    -table no 21 full movie online on voot
    -table no 21 hindi movie download filmyzilla.in
    -table no 21 paresh rawal adventure film online free
    -table no 21 rajeev khandelwal movie download hd quality

    -

    The main cast of the movie are:

    -
      -
    • Paresh Rawal as Abdul Razaq Khan, the host of the game show
    • -
    • Rajeev Khandelwal as Vivaan Agasthi, one of the contestants
    • -
    • Tina Desai as Siya Agasthi, Vivaan's wife and another contestant
    • -
    • Dhruv Ganesh as Akram Khan, Mr. Khan's son who was ragged by Vivaan and his friends in college
    • -
    • Asheesh Kapur as Bittoo, one of Vivaan's friends
    • -
    • Sana Amin Sheikh as Neeti, one of Siya's friends
    • -
    • Hanif Hilal as Ghouse, Mr. Khan's bodyguard
    • -
    -

    The critical reception and box office performance

    -

    The movie received mixed to positive reviews from critics and audiences alike. It was praised for its gripping plot, suspenseful twists, powerful performances, and social message. However, it was also criticized for its violence, implausible scenarios, and lack of originality.

    -

    The movie performed above average at the box office, earning ₹177.95 million against a budget of ₹85 million.

    -

    What is Filmyzilla?

    -


    Filmyzilla is a notorious website that provides free downloads of pirated movies and shows from Bollywood, Hollywood, Tollywood, and other regional film industries. It is one of the most popular and visited websites for movie piracy in India and across the world.

    -

    A notorious website for pirating movies and shows

    -

    Filmyzilla has been operating for several years and has a huge collection of movies and shows in various languages, genres, and formats. It uploads the latest releases within hours or days of their theatrical or digital premiere, often in high quality. It also offers old and classic movies, as well as dubbed and subbed versions of foreign movies.

    -

    Filmyzilla is an illegal website that violates the Indian and international laws on copyright and intellectual property rights. It hosts and distributes the pirated content without the permission or consent of the original creators or owners. It also generates revenue from advertisements and pop-ups that may contain malware or viruses.

    -

    The categories and formats of movies available on Filmyzilla

    -

    Filmyzilla has a user-friendly interface that allows users to browse and download movies and shows according to their preferences. It has various categories such as:

    -
      -
    • Bollywood Movies
    • -
    • Hollywood Movies
    • -
    • Hollywood Hindi Dubbed Movies
    • -
    • South Indian Hindi Dubbed Movies
    • -
    • Punjabi Movies
    • -
    • Bengali Movies
    • -
    • Tamil Movies
    • -
    • Telugu Movies
    • -
    • Malayalam Movies
    • -
    • Marathi Movies
    • -
    • Gujarati Movies
    • -
    • Kannada Movies
    • -
    • Urdu Movies
    • -
    • Pakistani Movies
    • -
    • Nepali Movies
    • -
    • Bhojpuri Movies
    • -
    • Web Series
    • -
    • TV Shows
    • -
    • Awards Shows
    • -
    • Documentaries
    • -
    • Anime
    • -
    • Cartoons
    • -
    -

    Filmyzilla also offers different formats and qualities of movies and shows such as:

    -
      -
    • MP4
    • -
    • MKV
    • -
    • AVI
    • -
    • WEBM
    • -
    • 3GP
    • -
    • 360p
    • -
    • 480p
    • -
    • 720p
    • -
    • 1080p
    • -
    • HDRip
    • -
    • DVDRip
    • -
    • BluRay
    • -
    • DVDScr
    • CamRip
    • -
    • PreDVDRip
    • -
    -

    The latest movies leaked by Filmyzilla

    -

    Filmyzilla is notorious for leaking the latest movies and shows from various film industries. Some of the recent movies that have been leaked by Filmyzilla are:

    -
      -
    • Bell Bottom
    • -
    • Shershaah
    • -
    • Bhuj: The Pride of India
    • -
    • Mimi
    • -
    • Fast and Furious 9
    • -
    • Black Widow
    • -
    • The Suicide Squad
    • -
    • Jungle Cruise
    • -
    • Loki
    • -
    • The Family Man Season 2
    • -
    • Mirzapur Season 2
    • -
    • Scam 1992
    • -
    • Money Heist Season 4
    • -
    • Extraction
    • -
    • Tenet
    • -
    -

    How to download Table No. 21 full movie from Filmyzilla?

    -

    If you are still interested in downloading Table No. 21 full movie from Filmyzilla, you should know that it is not an easy or safe process. You will have to face many risks and challenges along the way, and you may also face legal consequences for your actions. Here are the steps to download the movie from Filmyzilla:

    -

    The steps to access and download the movie

    -
      -
    1. First, you will need a VPN (Virtual Private Network) service to bypass the geo-restrictions and access the Filmyzilla website. A VPN will also protect your online identity and privacy from hackers and trackers.
    2. -
    3. Next, you will need to find a working domain name of Filmyzilla, as the website keeps changing its domain name to avoid detection and blocking by the authorities. Some of the common domain names of Filmyzilla are filmyzilla.com, filmyzilla.in, filmyzilla.net, filmyzilla.vip, filmyzilla.pro, filmyzilla.me, filmyzilla.co.in, filmyzilla.live, etc.
    4. -
    5. Once you find a working domain name, you will need to enter it in your browser and access the Filmyzilla website. You will see a lot of advertisements and pop-ups on the website, which may redirect you to other websites or download unwanted software on your device. You will have to close them or avoid clicking on them.
    6. -
    7. Then, you will need to search for Table No. 21 full movie on the website using the search bar or the categories. You will see a list of results with different formats and qualities of the movie. You will have to choose the one that suits your preference and device compatibility.
    8. -
    9. After that, you will need to click on the download link or button of the movie. You may have to go through some verification processes or captcha tests before you can start the download. You may also see some fake download links or buttons that may lead you to other websites or download malware on your device. You will have to be careful and avoid them.
    10. -
    11. Finally, you will need to wait for the download to complete and then enjoy watching Table No. 21 full movie on your device.
    12. -

    The risks and challenges of downloading from Filmyzilla

    -

    Downloading Table No. 21 full movie from Filmyzilla may seem like a convenient and cost-effective option, but it comes with many risks and challenges that may ruin your experience and cause you trouble. Some of the risks and challenges are:

    -
      -
    • You may download a corrupted or incomplete file that may not play properly or damage your device.
    • -
    • You may download a file that contains malware or viruses that may infect your device and compromise your data and security.
    • -
    • You may face slow download speeds, frequent interruptions, or low-quality videos due to the high traffic and low bandwidth of the website.
    • -
    • You may expose your online activity and identity to hackers and trackers who may monitor your browsing history, IP address, location, and personal information.
    • -
    • You may violate the terms and conditions of your internet service provider (ISP) and face penalties such as throttling, suspension, or termination of your service.
    • -
    -

    The legal consequences of movie piracy in India

    -

    Downloading Table No. 21 full movie from Filmyzilla is not only risky and challenging, but also illegal and punishable by law. Movie piracy is a serious crime in India that violates the Cinematograph Act of 1952, the Information Technology Act of 2000, and the Indian Penal Code of 1860. According to these laws, anyone who downloads, uploads, streams, distributes, or exhibits pirated movies or shows without the authorization of the rightful owners can face the following legal consequences:

    -
      -
    • A fine of up to ₹10 lakh or three times the value of the pirated content, whichever is higher.
    • -
    • A jail term of up to three years.
    • -
    • A civil lawsuit by the original creators or owners for damages and compensation.
    • -
    • A criminal case by the government for violating the national interest and security.
    • -
    -

    Why you should avoid downloading Table No. 21 from Filmyzilla?

    -

    By now, you should have realized that downloading Table No. 21 full movie from Filmyzilla is not worth it. It is a bad idea that will not only harm you, but also the film industry and the artists who work hard to create quality content for you. Here are some reasons why you should avoid downloading Table No. 21 from Filmyzilla:

    -

    The ethical and moral issues of supporting piracy

    -

When you download Table No. 21 full movie from Filmyzilla, you are supporting piracy, which is an unethical and immoral act. Piracy is a form of theft that deprives the original creators and owners of their rightful earnings and recognition. It also disrespects their artistic vision and hard work. By downloading pirated movies, you are encouraging more piracy and discouraging creativity. You are also depriving yourself of the authentic and enjoyable experience of watching movies in theatres or on legal platforms.

    The impact of piracy on the film industry and the artists

    -

    When you download Table No. 21 full movie from Filmyzilla, you are also affecting the film industry and the artists who depend on it for their livelihood. Piracy causes huge losses to the producers, distributors, exhibitors, and other stakeholders of the film industry. According to a report by Ernst & Young, the Indian film industry lost ₹189.5 billion in 2018 due to piracy. Piracy also affects the quality and quantity of movies that are made, as it reduces the incentive and resources for filmmakers to invest in new projects. Piracy also deprives the artists of their fair share of revenue and appreciation, which may demotivate them and affect their career prospects.

    -

    The alternatives to watch Table No. 21 legally and safely

    -

    Instead of downloading Table No. 21 full movie from Filmyzilla, you should opt for legal and safe alternatives to watch the movie. There are many platforms that offer Table No. 21 for online streaming or download at a reasonable price. Some of them are:

    -
      -
    • Eros Now: This is the official platform of Eros International, the producer of Table No. 21. You can watch the movie on Eros Now with a subscription plan that starts from ₹49 per month. You can also download the movie for offline viewing on your device.
    • -
    • YouTube: This is the most popular and accessible platform for watching movies and shows online. You can rent or buy Table No. 21 on YouTube for ₹25 or ₹50 respectively. You can also download the movie for offline viewing on your device.
    • -
    • Google Play Movies: This is another platform that allows you to rent or buy movies and shows online. You can rent or buy Table No. 21 on Google Play Movies for ₹25 or ₹50 respectively. You can also download the movie for offline viewing on your device.
    • -
    • Amazon Prime Video: This is one of the leading platforms for streaming movies and shows online. You can watch Table No. 21 on Amazon Prime Video with a subscription plan that starts from ₹129 per month or ₹999 per year. You can also download the movie for offline viewing on your device.
    • -
    -

    By choosing these alternatives, you will not only enjoy watching Table No. 21 in high quality and without any interruptions, but also support the film industry and the artists who deserve your respect and admiration.

    -

    Conclusion

    -

    Table No. 21 is a thrilling and engaging movie that will keep you hooked till the end. It is a movie that deserves to be watched legally and safely, not illegally and riskily. Downloading Table No. 21 full movie from Filmyzilla is a bad idea that will expose you to many dangers and troubles, as well as harm the film industry and the artists who work hard to entertain you. Therefore, you should avoid downloading Table No. 21 from Filmyzilla and opt for legal and safe alternatives to watch the movie.

    -

    FAQs

    -

    Here are some frequently asked questions about Table No. 21 and Filmyzilla:

    -
      -
    1. Is Table No. 21 based on a true story?
    2. -

      No, Table No. 21 is not based on a true story, but it is inspired by Article 21 of the Indian Constitution, which talks about the protection of life and personal liberty.

      -
    3. What is the meaning of Table No. 21?
    4. -

      Table No. 21 is the name of the game show that Mr. Khan hosts in the movie. It is also a reference to Article 21 of the Indian Constitution, which is violated by Mr. Khan in his quest for revenge.

      -
    5. What is ragging and why is it an issue in India?
    6. -

      Ragging is a form of bullying that involves physical, mental, or sexual abuse of new or junior students by senior students in educational institutions. It is an issue in India because it causes many cases of harassment, humiliation, injury, suicide, and murder among students every year.

      -
    7. How does Filmyzilla get access to new movies?
    8. -

      Filmyzilla gets access to new movies by using various sources such as camcorders, screen recorders, hacked servers, leaked copies, etc. It then uploads them on its website or shares them with other websites.

      -
    9. How can I report or block Filmyzilla?
    10. -

      You can report or block Filmyzilla by contacting your ISP, cybercrime cell, or anti-piracy cell and providing them with the details of the website. You can also use software or extensions that block access to pirated websites.

      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Treasure Mathstorm and Join the Super Solvers in an Amazing Adventure.md b/spaces/1phancelerku/anime-remove-background/Download Treasure Mathstorm and Join the Super Solvers in an Amazing Adventure.md deleted file mode 100644 index 386139dc37e49f3eff19d131b24763c7600ce76e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Treasure Mathstorm and Join the Super Solvers in an Amazing Adventure.md +++ /dev/null @@ -1,152 +0,0 @@ -
    -

    How to Download Treasure Mathstorm: A Fun and Educational Game for Kids

    -

    Do you want to help your kids learn math in a fun and engaging way? Do you want to introduce them to a classic educational game that has entertained and challenged millions of children around the world? If you answered yes, then you should download Treasure Mathstorm, a game that combines math, adventure, and humor in a delightful way.

    -

    download treasure mathstorm


    DOWNLOAD »»» https://jinyurl.com/2uNUeR



    -

    Treasure Mathstorm is an educational game designed for kids ages 6 to 8. It was developed by The Learning Company in 1992 and it is part of the Super Solvers series. In this game, you have to help the elves restore Treasure Mountain by solving math problems and finding treasures. Along the way, you will encounter various obstacles, puzzles, and surprises that will make your journey more exciting.

    -

    In this article, we will tell you everything you need to know about Treasure Mathstorm, including what it is, how to download it, and how to play it. We will also share some tips and tricks to help you get the most out of this game. So, let's get started!

    -

    What is Treasure Mathstorm?

    -

    Treasure Mathstorm is an educational game that teaches kids various math skills and concepts in a fun and interactive way. It is suitable for kids who are in grades 1 to 3 or who have a basic knowledge of arithmetic. The game covers topics such as addition, subtraction, multiplication, division, fractions, decimals, time, money, measurement, geometry, logic, and problem-solving.

    -

    The story and the goal of the game

    -

    The story of Treasure Mathstorm is that the Master of Mischief, a villain who likes to cause trouble, has invented a machine that changes the weather and freezes Treasure Mountain. He has also hidden all the treasures on the mountain and locked them with math problems. Your goal is to restore the mountain by locating different treasures on the mountain and returning them to the castle at the top. When all the treasures have been restored, the king will have his power back and all of the ice will melt.


    The math skills and concepts covered in the game


    The math skills and concepts covered in Treasure Mathstorm are divided into three levels of difficulty: easy, medium, and hard. You can choose which level you want to play at any time during the game. The skills covered at each level are as follows (a small illustrative script after the list shows how the number ranges grow):

    • Easy: addition and subtraction up to 18, telling time by hours and half-hours, counting money up to $1.00, identifying shapes and colors.
    • Medium: addition and subtraction up to 99, telling time by quarter-hours, counting money up to $5.00, identifying fractions (halves, thirds, fourths), measuring length with inches.
    • Hard: addition and subtraction up to 999, telling time by minutes, counting money up to $10.00, identifying fractions (sixths, eighths), measuring length with feet.
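    To make the difficulty ranges above concrete, here is a minimal Python sketch (not part of the game) that generates addition and subtraction drills using the same number limits; the limits come from the list above, while the script itself is purely illustrative.

    ```python
    import random

    # Addition/subtraction limits matching the list above:
    # easy up to 18, medium up to 99, hard up to 999.
    RANGES = {"easy": 18, "medium": 99, "hard": 999}

    def make_problem(level: str) -> str:
        """Return one addition or subtraction drill for the given difficulty level."""
        limit = RANGES[level]
        a, b = random.randint(0, limit), random.randint(0, limit)
        if random.choice("+-") == "+":
            return f"{a} + {b} = ?"
        # Keep subtraction results non-negative, as the game does.
        a, b = max(a, b), min(a, b)
        return f"{a} - {b} = ?"

    if __name__ == "__main__":
        for level in RANGES:
            print(f"{level}: {make_problem(level)}")
    ```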

    The features and benefits of the game


    Treasure Mathstorm has many features and benefits that make it a great educational game for kids. Some of them are:

    • It adapts to your child's skill level and progress. The game automatically adjusts the difficulty of the math problems based on your child's performance and keeps track of scores and achievements.
    • It provides feedback and encouragement. The game gives immediate feedback on whether a math problem was answered correctly, offers hints and explanations when needed, and praises your child's efforts and achievements.
    • It offers variety and fun. Different types of math problems and activities keep your child engaged and motivated, while colorful graphics, animations, sound effects, and music make the game more enjoyable.
    • It fosters creativity and exploration. Your child can explore the mountain, discover treasures and surprises, and customize their character and backpack with different items and accessories.

    How to download Treasure Mathstorm?


    If you want to download Treasure Mathstorm, you need to make sure that your computer meets the game's system requirements and compatibility needs. You also need to find a reliable source and link to download the game, and finally follow a few steps and tips to install and run it on your computer.


    The system requirements and compatibility of the game


    Treasure Mathstorm is an old game that was originally designed for DOS and Windows 3.x operating systems. Therefore, it may not run smoothly on modern computers with newer operating systems such as Windows 10, Mac OS, or Linux. However, there are ways to make the game compatible with your computer by using emulators or virtual machines.


    An emulator is software that mimics the functions of an old operating system or device on your computer. A virtual machine is software that creates a separate environment on your computer in which an old operating system can run. Both approaches let you run old games and programs without affecting your main system.


    Some of the popular emulators and virtual machines that you can use to run Treasure Mathstorm are:

    • DOSBox: an emulator that runs DOS games and programs on Windows, macOS, Linux, and other platforms; this is the usual choice for Treasure Mathstorm.
    • ScummVM: an emulator for classic adventure and educational games built on the engines it supports; note that Treasure Mathstorm is a plain DOS title and is normally run through DOSBox rather than ScummVM.
    • VirtualBox: a virtual machine that runs various operating systems such as Windows 3.x, Windows 95, and Windows 98.
    • VMware: another virtual machine that runs various operating systems such as Windows 3.x, Windows 95, and Windows 98.

    You can download these emulators and virtual machines from their official websites or from other trusted sources. You can also find tutorials and guides on how to use them online.
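    As a rough illustration of how an emulator like DOSBox is typically driven, the Python sketch below mounts a game folder as drive C and launches the game executable. The folder path and the executable name (`TM.EXE`) are assumptions made for the example; check the actual file name inside your downloaded copy, and make sure the `dosbox` command is installed and on your PATH.

    ```python
    import subprocess
    from pathlib import Path

    # Assumed locations; adjust them to wherever you unpacked the game.
    GAME_DIR = Path.home() / "oldgames" / "treasure_mathstorm"
    GAME_EXE = "TM.EXE"  # hypothetical executable name; check your copy

    def run_in_dosbox(game_dir: Path, exe: str) -> None:
        """Mount game_dir as drive C: in DOSBox and start the given executable."""
        cmd = [
            "dosbox",
            "-c", f'mount c "{game_dir}"',  # map the folder to drive C:
            "-c", "c:",                     # switch to the mounted drive
            "-c", exe,                      # start the game
            "-c", "exit",                   # close DOSBox once the game quits
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        run_in_dosbox(GAME_DIR, GAME_EXE)
    ```

    The same mount and run commands can also be typed by hand at the DOSBox prompt if you prefer not to script them.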


    The sources and links to download the game


    Once you have chosen an emulator or a virtual machine to run Treasure Mathstorm, you need to find a source and a link to download the game. There are many websites that offer old games for free or for a small fee. However, not all of them are safe and legal. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Some of them may also violate the copyright laws or the terms of service of the original developers or publishers of the game.


    Therefore, be careful and selective when choosing a source for Treasure Mathstorm: check the reputation and the reviews of a website before downloading anything from it, scan the downloaded files with an antivirus program before opening them, and respect the rights and wishes of the game's original developers and publishers.


    Some of the reputable and legal sources and links to download Treasure Mathstorm are:

    • The Learning Company: the original developer and publisher of Treasure Mathstorm. They offer a digital download of the game for $9.99 on their website.
    • GOG.com: a digital distribution platform that sells old games that are DRM-free (no copy protection) and compatible with modern systems. They offer Treasure Mathstorm for $5.99 on their website.
    • Abandonia: a website that hosts old games that are abandoned by their developers or publishers. They offer Treasure Mathstorm for free on their website.

    The steps and tips to install and run the game


    After you have downloaded Treasure Mathstorm from a source and a link of your choice, you need to follow these steps and tips to install and run the game on your computer:

    1. Extract the downloaded files from the ZIP or RAR archive using a program such as WinZip or WinRAR.
    2. Create a folder on your computer where you want to store the game files.
    3. Copy or move the extracted files to the folder you created in step 2 (a small script after this list automates steps 1-3).
    4. Open the emulator or the virtual machine of your choice and configure it according to the instructions and the system requirements of the game.
    5. Mount or load the game folder or the game file (usually a .exe or a .bat file) on the emulator or the virtual machine and start the game.
    6. Enjoy playing Treasure Mathstorm!
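    If you would rather script steps 1-3, the short Python sketch below unpacks a downloaded ZIP archive into a dedicated game folder using only the standard library (RAR archives need an extra tool such as the `rarfile` package). The file and folder names are placeholders, not the actual names of any particular download.

    ```python
    import zipfile
    from pathlib import Path

    # Placeholder paths; point these at your actual download and target folder.
    ARCHIVE = Path.home() / "Downloads" / "treasure_mathstorm.zip"
    TARGET = Path.home() / "oldgames" / "treasure_mathstorm"

    def extract_game(archive: Path, target: Path) -> None:
        """Create the target folder and unpack the game archive into it (steps 1-3)."""
        target.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
            print(f"Extracted {len(zf.namelist())} files to {target}")

    if __name__ == "__main__":
        extract_game(ARCHIVE, TARGET)
    ```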

    Some tips and tricks to help you install and run the game are:

    • If you encounter any errors or problems while installing or running the game, try changing the settings of the emulator or the virtual machine, such as the memory, the sound, or the graphics.
    • If you want to save your progress and your scores in the game, you need to create a save file on the emulator or the virtual machine. You can also back up your save file on your computer or on a cloud service, as sketched below.
    • If you want to play Treasure Mathstorm with other players online, you can use a program such as DOSBox Daum or DOSBox-X that supports multiplayer mode. You can also use a program such as Hamachi or Tunngle that creates a virtual network for online gaming.
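    For the save-file tip above, backing up can be as simple as copying the emulator's save data to a time-stamped folder. The Python sketch below assumes a hypothetical save-file location and name; the real path depends on which emulator you use and where it keeps its data.

    ```python
    import shutil
    import time
    from pathlib import Path

    # Hypothetical locations; adjust to wherever your emulator stores its save data.
    SAVE_FILE = Path.home() / "oldgames" / "treasure_mathstorm" / "SAVEGAME.DAT"
    BACKUP_DIR = Path.home() / "oldgames" / "backups"

    def backup_save(save_file: Path, backup_dir: Path) -> Path:
        """Copy the save file into a time-stamped backup and return the new path."""
        backup_dir.mkdir(parents=True, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        destination = backup_dir / f"{save_file.stem}-{stamp}{save_file.suffix}"
        shutil.copy2(save_file, destination)
        return destination

    if __name__ == "__main__":
        print("Backed up to", backup_save(SAVE_FILE, BACKUP_DIR))
    ```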

    How to play Treasure Mathstorm?


    Now that you have installed and run Treasure Mathstorm on your computer, you are ready to play it. In this section, we will explain how to play Treasure Mathstorm, including the main screen and the menu options of the game, the levels and the challenges of the game, and the rewards and the achievements of the game.


    The main screen and the menu options of the game


    The main screen of Treasure Mathstorm is where you can see your character, your backpack, your score, your level, and your time. You can also see the mountain and the castle in the background. You can use your mouse or your keyboard to move your character around and interact with different objects and characters on the screen.


    The menu options of Treasure Mathstorm are located at the top of the screen. You can access them by clicking on them with your mouse or by pressing a key on your keyboard. The menu options are:

    • File: where you can start a new game, load a saved game, save your current game, quit the game, or change your player name.
    • Options: where you can change the difficulty level of the math problems, turn the music and sound effects on or off, adjust the volume, or view the credits.
    • Help: where you can get hints on how to play Treasure Mathstorm or find out how to contact The Learning Company.

    The levels and the challenges of the game


    Treasure Mathstorm has three levels of difficulty: easy, medium, and hard. You can choose which level you want to play at any time during the game, and the level you choose affects the type and the number of math problems you have to solve. Treasure Mathstorm also has 10 levels of challenges that you have to complete in order to restore the mountain. Each level has a different theme and a different number of treasures to find:

    • Level 1: Snowy Slopes (10 treasures)
    • Level 2: Icy Caves (15 treasures)
    • Level 3: Frozen Forest (20 treasures)
    • Level 4: Snowman Village (25 treasures)
    • Level 5: Ice Castle (30 treasures)
    • Level 6: Crystal Caverns (35 treasures)
    • Level 7: Blizzard Bluffs (40 treasures)
    • Level 8: Polar Peak (45 treasures)
    • Level 9: Cloud City (50 treasures)
    • Level 10: Treasure Mountain (55 treasures)

    To complete a level, you have to find all the treasures on that level and return them to the castle at the top of the mountain. To find a treasure, you have to solve the math problem attached to it; to return a treasure, you carry it to the castle and drop it in the correct bin.

    The math problems in Treasure Mathstorm are varied and fun. They include:

    • Addition and subtraction problems that involve snowballs, snowflakes, icicles, etc.
    • Multiplication and division problems that involve snowmen, penguins, polar bears, etc.
    • Fraction problems that involve pies, pizzas, cakes, etc.
    • Decimal problems that involve thermometers, clocks, scales, etc.
    • Time problems that involve clocks, watches, calendars, etc.
    • Money problems that involve coins, bills, wallets, etc.
    • Measurement problems that involve rulers, tapes, scales, etc.
    • Geometry problems that involve shapes, angles, lines, etc.
    • Logic problems that involve patterns, sequences, puzzles, etc.
    • Problem-solving exercises that involve word problems, equations, graphs, etc.

    The math problems are not only educational but also entertaining, with humorous scenarios and characters that make the game more enjoyable. For example:

    • You have to help a snowman find his missing nose by solving a fraction problem.
    • You have to help a penguin buy a hat by solving a money problem.
    • You have to help a polar bear catch a fish by solving a geometry problem.
    • You have to help a cloud fairy make a rainbow by solving a logic problem.

    The rewards and the achievements of the game


    Treasure Mathstorm has many rewards and achievements that motivate you to play the game and improve your math skills. Some of them are:

    • You can earn stars for each math problem you solve correctly. The more stars you earn, the higher your score will be.
    • You can earn medals for each level you complete. The medals are bronze, silver, gold, and platinum; the higher the medal, the better your performance on that level.
    • You can earn trophies for each level of difficulty you complete. The trophies are easy, medium, and hard; the higher the trophy, the more challenging the math problems you solved.
    • You can earn badges for special achievements in the game, such as explorer, adventurer, mastermind, and super solver. The more badges you earn, the more skills and concepts you have mastered.
    • You can customize your character and backpack with items and accessories that you find or buy in the game, and you can change your character's name and appearance.

    Conclusion


    Treasure Mathstorm is an educational game that teaches kids math skills and concepts in a fun and interactive way. It is suitable for kids in grades 1 to 3 or for anyone with a basic knowledge of arithmetic, and it covers topics such as addition, subtraction, multiplication, division, fractions, decimals, time, money, measurement, geometry, logic, and problem-solving.

    It is also an engaging game that combines math, adventure, and humor: colorful graphics, animations, sound effects, and music make it enjoyable, while obstacles, puzzles, and surprises keep it exciting, and the varied math problems and activities keep it interesting.

    Because Treasure Mathstorm was originally designed for DOS and Windows 3.x, it may not run smoothly on modern computers with newer operating systems such as Windows 10, macOS, or Linux. However, you can make it compatible by using an emulator or a virtual machine.

    You can download the game from various sources online, but be careful and selective: check the reputation and the reviews of a website before downloading anything from it, scan the downloaded files with an antivirus program before opening them, and respect the rights and wishes of the game's original developers and publishers.

    Once the game is installed, follow the steps and tips above to run it, and enjoy playing while learning math at the same time. We hope this article has helped you learn how to download Treasure Mathstorm: a fun and educational game for kids. If you have any questions or comments about Treasure Mathstorm or this article, please feel free to contact us or leave a comment below. Thank you for reading!

    FAQs


    Here are some frequently asked questions about Treasure Mathstorm:

    1. Q: How long does it take to complete Treasure Mathstorm?
       A: It depends on your skill level and your speed, but it usually takes about 10 to 15 hours to complete all 10 levels of Treasure Mathstorm.
    2. Q: How can I get more stars, medals, trophies, and badges in Treasure Mathstorm?
       A: You get more stars by solving more math problems correctly, more medals by completing levels with higher scores, more trophies by completing higher levels of difficulty, and more badges by achieving special goals in the game.
    3. Q: How can I save my progress and my scores in Treasure Mathstorm?
       A: Create a save file on the emulator or the virtual machine that you are using. You can also back up your save file on your computer or on a cloud service.
    4. Q: How can I play Treasure Mathstorm with other players online?
       A: Use a program such as DOSBox Daum or DOSBox-X that supports multiplayer mode, or a program such as Hamachi or Tunngle that creates a virtual network for online gaming.
    5. Q: Where can I find more information and resources about Treasure Mathstorm?
       A: You can find more information and resources on these websites:
       • The Learning Company: the original developer and publisher of Treasure Mathstorm. They offer a digital download of the game for $9.99 on their website.
       • GOG.com: a digital distribution platform that sells old games that are DRM-free (no copy protection) and compatible with modern systems. They offer Treasure Mathstorm for $5.99 on their website.
       • Abandonia: a website that hosts old games that are abandoned by their developers or publishers. They offer Treasure Mathstorm for free on their website.
       • MobyGames: a website that provides information and reviews about old games. They have a page dedicated to Treasure Mathstorm.
       • Wikipedia: a free online encyclopedia that provides information about various topics. They have an article about Treasure Mathstorm.

    \ No newline at end of file diff --git a/spaces/30SecondsToMoon/30SecondsToMoon/README.md b/spaces/30SecondsToMoon/30SecondsToMoon/README.md deleted file mode 100644 index 32bfc53b203454ed16de26d490b66119e5c8043e..0000000000000000000000000000000000000000 --- a/spaces/30SecondsToMoon/30SecondsToMoon/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 30SecondsToMoon -emoji: 📉 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/visualization_utils.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/visualization_utils.py deleted file mode 100644 index bab02be31a6ca44486f98d57de4ab4bfa89394b7..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/visualization_utils.py +++ /dev/null @@ -1,31 +0,0 @@ -from PIL import ImageDraw - - -def show_bboxes(img, bounding_boxes, facial_landmarks=[]): - """Draw bounding boxes and facial landmarks. - - Arguments: - img: an instance of PIL.Image. - bounding_boxes: a float numpy array of shape [n, 5]. - facial_landmarks: a float numpy array of shape [n, 10]. - - Returns: - an instance of PIL.Image. - """ - - img_copy = img.copy() - draw = ImageDraw.Draw(img_copy) - - for b in bounding_boxes: - draw.rectangle([ - (b[0], b[1]), (b[2], b[3]) - ], outline='white') - - for p in facial_landmarks: - for i in range(5): - draw.ellipse([ - (p[i] - 1.0, p[i + 5] - 1.0), - (p[i] + 1.0, p[i + 5] + 1.0) - ], outline='blue') - - return img_copy diff --git a/spaces/AIWaves/Software_Company/app.py b/spaces/AIWaves/Software_Company/app.py deleted file mode 100644 index f61d5b5befa277298e7d06e657e5cdef0e14066f..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Software_Company/app.py +++ /dev/null @@ -1,254 +0,0 @@ -import sys - -import os -from gradio_base import WebUI, UIHelper, PORT, HOST, Client -from gradio_config import GradioConfig as gc -from typing import List, Tuple, Any -import gradio as gr -import time - -class CodeUI(WebUI): - - def render_and_register_ui(self): - self.agent_name:list = [self.cache["agents_name"]] if isinstance(self.cache["agents_name"], str) else self.cache['agents_name'] - gc.add_agent(self.agent_name) - - def __init__( - self, - client_cmd: list, - socket_host: str = HOST, - socket_port: int = PORT, - bufsize: int = 1024, - ui_name: str = "CodeUI" - ): - super(CodeUI, self).__init__(client_cmd, socket_host, socket_port, bufsize, ui_name) - self.first_recieve_from_client() - self.data_history = list() - self.caller = 0 - - def construct_ui(self): - with gr.Blocks(css=gc.CSS) as demo: - gr.Markdown("""# Agents""") - gr.Markdown("""**Agents** is an open-source library/framework for building autonomous language agents.if you want to know more about **Agents**, please check our📄 Paper and📦 Github. Here is a demo of **Agents**.""") - gr.Markdown("""If an error occurs or the queue is too long, please create your own demo by clicking Duplicate This Space in the upper right corner. Please be patient with building, thank you! It takes about 3-4 minutes.""") - with gr.Row(): - with gr.Column(): - self.text_api = gr.Textbox( - value = self.cache["api_key"], - placeholder="openai key", - label="Please input valid openai key for gpt-3.5-turbo-16k." 
- ) - self.radio_mode = gr.Radio( - [Client.SINGLE_MODE], - value=Client.SINGLE_MODE, - interactive=True, - label = Client.MODE_LABEL, - info = Client.MODE_INFO - ) - self.chatbot = gr.Chatbot( - elem_id="chatbot1" - ) - self.btn_next = gr.Button( - value="Next Agent", - visible=False, elem_id="btn" - ) - with gr.Row(): - self.text_requirement = gr.Textbox( - value=self.cache['requirement'], - placeholder="Please enter your content", - scale=9, - ) - self.btn_start = gr.Button( - value="Start!", - scale=1 - ) - self.btn_reset = gr.Button( - value="Restart", - visible=False - ) - - with gr.Column(): - self.file = gr.File(visible=False) - self.chat_code_show = gr.Chatbot( - elem_id="chatbot1", - visible=False - ) - - self.btn_start.click( - fn=self.btn_send_when_click, - inputs=[self.chatbot, self.text_requirement, self.radio_mode, self.text_api], - outputs=[self.chatbot, self.btn_start, self.text_requirement, self.btn_reset] - ).then( - fn=self.btn_send_after_click, - inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement], - outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next] - ) - self.text_requirement.submit( - fn=self.btn_send_when_click, - inputs=[self.chatbot, self.text_requirement, self.text_api], - outputs=[self.chatbot, self.btn_start, self.text_requirement, self.btn_reset] - ).then( - fn=self.btn_send_after_click, - inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement], - outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next] - ) - self.btn_reset.click( - fn=self.btn_reset_when_click, - inputs=[], - outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next] - ).then( - fn=self.btn_reset_after_click, - inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement], - outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next] - ) - self.file.select( - fn=self.file_when_select, - inputs=[self.file], - outputs=[self.chat_code_show] - ) - self.btn_next.click( - fn = self.btn_next_when_click, - inputs=[], - outputs=[self.btn_next] - ).then( - fn=self.btn_send_after_click, - inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement], - outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next] - ) - - self.demo = demo - - - def handle_message(self, history:list, state, agent_name, token, node_name): - if state % 10 == 0: - self.data_history.append({agent_name: token}) - elif state % 10 == 1: - # Same state. Need to add new bubble in same bubble. - if len(self.data_history) == 0: - self.data_history.append({agent_name:""}) - self.data_history[-1][agent_name] += token - elif state % 10 == 2: - # New state. Need to add new bubble. - history.append([None, ""]) - self.data_history.clear() - self.data_history.append({agent_name: token}) - else: - assert False, "Invalid state." 
- render_data = self.render_bubble(history, self.data_history, node_name, render_node_name=True) - return render_data - - def btn_send_when_click(self, chatbot, text_requirement, mode, api): - """ - inputs=[self.chatbot, self.text_requirement, radio, text_api], - outputs=[self.chatbot, self.btn_start, self.text_requirement, self.btn_reset] - """ - chatbot = [[UIHelper.wrap_css(content=text_requirement, name="User"), None]] - yield chatbot,\ - gr.Button.update(visible=True, interactive=False, value="Running"),\ - gr.Textbox.update(visible=True, interactive=False, value=""),\ - gr.Button.update(visible=False, interactive=False) - self.send_start_cmd({'requirement': text_requirement, "mode": mode, "api_key": api}) - return - - def btn_send_after_click( - self, - file, - history, - show_code, - btn_send, - btn_reset, - text_requirement - ): - """ - outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next] - """ - if self.caller == 0: - self.data_history = list() - self.caller = 0 - receive_server = self.receive_server - while True: - data_list: List = receive_server.send(None) - for item in data_list: - data = eval(item) - assert isinstance(data, list) - state, agent_name, token, node_name = data - assert isinstance(state, int) - assert state in [10, 11, 12, 99, 98] - if state == 99: - # finish - fs = [self.cache['pwd']+'/output_code/'+_ for _ in os.listdir(self.cache['pwd']+'/output_code')] - yield gr.File.update(value=fs, visible=True, interactive=True),\ - history, \ - gr.Chatbot.update(visible=True),\ - gr.Button.update(visible=True, interactive=True, value="Start"),\ - gr.Button.update(visible=True, interactive=True),\ - gr.Textbox.update(visible=True, interactive=True, placeholder="Please input your requirement", value=""),\ - gr.Button.update(visible=False) - return - elif state == 98: - yield gr.File.update(visible=False),\ - history, \ - gr.Chatbot.update(visible=False),\ - gr.Button.update(visible=True, interactive=False),\ - gr.Button.update(visible=True, interactive=True),\ - gr.Textbox.update(visible=True, interactive=False),\ - gr.Button.update(visible=True, value=f"Next Agent: 🤖{agent_name} | Next Node: ⭕{node_name}") - return - history = self.handle_message(history, state, agent_name, token, node_name) - yield gr.File.update(visible=False),\ - history, \ - gr.Chatbot.update(visible=False),\ - gr.Button.update(visible=True, interactive=False),\ - gr.Button.update(visible=False, interactive=False),\ - gr.Textbox.update(visible=True, interactive=False),\ - gr.Button.update(visible=False) - - def btn_reset_when_click(self): - """ - inputs = [] - outputs = [self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next] - """ - return gr.File.update(visible=False),\ - None, None, gr.Button.update(value="Restarting...", interactive=False),\ - gr.Button.update(value="Restarting...", interactive=False),\ - gr.Textbox.update(value="Restarting", interactive=False),\ - gr.Button.update(visible=False) - - def btn_reset_after_click( - self, - file, - chatbot, - show_code, - btn_send, - btn_reset, - text_requirement - ): - self.reset() - self.first_recieve_from_client(reset_mode=True) - return gr.File.update(value=None, visible=False),\ - gr.Chatbot.update(value=None, visible=True),\ - gr.Chatbot.update(value=None, visible=False),\ - gr.Button.update(value="Start", visible=True, interactive=True),\ - gr.Button.update(value="Restart", interactive=False, visible=False),\ - 
gr.Textbox.update(value=self.cache['requirement'], interactive=True, visible=True),\ - gr.Button.update(visible=False) - - def file_when_select(self, file): - CODE_PREFIX = "```python\n{}\n```" - with open(file.name, "r", encoding='utf-8') as f: - contents = f.readlines() - codes = "".join(contents) - return [[CODE_PREFIX.format(codes),None]] - - def btn_next_when_click(self): - self.caller = 1 # it will remain the value in self.data_history - self.send_message("nothing") - time.sleep(0.5) - yield gr.Button.update(visible=False) - return - - -if __name__ == '__main__': - ui = CodeUI(client_cmd=["python","gradio_backend.py"]) - ui.construct_ui() - ui.run() \ No newline at end of file diff --git a/spaces/AIZero2Hero4Health/4-ImageSimilaritySearch-SL/app.py b/spaces/AIZero2Hero4Health/4-ImageSimilaritySearch-SL/app.py deleted file mode 100644 index 9b287e491115a6952e8577523bef64c2cb57686b..0000000000000000000000000000000000000000 --- a/spaces/AIZero2Hero4Health/4-ImageSimilaritySearch-SL/app.py +++ /dev/null @@ -1,186 +0,0 @@ -from html import escape -import re -import streamlit as st -import pandas as pd, numpy as np -from transformers import CLIPProcessor, CLIPModel -from st_clickable_images import clickable_images - -@st.cache( - show_spinner=False, - hash_funcs={ - CLIPModel: lambda _: None, - CLIPProcessor: lambda _: None, - dict: lambda _: None, - }, -) -def load(): - model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14") - processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") - df = {0: pd.read_csv("data.csv"), 1: pd.read_csv("data2.csv")} - embeddings = {0: np.load("embeddings.npy"), 1: np.load("embeddings2.npy")} - for k in [0, 1]: - embeddings[k] = embeddings[k] / np.linalg.norm( - embeddings[k], axis=1, keepdims=True - ) - return model, processor, df, embeddings - - -model, processor, df, embeddings = load() -source = {0: "\nSource: Unsplash", 1: "\nSource: The Movie Database (TMDB)"} - - -def compute_text_embeddings(list_of_strings): - inputs = processor(text=list_of_strings, return_tensors="pt", padding=True) - result = model.get_text_features(**inputs).detach().numpy() - return result / np.linalg.norm(result, axis=1, keepdims=True) - - -def image_search(query, corpus, n_results=24): - positive_embeddings = None - - def concatenate_embeddings(e1, e2): - if e1 is None: - return e2 - else: - return np.concatenate((e1, e2), axis=0) - - splitted_query = query.split("EXCLUDING ") - dot_product = 0 - k = 0 if corpus == "Unsplash" else 1 - if len(splitted_query[0]) > 0: - positive_queries = splitted_query[0].split(";") - for positive_query in positive_queries: - match = re.match(r"\[(Movies|Unsplash):(\d{1,5})\](.*)", positive_query) - if match: - corpus2, idx, remainder = match.groups() - idx, remainder = int(idx), remainder.strip() - k2 = 0 if corpus2 == "Unsplash" else 1 - positive_embeddings = concatenate_embeddings( - positive_embeddings, embeddings[k2][idx : idx + 1, :] - ) - if len(remainder) > 0: - positive_embeddings = concatenate_embeddings( - positive_embeddings, compute_text_embeddings([remainder]) - ) - else: - positive_embeddings = concatenate_embeddings( - positive_embeddings, compute_text_embeddings([positive_query]) - ) - dot_product = embeddings[k] @ positive_embeddings.T - dot_product = dot_product - np.median(dot_product, axis=0) - dot_product = dot_product / np.max(dot_product, axis=0, keepdims=True) - dot_product = np.min(dot_product, axis=1) - - if len(splitted_query) > 1: - negative_queries = (" 
".join(splitted_query[1:])).split(";") - negative_embeddings = compute_text_embeddings(negative_queries) - dot_product2 = embeddings[k] @ negative_embeddings.T - dot_product2 = dot_product2 - np.median(dot_product2, axis=0) - dot_product2 = dot_product2 / np.max(dot_product2, axis=0, keepdims=True) - dot_product -= np.max(np.maximum(dot_product2, 0), axis=1) - - results = np.argsort(dot_product)[-1 : -n_results - 1 : -1] - return [ - ( - df[k].iloc[i]["path"], - df[k].iloc[i]["tooltip"] + source[k], - i, - ) - for i in results - ] - - -description = """ -# Semantic image search -**Enter your query and hit enter** -""" - -howto = """ -- Click image to find similar images -- Use "**;**" to combine multiple queries) -- Use "**EXCLUDING**", to exclude a query -""" - - -def main(): - st.markdown( - """ - """, - unsafe_allow_html=True, - ) - st.sidebar.markdown(description) - with st.sidebar.expander("Advanced use"): - st.markdown(howto) - - - st.sidebar.markdown(f"Try these test prompts: orange, blue, beach, lighthouse, mountain, sunset, parade") - st.sidebar.markdown(f"Unsplash has categories that match: backgrounds, photos, nature, iphone, etc") - st.sidebar.markdown(f"Unsplash images contain animals, apps, events, feelings, food, travel, nature, people, religion, sports, things, stock") - st.sidebar.markdown(f"Unsplash things include flag, tree, clock, money, tattoo, arrow, book, car, fireworks, ghost, health, kiss, dance, balloon, crown, eye, house, music, airplane, lighthouse, typewriter, toys") - st.sidebar.markdown(f"unsplash feelings include funny, heart, love, cool, congratulations, love, scary, cute, friendship, inspirational, hug, sad, cursed, beautiful, crazy, respect, transformation, peaceful, happy") - st.sidebar.markdown(f"unsplash people contain baby, life, women, family, girls, pregnancy, society, old people, musician, attractive, bohemian") - st.sidebar.markdown(f"imagenet queries include: photo of, photo of many, sculpture of, rendering of, graffiti of, tattoo of, embroidered, drawing of, plastic, black and white, painting, video game, doodle, origami, sketch, etc") - - - _, c, _ = st.columns((1, 3, 1)) - if "query" in st.session_state: - query = c.text_input("", value=st.session_state["query"]) - else: - - query = c.text_input("", value="lighthouse") - corpus = st.radio("", ["Unsplash"]) - #corpus = st.radio("", ["Unsplash", "Movies"]) - if len(query) > 0: - results = image_search(query, corpus) - clicked = clickable_images( - [result[0] for result in results], - titles=[result[1] for result in results], - div_style={ - "display": "flex", - "justify-content": "center", - "flex-wrap": "wrap", - }, - img_style={"margin": "2px", "height": "200px"}, - ) - if clicked >= 0: - change_query = False - if "last_clicked" not in st.session_state: - change_query = True - else: - if clicked != st.session_state["last_clicked"]: - change_query = True - if change_query: - st.session_state["query"] = f"[{corpus}:{results[clicked][2]}]" - st.experimental_rerun() - - -if __name__ == "__main__": - main() diff --git a/spaces/ASJMO/freegpt/client/css/settings.css b/spaces/ASJMO/freegpt/client/css/settings.css deleted file mode 100644 index 0a409f27d6d185c90ae76d95f64b457e140ae8d9..0000000000000000000000000000000000000000 --- a/spaces/ASJMO/freegpt/client/css/settings.css +++ /dev/null @@ -1,44 +0,0 @@ -.settings-container { - color: var(--colour-2); - margin: 24px 0px 8px 0px; - justify-content: center; -} - -.settings-container span { - font-size: 0.875rem; - margin: 0; -} - -.settings-container 
label { - width: 24px; - height: 16px; -} - -.settings-container .field { - justify-content: space-between; -} - -.settings-container .checkbox input + label, -.settings-container .checkbox input:checked + label:after { - background: var(--colour-1); -} - -.settings-container .checkbox input + label:after, -.settings-container .checkbox input:checked + label { - background: var(--colour-3); -} - -.settings-container .checkbox label:after { - left: 2px; - width: 10px; - height: 10px; -} - -.settings-container .checkbox input:checked + label:after { - left: calc(100% - 2px - 10px); -} - -.settings-container .dropdown { - padding: 4px 8px; - font-size: 0.75rem; -} diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/message.css b/spaces/AchyuthGamer/OpenGPT/client/css/message.css deleted file mode 100644 index 64e04147ee4d1e76dda4f39c4f756c9da63e3874..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/client/css/message.css +++ /dev/null @@ -1,65 +0,0 @@ -.message { - width: 100%; - overflow-wrap: break-word; - display: flex; - gap: var(--section-gap); - padding: var(--section-gap); - padding-bottom: 0; -} - -.message:last-child { - animation: 0.6s show_message; -} - -@keyframes show_message { - from { - transform: translateY(10px); - opacity: 0; - } -} - -.message .avatar-container img { - max-width: 48px; - max-height: 48px; - box-shadow: 0.4px 0.5px 0.7px -2px rgba(0, 0, 0, 0.08), 1.1px 1.3px 2px -2px rgba(0, 0, 0, 0.041), - 2.7px 3px 4.8px -2px rgba(0, 0, 0, 0.029), 9px 10px 16px -2px rgba(0, 0, 0, 0.022); -} - -.message .content { - display: flex; - flex-direction: column; - width: 90%; - gap: 18px; -} - -.message .content p, -.message .content li, -.message .content code { - font-size: 1rem; - line-height: 1.3; -} - -@media screen and (max-height: 720px) { - .message { - padding: 12px; - gap: 0; - } - - .message .content { - margin-left: 8px; - width: 80%; - } - - .message .avatar-container img { - max-width: 32px; - max-height: 32px; - } - - .message .content, - .message .content p, - .message .content li, - .message .content code { - font-size: 0.875rem; - line-height: 1.3; - } -} diff --git a/spaces/Adapter/T2I-Adapter/ldm/data/utils.py b/spaces/Adapter/T2I-Adapter/ldm/data/utils.py deleted file mode 100644 index 7ece8c92b4aca12d6c65908900460cc4beaf522e..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/data/utils.py +++ /dev/null @@ -1,40 +0,0 @@ -# -*- coding: utf-8 -*- - -import cv2 -import numpy as np -from torchvision.transforms import transforms -from torchvision.transforms.functional import to_tensor -from transformers import CLIPProcessor - -from basicsr.utils import img2tensor - - -class AddCannyFreezeThreshold(object): - - def __init__(self, low_threshold=100, high_threshold=200): - self.low_threshold = low_threshold - self.high_threshold = high_threshold - - def __call__(self, sample): - # sample['jpg'] is PIL image - x = sample['jpg'] - img = cv2.cvtColor(np.array(x), cv2.COLOR_RGB2BGR) - canny = cv2.Canny(img, self.low_threshold, self.high_threshold)[..., None] - sample['canny'] = img2tensor(canny, bgr2rgb=True, float32=True) / 255. 
- sample['jpg'] = to_tensor(x) - return sample - - -class AddStyle(object): - - def __init__(self, version): - self.processor = CLIPProcessor.from_pretrained(version) - self.pil_to_tensor = transforms.ToTensor() - - def __call__(self, sample): - # sample['jpg'] is PIL image - x = sample['jpg'] - style = self.processor(images=x, return_tensors="pt")['pixel_values'][0] - sample['style'] = style - sample['jpg'] = to_tensor(x) - return sample diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/spiralcurve-plugin.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/spiralcurve-plugin.d.ts deleted file mode 100644 index 1f1e5ce088c41839d7c859168f5ee7628dc0f161..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/spiralcurve-plugin.d.ts +++ /dev/null @@ -1,15 +0,0 @@ -import SpiralCurve from './spiralcurve'; - -export default class SpiralCurvePlugin extends Phaser.Plugins.BasePlugin { - add( - config?: SpiralCurve.IConfig - ): SpiralCurve; - - add( - x?: number, y?: number, - startRadius?: number, endRadius?: number, - startAngle?: number, endAngle?: number, - rotation?: number - ): SpiralCurve - -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.js deleted file mode 100644 index c991ae0f45c43e96fec5f19c36cef550be8d0d1a..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.js +++ /dev/null @@ -1,2 +0,0 @@ -import Checkbox from '../../../plugins/checkbox.js'; -export default Checkbox; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/SetTransitCallbackMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/SetTransitCallbackMethods.js deleted file mode 100644 index 570583b7c218737745351150e75375a9bb003854..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/SetTransitCallbackMethods.js +++ /dev/null @@ -1,32 +0,0 @@ -import GetEaseConfig from './GetEaseConfig.js'; - -var PopUp = function (menu, duration) { - menu.popUp(GetEaseConfig(menu.root.easeIn, menu)) -} - -var ScaleDown = function (menu, duration) { - // Don't destroy here - menu.scaleDown(GetEaseConfig(menu.root.easeOut, menu)); -} - -export default { - setTransitInCallback(callback) { - if (callback === undefined) { - callback = PopUp; - } - - this.transitInCallback = callback; - // callback = function(gameObject, duration) {} - return this; - }, - - setTransitOutCallback(callback) { - if (callback === undefined) { - callback = ScaleDown; - } - - this.transitOutCallback = callback; - // callback = function(gameObject, duration) {} - return this; - } -} \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/inception.py b/spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/inception.py deleted file mode 100644 index e9bd0863b457aaa40c770eaa4acbb142b18fc18b..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/inception.py +++ /dev/null @@ -1,323 +0,0 @@ -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import models - -try: - from 
torchvision.models.utils import load_state_dict_from_url -except ImportError: - from torch.utils.model_zoo import load_url as load_state_dict_from_url - -# Inception weights ported to Pytorch from -# http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -FID_WEIGHTS_URL = 'https://github.com/mseitzer/pytorch-fid/releases/download/fid_weights/pt_inception-2015-12-05-6726825d.pth' - - -LOGGER = logging.getLogger(__name__) - - -class InceptionV3(nn.Module): - """Pretrained InceptionV3 network returning feature maps""" - - # Index of default block of inception to return, - # corresponds to output of final average pooling - DEFAULT_BLOCK_INDEX = 3 - - # Maps feature dimensionality to their output blocks indices - BLOCK_INDEX_BY_DIM = { - 64: 0, # First max pooling features - 192: 1, # Second max pooling featurs - 768: 2, # Pre-aux classifier features - 2048: 3 # Final average pooling features - } - - def __init__(self, - output_blocks=[DEFAULT_BLOCK_INDEX], - resize_input=True, - normalize_input=True, - requires_grad=False, - use_fid_inception=True): - """Build pretrained InceptionV3 - - Parameters - ---------- - output_blocks : list of int - Indices of blocks to return features of. Possible values are: - - 0: corresponds to output of first max pooling - - 1: corresponds to output of second max pooling - - 2: corresponds to output which is fed to aux classifier - - 3: corresponds to output of final average pooling - resize_input : bool - If true, bilinearly resizes input to width and height 299 before - feeding input to model. As the network without fully connected - layers is fully convolutional, it should be able to handle inputs - of arbitrary size, so resizing might not be strictly needed - normalize_input : bool - If true, scales the input from range (0, 1) to the range the - pretrained Inception network expects, namely (-1, 1) - requires_grad : bool - If true, parameters of the model require gradients. Possibly useful - for finetuning the network - use_fid_inception : bool - If true, uses the pretrained Inception model used in Tensorflow's - FID implementation. If false, uses the pretrained Inception model - available in torchvision. The FID Inception model has different - weights and a slightly different structure from torchvision's - Inception model. If you want to compute FID scores, you are - strongly advised to set this parameter to true to get comparable - results. 
- """ - super(InceptionV3, self).__init__() - - self.resize_input = resize_input - self.normalize_input = normalize_input - self.output_blocks = sorted(output_blocks) - self.last_needed_block = max(output_blocks) - - assert self.last_needed_block <= 3, \ - 'Last possible output block index is 3' - - self.blocks = nn.ModuleList() - - if use_fid_inception: - inception = fid_inception_v3() - else: - inception = models.inception_v3(pretrained=True) - - # Block 0: input to maxpool1 - block0 = [ - inception.Conv2d_1a_3x3, - inception.Conv2d_2a_3x3, - inception.Conv2d_2b_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block0)) - - # Block 1: maxpool1 to maxpool2 - if self.last_needed_block >= 1: - block1 = [ - inception.Conv2d_3b_1x1, - inception.Conv2d_4a_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block1)) - - # Block 2: maxpool2 to aux classifier - if self.last_needed_block >= 2: - block2 = [ - inception.Mixed_5b, - inception.Mixed_5c, - inception.Mixed_5d, - inception.Mixed_6a, - inception.Mixed_6b, - inception.Mixed_6c, - inception.Mixed_6d, - inception.Mixed_6e, - ] - self.blocks.append(nn.Sequential(*block2)) - - # Block 3: aux classifier to final avgpool - if self.last_needed_block >= 3: - block3 = [ - inception.Mixed_7a, - inception.Mixed_7b, - inception.Mixed_7c, - nn.AdaptiveAvgPool2d(output_size=(1, 1)) - ] - self.blocks.append(nn.Sequential(*block3)) - - for param in self.parameters(): - param.requires_grad = requires_grad - - def forward(self, inp): - """Get Inception feature maps - - Parameters - ---------- - inp : torch.autograd.Variable - Input tensor of shape Bx3xHxW. Values are expected to be in - range (0, 1) - - Returns - ------- - List of torch.autograd.Variable, corresponding to the selected output - block, sorted ascending by index - """ - outp = [] - x = inp - - if self.resize_input: - x = F.interpolate(x, - size=(299, 299), - mode='bilinear', - align_corners=False) - - if self.normalize_input: - x = 2 * x - 1 # Scale from range (0, 1) to range (-1, 1) - - for idx, block in enumerate(self.blocks): - x = block(x) - if idx in self.output_blocks: - outp.append(x) - - if idx == self.last_needed_block: - break - - return outp - - -def fid_inception_v3(): - """Build pretrained Inception model for FID computation - - The Inception model for FID computation uses a different set of weights - and has a slightly different structure than torchvision's Inception. - - This method first constructs torchvision's Inception and then patches the - necessary parts that are different in the FID Inception model. 
- """ - LOGGER.info('fid_inception_v3 called') - inception = models.inception_v3(num_classes=1008, - aux_logits=False, - pretrained=False) - LOGGER.info('models.inception_v3 done') - inception.Mixed_5b = FIDInceptionA(192, pool_features=32) - inception.Mixed_5c = FIDInceptionA(256, pool_features=64) - inception.Mixed_5d = FIDInceptionA(288, pool_features=64) - inception.Mixed_6b = FIDInceptionC(768, channels_7x7=128) - inception.Mixed_6c = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6d = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6e = FIDInceptionC(768, channels_7x7=192) - inception.Mixed_7b = FIDInceptionE_1(1280) - inception.Mixed_7c = FIDInceptionE_2(2048) - - LOGGER.info('fid_inception_v3 patching done') - - state_dict = load_state_dict_from_url(FID_WEIGHTS_URL, progress=True) - LOGGER.info('fid_inception_v3 weights downloaded') - - inception.load_state_dict(state_dict) - LOGGER.info('fid_inception_v3 weights loaded into model') - - return inception - - -class FIDInceptionA(models.inception.InceptionA): - """InceptionA block patched for FID computation""" - def __init__(self, in_channels, pool_features): - super(FIDInceptionA, self).__init__(in_channels, pool_features) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch5x5 = self.branch5x5_1(x) - branch5x5 = self.branch5x5_2(branch5x5) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionC(models.inception.InceptionC): - """InceptionC block patched for FID computation""" - def __init__(self, in_channels, channels_7x7): - super(FIDInceptionC, self).__init__(in_channels, channels_7x7) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch7x7 = self.branch7x7_1(x) - branch7x7 = self.branch7x7_2(branch7x7) - branch7x7 = self.branch7x7_3(branch7x7) - - branch7x7dbl = self.branch7x7dbl_1(x) - branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_1(models.inception.InceptionE): - """First InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_1, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its 
average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_2(models.inception.InceptionE): - """Second InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_2, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: The FID Inception model uses max pooling instead of average - # pooling. This is likely an error in this specific Inception - # implementation, as other Inception models use average pooling here - # (which matches the description in the paper). - branch_pool = F.max_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) diff --git a/spaces/AlexWang/lama/saicinpainting/training/visualizers/directory.py b/spaces/AlexWang/lama/saicinpainting/training/visualizers/directory.py deleted file mode 100644 index bc42e00500c7a5b70b2cef83b03e45b5bb471ff8..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/visualizers/directory.py +++ /dev/null @@ -1,36 +0,0 @@ -import os - -import cv2 -import numpy as np - -from saicinpainting.training.visualizers.base import BaseVisualizer, visualize_mask_and_images_batch -from saicinpainting.utils import check_and_warn_input_range - - -class DirectoryVisualizer(BaseVisualizer): - DEFAULT_KEY_ORDER = 'image predicted_image inpainted'.split(' ') - - def __init__(self, outdir, key_order=DEFAULT_KEY_ORDER, max_items_in_batch=10, - last_without_mask=True, rescale_keys=None): - self.outdir = outdir - os.makedirs(self.outdir, exist_ok=True) - self.key_order = key_order - self.max_items_in_batch = max_items_in_batch - self.last_without_mask = last_without_mask - self.rescale_keys = rescale_keys - - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - check_and_warn_input_range(batch['image'], 0, 1, 'DirectoryVisualizer target image') - vis_img = visualize_mask_and_images_batch(batch, self.key_order, max_items=self.max_items_in_batch, - last_without_mask=self.last_without_mask, - rescale_keys=self.rescale_keys) - - vis_img = np.clip(vis_img * 255, 0, 255).astype('uint8') - - curoutdir = os.path.join(self.outdir, f'epoch{epoch_i:04d}{suffix}') - os.makedirs(curoutdir, exist_ok=True) - rank_suffix = f'_r{rank}' if rank is not None else '' - out_fname = os.path.join(curoutdir, f'batch{batch_i:07d}{rank_suffix}.jpg') - - vis_img = cv2.cvtColor(vis_img, cv2.COLOR_RGB2BGR) - cv2.imwrite(out_fname, vis_img) diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/numbers.py b/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/numbers.py deleted file mode 100644 index abe5738fba1f11e21b2c44df0712128090ddfdfb..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/numbers.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle 
Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# number expansion is not that easy -import re - -import inflect - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words( - num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - """ Normalize numbers in English text. 
- """ - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text diff --git a/spaces/Andres99/Tune-A-Video-Training-UI/app.py b/spaces/Andres99/Tune-A-Video-Training-UI/app.py deleted file mode 100644 index 3e0b9a282fc42c71e6c0f8d7f238a79a9c53c697..0000000000000000000000000000000000000000 --- a/spaces/Andres99/Tune-A-Video-Training-UI/app.py +++ /dev/null @@ -1,84 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -from subprocess import getoutput - -import gradio as gr -import torch - -from app_inference import create_inference_demo -from app_training import create_training_demo -from app_upload import create_upload_demo -from inference import InferencePipeline -from trainer import Trainer - -TITLE = '# [Tune-A-Video](https://tuneavideo.github.io/) UI' - -ORIGINAL_SPACE_ID = 'Tune-A-Video-library/Tune-A-Video-Training-UI' -SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID) -GPU_DATA = getoutput('nvidia-smi') -SHARED_UI_WARNING = f'''## Attention - Training doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU. - -
    Duplicate Space
    -''' - -if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID: - SETTINGS = f'Settings' -else: - SETTINGS = 'Settings' - -INVALID_GPU_WARNING = f'''## Attention - the specified GPU is invalid. Training may not work. Make sure you have selected a `T4 GPU` for this task.''' - -CUDA_NOT_AVAILABLE_WARNING = f'''## Attention - Running on CPU. -
    -You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces. -You can use "T4 small/medium" to run this demo. -
    -''' - -HF_TOKEN_NOT_SPECIFIED_WARNING = f'''The environment variable `HF_TOKEN` is not specified. Feel free to specify your Hugging Face token with write permission if you don't want to manually provide it for every run. -
    -You can check and create your Hugging Face tokens here. -You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab. -
    -''' - -HF_TOKEN = os.getenv('HF_TOKEN') - - -def show_warning(warning_text: str) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Box(): - gr.Markdown(warning_text) - return demo - - -pipe = InferencePipeline(HF_TOKEN) -trainer = Trainer(HF_TOKEN) - -with gr.Blocks(css='style.css') as demo: - if SPACE_ID == ORIGINAL_SPACE_ID: - show_warning(SHARED_UI_WARNING) - elif not torch.cuda.is_available(): - show_warning(CUDA_NOT_AVAILABLE_WARNING) - elif (not 'T4' in GPU_DATA): - show_warning(INVALID_GPU_WARNING) - - gr.Markdown(TITLE) - with gr.Tabs(): - with gr.TabItem('Train'): - create_training_demo(trainer, pipe) - with gr.TabItem('Run'): - create_inference_demo(pipe, HF_TOKEN) - with gr.TabItem('Upload'): - gr.Markdown(''' - - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed. - ''') - create_upload_demo(HF_TOKEN) - - if not HF_TOKEN: - show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING) - -demo.queue(max_size=1).launch(share=False) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_text_interpolation.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_text_interpolation.py deleted file mode 100644 index 290f45317004182a6aeb0701c42d0fa65899c1ed..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_text_interpolation.py +++ /dev/null @@ -1,573 +0,0 @@ -import inspect -from typing import List, Optional, Tuple, Union - -import torch -from torch.nn import functional as F -from transformers import CLIPTextModelWithProjection, CLIPTokenizer -from transformers.models.clip.modeling_clip import CLIPTextModelOutput - -from diffusers import ( - DiffusionPipeline, - ImagePipelineOutput, - PriorTransformer, - UnCLIPScheduler, - UNet2DConditionModel, - UNet2DModel, -) -from diffusers.pipelines.unclip import UnCLIPTextProjModel -from diffusers.utils import is_accelerate_available, logging, randn_tensor - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def slerp(val, low, high): - """ - Find the interpolation point between the 'low' and 'high' values for the given 'val'. See https://en.wikipedia.org/wiki/Slerp for more details on the topic. - """ - low_norm = low / torch.norm(low) - high_norm = high / torch.norm(high) - omega = torch.acos((low_norm * high_norm)) - so = torch.sin(omega) - res = (torch.sin((1.0 - val) * omega) / so) * low + (torch.sin(val * omega) / so) * high - return res - - -class UnCLIPTextInterpolationPipeline(DiffusionPipeline): - - """ - Pipeline for prompt-to-prompt interpolation on CLIP text embeddings and using the UnCLIP / Dall-E to decode them to images. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - text_encoder ([`CLIPTextModelWithProjection`]): - Frozen text-encoder. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - prior ([`PriorTransformer`]): - The canonincal unCLIP prior to approximate the image embedding from the text embedding. - text_proj ([`UnCLIPTextProjModel`]): - Utility class to prepare and combine the embeddings before they are passed to the decoder. 
- decoder ([`UNet2DConditionModel`]): - The decoder to invert the image embedding into an image. - super_res_first ([`UNet2DModel`]): - Super resolution unet. Used in all but the last step of the super resolution diffusion process. - super_res_last ([`UNet2DModel`]): - Super resolution unet. Used in the last step of the super resolution diffusion process. - prior_scheduler ([`UnCLIPScheduler`]): - Scheduler used in the prior denoising process. Just a modified DDPMScheduler. - decoder_scheduler ([`UnCLIPScheduler`]): - Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. - super_res_scheduler ([`UnCLIPScheduler`]): - Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. - - """ - - prior: PriorTransformer - decoder: UNet2DConditionModel - text_proj: UnCLIPTextProjModel - text_encoder: CLIPTextModelWithProjection - tokenizer: CLIPTokenizer - super_res_first: UNet2DModel - super_res_last: UNet2DModel - - prior_scheduler: UnCLIPScheduler - decoder_scheduler: UnCLIPScheduler - super_res_scheduler: UnCLIPScheduler - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.__init__ - def __init__( - self, - prior: PriorTransformer, - decoder: UNet2DConditionModel, - text_encoder: CLIPTextModelWithProjection, - tokenizer: CLIPTokenizer, - text_proj: UnCLIPTextProjModel, - super_res_first: UNet2DModel, - super_res_last: UNet2DModel, - prior_scheduler: UnCLIPScheduler, - decoder_scheduler: UnCLIPScheduler, - super_res_scheduler: UnCLIPScheduler, - ): - super().__init__() - - self.register_modules( - prior=prior, - decoder=decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - text_proj=text_proj, - super_res_first=super_res_first, - super_res_last=super_res_last, - prior_scheduler=prior_scheduler, - decoder_scheduler=decoder_scheduler, - super_res_scheduler=super_res_scheduler, - ) - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents - def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - latents = latents * scheduler.init_noise_sigma - return latents - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None, - text_attention_mask: Optional[torch.Tensor] = None, - ): - if text_model_output is None: - batch_size = len(prompt) if isinstance(prompt, list) else 1 - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - text_mask = text_inputs.attention_mask.bool().to(device) - - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" 
{self.tokenizer.model_max_length} tokens: {removed_text}" - ) - text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] - - text_encoder_output = self.text_encoder(text_input_ids.to(device)) - - prompt_embeds = text_encoder_output.text_embeds - text_encoder_hidden_states = text_encoder_output.last_hidden_state - - else: - batch_size = text_model_output[0].shape[0] - prompt_embeds, text_encoder_hidden_states = text_model_output[0], text_model_output[1] - text_mask = text_attention_mask - - prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0) - text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) - text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0) - - if do_classifier_free_guidance: - uncond_tokens = [""] * batch_size - - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - uncond_text_mask = uncond_input.attention_mask.bool().to(device) - negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device)) - - negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds - uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len) - - seq_len = uncond_text_encoder_hidden_states.shape[1] - uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1) - uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view( - batch_size * num_images_per_prompt, seq_len, -1 - ) - uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0) - - # done duplicates - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states]) - - text_mask = torch.cat([uncond_text_mask, text_mask]) - - return prompt_embeds, text_encoder_hidden_states, text_mask - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.enable_sequential_cpu_offload - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's - models have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded to GPU only - when their specific submodule has its `forward` method called. 
- """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - # TODO: self.prior.post_process_latents is not covered by the offload hooks, so it fails if added to the list - models = [ - self.decoder, - self.text_proj, - self.text_encoder, - self.super_res_first, - self.super_res_last, - ] - for cpu_offloaded_model in models: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if self.device != torch.device("meta") or not hasattr(self.decoder, "_hf_hook"): - return self.device - for module in self.decoder.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - @torch.no_grad() - def __call__( - self, - start_prompt: str, - end_prompt: str, - steps: int = 5, - prior_num_inference_steps: int = 25, - decoder_num_inference_steps: int = 25, - super_res_num_inference_steps: int = 7, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prior_guidance_scale: float = 4.0, - decoder_guidance_scale: float = 8.0, - enable_sequential_cpu_offload=True, - gpu_id=0, - output_type: Optional[str] = "pil", - return_dict: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - start_prompt (`str`): - The prompt to start the image generation interpolation from. - end_prompt (`str`): - The prompt to end the image generation interpolation at. - steps (`int`, *optional*, defaults to 5): - The number of steps over which to interpolate from start_prompt to end_prompt. The pipeline returns - the same number of images as this value. - prior_num_inference_steps (`int`, *optional*, defaults to 25): - The number of denoising steps for the prior. More denoising steps usually lead to a higher quality - image at the expense of slower inference. - decoder_num_inference_steps (`int`, *optional*, defaults to 25): - The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality - image at the expense of slower inference. - super_res_num_inference_steps (`int`, *optional*, defaults to 7): - The number of denoising steps for super resolution. More denoising steps usually lead to a higher - quality image at the expense of slower inference. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - prior_guidance_scale (`float`, *optional*, defaults to 4.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. 
- decoder_guidance_scale (`float`, *optional*, defaults to 4.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - enable_sequential_cpu_offload (`bool`, *optional*, defaults to `True`): - If True, offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's - models have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded to GPU only - when their specific submodule has its `forward` method called. - gpu_id (`int`, *optional*, defaults to `0`): - The gpu_id to be passed to enable_sequential_cpu_offload. Only works when enable_sequential_cpu_offload is set to True. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - """ - - if not isinstance(start_prompt, str) or not isinstance(end_prompt, str): - raise ValueError( - f"`start_prompt` and `end_prompt` should be of type `str` but got {type(start_prompt)} and" - f" {type(end_prompt)} instead" - ) - - if enable_sequential_cpu_offload: - self.enable_sequential_cpu_offload(gpu_id=gpu_id) - - device = self._execution_device - - # Turn the prompts into embeddings. - inputs = self.tokenizer( - [start_prompt, end_prompt], - padding="max_length", - truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ) - inputs.to(device) - text_model_output = self.text_encoder(**inputs) - - text_attention_mask = torch.max(inputs.attention_mask[0], inputs.attention_mask[1]) - text_attention_mask = torch.cat([text_attention_mask.unsqueeze(0)] * steps).to(device) - - # Interpolate from the start to end prompt using slerp and add the generated images to an image output pipeline - batch_text_embeds = [] - batch_last_hidden_state = [] - - for interp_val in torch.linspace(0, 1, steps): - text_embeds = slerp(interp_val, text_model_output.text_embeds[0], text_model_output.text_embeds[1]) - last_hidden_state = slerp( - interp_val, text_model_output.last_hidden_state[0], text_model_output.last_hidden_state[1] - ) - batch_text_embeds.append(text_embeds.unsqueeze(0)) - batch_last_hidden_state.append(last_hidden_state.unsqueeze(0)) - - batch_text_embeds = torch.cat(batch_text_embeds) - batch_last_hidden_state = torch.cat(batch_last_hidden_state) - - text_model_output = CLIPTextModelOutput( - text_embeds=batch_text_embeds, last_hidden_state=batch_last_hidden_state - ) - - batch_size = text_model_output[0].shape[0] - - do_classifier_free_guidance = prior_guidance_scale > 1.0 or decoder_guidance_scale > 1.0 - - prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt( - prompt=None, - device=device, - num_images_per_prompt=1, - do_classifier_free_guidance=do_classifier_free_guidance, - text_model_output=text_model_output, - text_attention_mask=text_attention_mask, - ) - - # prior - - self.prior_scheduler.set_timesteps(prior_num_inference_steps, device=device) - prior_timesteps_tensor = 
self.prior_scheduler.timesteps - - embedding_dim = self.prior.config.embedding_dim - - prior_latents = self.prepare_latents( - (batch_size, embedding_dim), - prompt_embeds.dtype, - device, - generator, - None, - self.prior_scheduler, - ) - - for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([prior_latents] * 2) if do_classifier_free_guidance else prior_latents - - predicted_image_embedding = self.prior( - latent_model_input, - timestep=t, - proj_embedding=prompt_embeds, - encoder_hidden_states=text_encoder_hidden_states, - attention_mask=text_mask, - ).predicted_image_embedding - - if do_classifier_free_guidance: - predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2) - predicted_image_embedding = predicted_image_embedding_uncond + prior_guidance_scale * ( - predicted_image_embedding_text - predicted_image_embedding_uncond - ) - - if i + 1 == prior_timesteps_tensor.shape[0]: - prev_timestep = None - else: - prev_timestep = prior_timesteps_tensor[i + 1] - - prior_latents = self.prior_scheduler.step( - predicted_image_embedding, - timestep=t, - sample=prior_latents, - generator=generator, - prev_timestep=prev_timestep, - ).prev_sample - - prior_latents = self.prior.post_process_latents(prior_latents) - - image_embeddings = prior_latents - - # done prior - - # decoder - - text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj( - image_embeddings=image_embeddings, - prompt_embeds=prompt_embeds, - text_encoder_hidden_states=text_encoder_hidden_states, - do_classifier_free_guidance=do_classifier_free_guidance, - ) - - if device.type == "mps": - # HACK: MPS: There is a panic when padding bool tensors, - # so cast to int tensor for the pad and back to bool afterwards - text_mask = text_mask.type(torch.int) - decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1) - decoder_text_mask = decoder_text_mask.type(torch.bool) - else: - decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True) - - self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device) - decoder_timesteps_tensor = self.decoder_scheduler.timesteps - - num_channels_latents = self.decoder.config.in_channels - height = self.decoder.config.sample_size - width = self.decoder.config.sample_size - - decoder_latents = self.prepare_latents( - (batch_size, num_channels_latents, height, width), - text_encoder_hidden_states.dtype, - device, - generator, - None, - self.decoder_scheduler, - ) - - for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents - - noise_pred = self.decoder( - sample=latent_model_input, - timestep=t, - encoder_hidden_states=text_encoder_hidden_states, - class_labels=additive_clip_time_embeddings, - attention_mask=decoder_text_mask, - ).sample - - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1) - noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1) - noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, predicted_variance], 
dim=1) - - if i + 1 == decoder_timesteps_tensor.shape[0]: - prev_timestep = None - else: - prev_timestep = decoder_timesteps_tensor[i + 1] - - # compute the previous noisy sample x_t -> x_t-1 - decoder_latents = self.decoder_scheduler.step( - noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator - ).prev_sample - - decoder_latents = decoder_latents.clamp(-1, 1) - - image_small = decoder_latents - - # done decoder - - # super res - - self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device) - super_res_timesteps_tensor = self.super_res_scheduler.timesteps - - channels = self.super_res_first.config.in_channels // 2 - height = self.super_res_first.config.sample_size - width = self.super_res_first.config.sample_size - - super_res_latents = self.prepare_latents( - (batch_size, channels, height, width), - image_small.dtype, - device, - generator, - None, - self.super_res_scheduler, - ) - - if device.type == "mps": - # MPS does not support many interpolations - image_upscaled = F.interpolate(image_small, size=[height, width]) - else: - interpolate_antialias = {} - if "antialias" in inspect.signature(F.interpolate).parameters: - interpolate_antialias["antialias"] = True - - image_upscaled = F.interpolate( - image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias - ) - - for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)): - # no classifier free guidance - - if i == super_res_timesteps_tensor.shape[0] - 1: - unet = self.super_res_last - else: - unet = self.super_res_first - - latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1) - - noise_pred = unet( - sample=latent_model_input, - timestep=t, - ).sample - - if i + 1 == super_res_timesteps_tensor.shape[0]: - prev_timestep = None - else: - prev_timestep = super_res_timesteps_tensor[i + 1] - - # compute the previous noisy sample x_t -> x_t-1 - super_res_latents = self.super_res_scheduler.step( - noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator - ).prev_sample - - image = super_res_latents - # done super res - - # post processing - - image = image * 0.5 + 0.5 - image = image.clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/detr.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/detr.py deleted file mode 100644 index 5ff82a280daa0a015f662bdf2509fa11542d46d4..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/detr.py +++ /dev/null @@ -1,46 +0,0 @@ -from mmdet.core import bbox2result -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class DETR(SingleStageDetector): - r"""Implementation of `DETR: End-to-End Object Detection with - Transformers `_""" - - def __init__(self, - backbone, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(DETR, self).__init__(backbone, None, bbox_head, train_cfg, - test_cfg, pretrained) - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. 
- rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - batch_size = len(img_metas) - assert batch_size == 1, 'Currently only batch_size 1 for inference ' \ - f'mode is supported. Found batch_size {batch_size}.' - x = self.extract_feat(img) - outs = self.bbox_head(x, img_metas) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in bbox_list - ] - return bbox_results diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 5a1d29e480cb46a763cb17d2105b3f040153d417..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './fcn_r50-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet18_v1c', - backbone=dict(depth=18), - decode_head=dict( - in_channels=512, - channels=128, - ), - auxiliary_head=dict(in_channels=256, channels=64)) diff --git a/spaces/Andy1621/uniformerv2_demo/uniformerv2.py b/spaces/Andy1621/uniformerv2_demo/uniformerv2.py deleted file mode 100644 index 5ca7c3d511f4e3c2c8c6e89ace89e2ad8680d34f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformerv2_demo/uniformerv2.py +++ /dev/null @@ -1,510 +0,0 @@ -#!/usr/bin/env python -import os -from collections import OrderedDict - -from timm.models.layers import DropPath -import torch -from torch import nn -from torch.nn import MultiheadAttention -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint - - -MODEL_PATH = './' -_MODELS = { - "ViT-B/16": os.path.join(MODEL_PATH, "vit_b16.pth"), - "ViT-L/14": os.path.join(MODEL_PATH, "vit_l14.pth"), - "ViT-L/14_336": os.path.join(MODEL_PATH, "vit_l14_336.pth"), -} - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(1.702 * x) - - -class Local_MHRA(nn.Module): - def __init__(self, d_model, dw_reduction=1.5, pos_kernel_size=3): - super().__init__() - - padding = pos_kernel_size // 2 - re_d_model = int(d_model // dw_reduction) - self.pos_embed = nn.Sequential( - nn.BatchNorm3d(d_model), - nn.Conv3d(d_model, re_d_model, kernel_size=1, stride=1, padding=0), - nn.Conv3d(re_d_model, re_d_model, kernel_size=(pos_kernel_size, 1, 1), stride=(1, 1, 1), padding=(padding, 0, 0), groups=re_d_model), - nn.Conv3d(re_d_model, d_model, kernel_size=1, stride=1, padding=0), - ) - - # init zero - print('Init zero for Conv in pos_emb') - nn.init.constant_(self.pos_embed[3].weight, 0) - nn.init.constant_(self.pos_embed[3].bias, 0) - - def forward(self, x): - return self.pos_embed(x) - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, d_model, n_head, attn_mask=None, drop_path=0.0, - dw_reduction=1.5, no_lmhra=False, double_lmhra=True - ): - super().__init__() - - self.n_head = n_head - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - print(f'Drop path rate: {drop_path}') - - self.no_lmhra = no_lmhra - self.double_lmhra = double_lmhra - print(f'No L_MHRA: {no_lmhra}') - print(f'Double L_MHRA: {double_lmhra}') - if not no_lmhra: - self.lmhra1 = Local_MHRA(d_model, dw_reduction=dw_reduction) - if double_lmhra: - self.lmhra2 = Local_MHRA(d_model, dw_reduction=dw_reduction) - - # spatial - self.attn = MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x, T=8, use_checkpoint=False): - # x: 1+HW, NT, C - if not self.no_lmhra: - # Local MHRA - tmp_x = x[1:, :, :] - L, NT, C = tmp_x.shape - N = NT // T - H = W = int(L ** 0.5) - tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous() - tmp_x = tmp_x + self.drop_path(self.lmhra1(tmp_x)) - tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C) - x = torch.cat([x[:1, :, :], tmp_x], dim=0) - # MHSA - if use_checkpoint: - attn_out = checkpoint.checkpoint(self.attention, self.ln_1(x)) - x = x + self.drop_path(attn_out) - else: - x = x + self.drop_path(self.attention(self.ln_1(x))) - # Local MHRA - if not self.no_lmhra and self.double_lmhra: - tmp_x = x[1:, :, :] - tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous() - tmp_x = tmp_x + self.drop_path(self.lmhra2(tmp_x)) - tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C) - x = torch.cat([x[:1, :, :], tmp_x], dim=0) - # FFN - if use_checkpoint: - mlp_out = checkpoint.checkpoint(self.mlp, self.ln_2(x)) - x = x + self.drop_path(mlp_out) - else: - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Extractor(nn.Module): - def __init__( - self, d_model, n_head, attn_mask=None, - mlp_factor=4.0, dropout=0.0, drop_path=0.0, - ): - super().__init__() - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - print(f'Drop path rate: {drop_path}') - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = nn.LayerNorm(d_model) - d_mlp = round(mlp_factor * d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_mlp)), - ("gelu", QuickGELU()), - ("dropout", nn.Dropout(dropout)), - ("c_proj", nn.Linear(d_mlp, d_model)) - ])) - self.ln_2 = nn.LayerNorm(d_model) - self.ln_3 = nn.LayerNorm(d_model) - self.attn_mask = attn_mask - - # zero init - nn.init.xavier_uniform_(self.attn.in_proj_weight) - nn.init.constant_(self.attn.out_proj.weight, 0.) - nn.init.constant_(self.attn.out_proj.bias, 0.) - nn.init.xavier_uniform_(self.mlp[0].weight) - nn.init.constant_(self.mlp[-1].weight, 0.) - nn.init.constant_(self.mlp[-1].bias, 0.) 
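    # Note: because `attn.out_proj` and `mlp.c_proj` are zero-initialized above,
    # both residual branches of this block contribute nothing at initialization,
    # so a freshly constructed Extractor behaves as an identity map on its query
    # tokens. An illustrative check (hypothetical sizes, assuming the class as
    # defined here):
    #
    #     ext = Extractor(d_model=64, n_head=4)
    #     q  = torch.randn(1, 2, 64)    # (num_queries, batch, dim)
    #     kv = torch.randn(50, 2, 64)   # (num_keys/values, batch, dim)
    #     assert torch.allclose(ext(q, kv), q)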
- - def attention(self, x, y): - d_model = self.ln_1.weight.size(0) - q = (x @ self.attn.in_proj_weight[:d_model].T) + self.attn.in_proj_bias[:d_model] - - k = (y @ self.attn.in_proj_weight[d_model:-d_model].T) + self.attn.in_proj_bias[d_model:-d_model] - v = (y @ self.attn.in_proj_weight[-d_model:].T) + self.attn.in_proj_bias[-d_model:] - Tx, Ty, N = q.size(0), k.size(0), q.size(1) - q = q.view(Tx, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - k = k.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - v = v.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - aff = (q @ k.transpose(-2, -1) / (self.attn.head_dim ** 0.5)) - - aff = aff.softmax(dim=-1) - out = aff @ v - out = out.permute(2, 0, 1, 3).flatten(2) - out = self.attn.out_proj(out) - return out - - def forward(self, x, y): - x = x + self.drop_path(self.attention(self.ln_1(x), self.ln_3(y))) - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Transformer(nn.Module): - def __init__( - self, width, layers, heads, attn_mask=None, backbone_drop_path_rate=0., - use_checkpoint=False, checkpoint_num=[0], t_size=8, dw_reduction=2, - no_lmhra=False, double_lmhra=True, - return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], - n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, - ): - super().__init__() - self.T = t_size - self.return_list = return_list - # backbone - b_dpr = [x.item() for x in torch.linspace(0, backbone_drop_path_rate, layers)] - self.resblocks = nn.ModuleList([ - ResidualAttentionBlock( - width, heads, attn_mask, - drop_path=b_dpr[i], - dw_reduction=dw_reduction, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - ) for i in range(layers) - ]) - # checkpoint - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.n_layers = n_layers - print(f'Use checkpoint: {self.use_checkpoint}') - print(f'Checkpoint number: {self.checkpoint_num}') - - # global block - assert n_layers == len(return_list) - if n_layers > 0: - self.temporal_cls_token = nn.Parameter(torch.zeros(1, 1, n_dim)) - self.dpe = nn.ModuleList([ - nn.Conv3d(n_dim, n_dim, kernel_size=3, stride=1, padding=1, bias=True, groups=n_dim) - for i in range(n_layers) - ]) - for m in self.dpe: - nn.init.constant_(m.bias, 0.) 
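            # The n_layers global blocks are paired one-to-one with the backbone
            # layers listed in `return_list` (asserted above). Their stochastic
            # depth rates below grow linearly from 0 to `drop_path_rate`; for
            # example, drop_path_rate=0.3 with n_layers=4 would give
            # [0.0, 0.1, 0.2, 0.3] (illustrative values, not a prescribed config).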
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, n_layers)] - self.dec = nn.ModuleList([ - Extractor( - n_dim, n_head, mlp_factor=mlp_factor, - dropout=mlp_dropout[i], drop_path=dpr[i], - ) for i in range(n_layers) - ]) - self.balance = nn.Parameter(torch.zeros((n_dim))) - self.sigmoid = nn.Sigmoid() - # projection - self.proj = nn.Sequential( - nn.LayerNorm(n_dim), - nn.Dropout(cls_dropout), - nn.Linear(n_dim, num_classes), - ) - - def forward(self, x): - T_down = self.T - L, NT, C = x.shape - N = NT // T_down - H = W = int((L - 1) ** 0.5) - - if self.n_layers > 0: - cls_token = self.temporal_cls_token.repeat(1, N, 1) - - j = -1 - for i, resblock in enumerate(self.resblocks): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = resblock(x, self.T, use_checkpoint=True) - else: - x = resblock(x, T_down) - if i in self.return_list: - j += 1 - tmp_x = x.clone() - tmp_x = tmp_x.view(L, N, T_down, C) - # dpe - _, tmp_feats = tmp_x[:1], tmp_x[1:] - tmp_feats = tmp_feats.permute(1, 3, 2, 0).reshape(N, C, T_down, H, W) - tmp_feats = self.dpe[j](tmp_feats).view(N, C, T_down, L - 1).permute(3, 0, 2, 1).contiguous() - tmp_x[1:] = tmp_x[1:] + tmp_feats - # global block - tmp_x = tmp_x.permute(2, 0, 1, 3).flatten(0, 1) # T * L, N, C - cls_token = self.dec[j](cls_token, tmp_x) - - if self.n_layers > 0: - weight = self.sigmoid(self.balance) - residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C - return self.proj((1 - weight) * cls_token[0, :, :] + weight * residual) - else: - residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C - return self.proj(residual) - - -class VisionTransformer(nn.Module): - def __init__( - self, - # backbone - input_resolution, patch_size, width, layers, heads, output_dim, backbone_drop_path_rate=0., - use_checkpoint=False, checkpoint_num=[0], t_size=8, kernel_size=3, dw_reduction=1.5, - temporal_downsample=True, - no_lmhra=-False, double_lmhra=True, - # global block - return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], - n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, - ): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - padding = (kernel_size - 1) // 2 - if temporal_downsample: - self.conv1 = nn.Conv3d(3, width, (kernel_size, patch_size, patch_size), (2, patch_size, patch_size), (padding, 0, 0), bias=False) - t_size = t_size // 2 - else: - self.conv1 = nn.Conv3d(3, width, (1, patch_size, patch_size), (1, patch_size, patch_size), (0, 0, 0), bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer( - width, layers, heads, dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - use_checkpoint=use_checkpoint, checkpoint_num=checkpoint_num, t_size=t_size, - no_lmhra=no_lmhra, double_lmhra=double_lmhra, - return_list=return_list, n_layers=n_layers, n_dim=n_dim, n_head=n_head, - mlp_factor=mlp_factor, drop_path_rate=drop_path_rate, mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, num_classes=num_classes, - ) - - def forward(self, x): - x = self.conv1(x) # shape = [*, width, grid, grid] - N, C, T, H, W = x.shape - x = x.permute(0, 2, 3, 4, 1).reshape(N * T, H * W, C) - - x = torch.cat([self.class_embedding.to(x.dtype) + 
torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - out = self.transformer(x) - return out - - -def inflate_weight(weight_2d, time_dim, center=True): - print(f'Init center: {center}') - if center: - weight_3d = torch.zeros(*weight_2d.shape) - weight_3d = weight_3d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) - middle_idx = time_dim // 2 - weight_3d[:, :, middle_idx, :, :] = weight_2d - else: - weight_3d = weight_2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) - weight_3d = weight_3d / time_dim - return weight_3d - - -def load_state_dict(model, state_dict): - state_dict_3d = model.state_dict() - for k in state_dict.keys(): - if state_dict[k].shape != state_dict_3d[k].shape: - if len(state_dict_3d[k].shape) <= 2: - print(f'Ignore: {k}') - continue - print(f'Inflate: {k}, {state_dict[k].shape} => {state_dict_3d[k].shape}') - time_dim = state_dict_3d[k].shape[2] - state_dict[k] = inflate_weight(state_dict[k], time_dim) - model.load_state_dict(state_dict, strict=False) - - -def uniformerv2_b16( - pretrained=True, use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[8, 9, 10, 11], - n_layers=4, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=224, - patch_size=16, - width=768, - layers=12, - heads=12, - output_dim=512, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - temporal_downsample=temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-B/16"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -def uniformerv2_l14( - pretrained=True, use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[20, 21, 22, 23], - n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=224, - patch_size=14, - width=1024, - layers=24, - heads=16, - output_dim=768, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - temporal_downsample=temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-L/14"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -def uniformerv2_l14_336( - pretrained=True, 
use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - no_temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[20, 21, 22, 23], - n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=336, - patch_size=14, - width=1024, - layers=24, - heads=16, - output_dim=768, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - no_temporal_downsample=no_temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-L/14_336"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -if __name__ == '__main__': - import time - from fvcore.nn import FlopCountAnalysis - from fvcore.nn import flop_count_table - import numpy as np - - seed = 4217 - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - num_frames = 16 - - model = uniformerv2_l14( - pretrained=False, - t_size=num_frames, backbone_drop_path_rate=0., drop_path_rate=0., - dw_reduction=1.5, - no_lmhra=False, - temporal_downsample=True, - return_list=[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], - mlp_dropout=[0.5]*16, - n_layers=16 - ) - print(model) - - flops = FlopCountAnalysis(model, torch.rand(1, 3, num_frames, 224, 224)) - s = time.time() - print(flop_count_table(flops, max_depth=1)) - print(time.time()-s) \ No newline at end of file diff --git a/spaces/AntiUser/DeepDanbooru_string/app.py b/spaces/AntiUser/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/AntiUser/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - 
path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>
    " - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -

    PNG Info

    -""" - for key, text in items.items(): - info += f""" -
<div>
<p><b>{plaintext_to_html(str(key))}</b></p>
<p>{plaintext_to_html(str(text))}</p>
</div>
    -""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

<div><p>{message}</p></div>
    " - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/wandb_utils.py b/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/wandb_utils.py deleted file mode 100644 index 238f4edbf2a0ddf34c024fbb6775c71dd19e18aa..0000000000000000000000000000000000000000 --- a/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/wandb_utils.py +++ /dev/null @@ -1,589 +0,0 @@ -"""Utilities and tools for tracking runs with Weights & Biases.""" - -import logging -import os -import sys -from contextlib import contextmanager -from pathlib import Path -from typing import Dict - -import yaml -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -from utils.dataloaders import LoadImagesAndLabels, img2label_paths -from utils.general import LOGGER, check_dataset, check_file - -try: - import wandb - - assert hasattr(wandb, '__version__') # verify package import not local dir -except (ImportError, AssertionError): - wandb = None - -RANK = int(os.getenv('RANK', -1)) -WANDB_ARTIFACT_PREFIX = 'wandb-artifact://' - - -def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX): - return from_string[len(prefix):] - - -def check_wandb_config_file(data_config_file): - wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path - if Path(wandb_config).is_file(): - return wandb_config - return data_config_file - - -def check_wandb_dataset(data_file): - is_trainset_wandb_artifact = False - is_valset_wandb_artifact = False - if isinstance(data_file, dict): - # In that case another dataset manager has already processed it and we don't have to - return data_file - if check_file(data_file) and data_file.endswith('.yaml'): - with open(data_file, errors='ignore') as f: - data_dict = yaml.safe_load(f) - is_trainset_wandb_artifact = isinstance(data_dict['train'], - str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX) - is_valset_wandb_artifact = isinstance(data_dict['val'], - str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX) - if is_trainset_wandb_artifact or is_valset_wandb_artifact: - return data_dict - else: - return check_dataset(data_file) - - -def get_run_info(run_path): - run_path = Path(remove_prefix(run_path, 
WANDB_ARTIFACT_PREFIX)) - run_id = run_path.stem - project = run_path.parent.stem - entity = run_path.parent.parent.stem - model_artifact_name = 'run_' + run_id + '_model' - return entity, project, run_id, model_artifact_name - - -def check_wandb_resume(opt): - process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None - if isinstance(opt.resume, str): - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - if RANK not in [-1, 0]: # For resuming DDP runs - entity, project, run_id, model_artifact_name = get_run_info(opt.resume) - api = wandb.Api() - artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest') - modeldir = artifact.download() - opt.weights = str(Path(modeldir) / "last.pt") - return True - return None - - -def process_wandb_config_ddp_mode(opt): - with open(check_file(opt.data), errors='ignore') as f: - data_dict = yaml.safe_load(f) # data dict - train_dir, val_dir = None, None - if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias) - train_dir = train_artifact.download() - train_path = Path(train_dir) / 'data/images/' - data_dict['train'] = str(train_path) - - if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias) - val_dir = val_artifact.download() - val_path = Path(val_dir) / 'data/images/' - data_dict['val'] = str(val_path) - if train_dir or val_dir: - ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml') - with open(ddp_data_path, 'w') as f: - yaml.safe_dump(data_dict, f) - opt.data = ddp_data_path - - -class WandbLogger(): - """Log training runs, datasets, models, and predictions to Weights & Biases. - - This logger sends information to W&B at wandb.ai. By default, this information - includes hyperparameters, system configuration and metrics, model metrics, - and basic data metrics and analyses. - - By providing additional command line arguments to train.py, datasets, - models and predictions can also be logged. 
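    A rough usage sketch (illustrative only; in practice train.py drives this
    class through utils.loggers.Loggers, and the helper methods referenced here
    are defined further down in this file):

        wandb_logger = WandbLogger(opt, run_id=None, job_type='Training')
        wandb_logger.log({'train/box_loss': 0.05})   # buffered until end_epoch
        wandb_logger.end_epoch(best_result=False)    # flushes metrics to W&B
        wandb_logger.finish_run()                    # closes the wandb run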
- - For more on how this logger is used, see the Weights & Biases documentation: - https://docs.wandb.com/guides/integrations/yolov5 - """ - - def __init__(self, opt, run_id=None, job_type='Training'): - """ - - Initialize WandbLogger instance - - Upload dataset if opt.upload_dataset is True - - Setup training processes if job_type is 'Training' - - arguments: - opt (namespace) -- Commandline arguments for this run - run_id (str) -- Run ID of W&B run to be resumed - job_type (str) -- To set the job_type for this run - - """ - # Temporary-fix - if opt.upload_dataset: - opt.upload_dataset = False - # LOGGER.info("Uploading Dataset functionality is not being supported temporarily due to a bug.") - - # Pre-training routine -- - self.job_type = job_type - self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run - self.val_artifact, self.train_artifact = None, None - self.train_artifact_path, self.val_artifact_path = None, None - self.result_artifact = None - self.val_table, self.result_table = None, None - self.bbox_media_panel_images = [] - self.val_table_path_map = None - self.max_imgs_to_log = 16 - self.wandb_artifact_data_dict = None - self.data_dict = None - # It's more elegant to stick to 1 wandb.init call, - # but useful config data is overwritten in the WandbLogger's wandb.init call - if isinstance(opt.resume, str): # checks resume from artifact - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - entity, project, run_id, model_artifact_name = get_run_info(opt.resume) - model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name - assert wandb, 'install wandb to resume wandb runs' - # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config - self.wandb_run = wandb.init(id=run_id, - project=project, - entity=entity, - resume='allow', - allow_val_change=True) - opt.resume = model_artifact_name - elif self.wandb: - self.wandb_run = wandb.init(config=opt, - resume="allow", - project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem, - entity=opt.entity, - name=opt.name if opt.name != 'exp' else None, - job_type=job_type, - id=run_id, - allow_val_change=True) if not wandb.run else wandb.run - if self.wandb_run: - if self.job_type == 'Training': - if opt.upload_dataset: - if not opt.resume: - self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt) - - if isinstance(opt.data, dict): - # This means another dataset manager has already processed the dataset info (e.g. ClearML) - # and they will have stored the already processed dict in opt.data - self.data_dict = opt.data - elif opt.resume: - # resume from artifact - if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - self.data_dict = dict(self.wandb_run.config.data_dict) - else: # local resume - self.data_dict = check_wandb_dataset(opt.data) - else: - self.data_dict = check_wandb_dataset(opt.data) - self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict - - # write data_dict to config. useful for resuming from artifacts. Do this only when not resuming. 
- self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict}, allow_val_change=True) - self.setup_training(opt) - - if self.job_type == 'Dataset Creation': - self.wandb_run.config.update({"upload_dataset": True}) - self.data_dict = self.check_and_upload_dataset(opt) - - def check_and_upload_dataset(self, opt): - """ - Check if the dataset format is compatible and upload it as W&B artifact - - arguments: - opt (namespace)-- Commandline arguments for current run - - returns: - Updated dataset info dictionary where local dataset paths are replaced by WAND_ARFACT_PREFIX links. - """ - assert wandb, 'Install wandb to upload dataset' - config_path = self.log_dataset_artifact(opt.data, opt.single_cls, - 'YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem) - with open(config_path, errors='ignore') as f: - wandb_data_dict = yaml.safe_load(f) - return wandb_data_dict - - def setup_training(self, opt): - """ - Setup the necessary processes for training YOLO models: - - Attempt to download model checkpoint and dataset artifacts if opt.resume stats with WANDB_ARTIFACT_PREFIX - - Update data_dict, to contain info of previous run if resumed and the paths of dataset artifact if downloaded - - Setup log_dict, initialize bbox_interval - - arguments: - opt (namespace) -- commandline arguments for this run - - """ - self.log_dict, self.current_epoch = {}, 0 - self.bbox_interval = opt.bbox_interval - if isinstance(opt.resume, str): - modeldir, _ = self.download_model_artifact(opt) - if modeldir: - self.weights = Path(modeldir) / "last.pt" - config = self.wandb_run.config - opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp, opt.imgsz = str( - self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs,\ - config.hyp, config.imgsz - data_dict = self.data_dict - if self.val_artifact is None: # If --upload_dataset is set, use the existing artifact, don't download - self.train_artifact_path, self.train_artifact = self.download_dataset_artifact( - data_dict.get('train'), opt.artifact_alias) - self.val_artifact_path, self.val_artifact = self.download_dataset_artifact( - data_dict.get('val'), opt.artifact_alias) - - if self.train_artifact_path is not None: - train_path = Path(self.train_artifact_path) / 'data/images/' - data_dict['train'] = str(train_path) - if self.val_artifact_path is not None: - val_path = Path(self.val_artifact_path) / 'data/images/' - data_dict['val'] = str(val_path) - - if self.val_artifact is not None: - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - columns = ["epoch", "id", "ground truth", "prediction"] - columns.extend(self.data_dict['names']) - self.result_table = wandb.Table(columns) - self.val_table = self.val_artifact.get("val") - if self.val_table_path_map is None: - self.map_val_table_path() - if opt.bbox_interval == -1: - self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1 - if opt.evolve or opt.noplots: - self.bbox_interval = opt.bbox_interval = opt.epochs + 1 # disable bbox_interval - train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None - # Update the the data_dict to point to local artifacts dir - if train_from_artifact: - self.data_dict = data_dict - - def download_dataset_artifact(self, path, alias): - """ - download the model checkpoint artifact if the path starts with WANDB_ARTIFACT_PREFIX - - arguments: - path -- path of the dataset to be used for training - 
alias (str)-- alias of the artifact to be downloaded/used for training - - returns: - (str, wandb.Artifact) -- path of the downloaded dataset and its corresponding artifact object if the dataset - is found, otherwise returns (None, None) - """ - if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX): - artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias) - dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\", "/")) - assert dataset_artifact is not None, "'Error: W&B dataset artifact doesn\'t exist'" - datadir = dataset_artifact.download() - return datadir, dataset_artifact - return None, None - - def download_model_artifact(self, opt): - """ - download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX - - arguments: - opt (namespace) -- Commandline arguments for this run - """ - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest") - assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist' - modeldir = model_artifact.download() - # epochs_trained = model_artifact.metadata.get('epochs_trained') - total_epochs = model_artifact.metadata.get('total_epochs') - is_finished = total_epochs is None - assert not is_finished, 'training is finished, can only resume incomplete runs.' - return modeldir, model_artifact - return None, None - - def log_model(self, path, opt, epoch, fitness_score, best_model=False): - """ - Log the model checkpoint as W&B artifact - - arguments: - path (Path) -- Path of directory containing the checkpoints - opt (namespace) -- Command line arguments for this run - epoch (int) -- Current epoch number - fitness_score (float) -- fitness score for current epoch - best_model (boolean) -- Boolean representing if the current checkpoint is the best yet. - """ - model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', - type='model', - metadata={ - 'original_url': str(path), - 'epochs_trained': epoch + 1, - 'save period': opt.save_period, - 'project': opt.project, - 'total_epochs': opt.epochs, - 'fitness_score': fitness_score}) - model_artifact.add_file(str(path / 'last.pt'), name='last.pt') - wandb.log_artifact(model_artifact, - aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else '']) - LOGGER.info(f"Saving model artifact on epoch {epoch + 1}") - - def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False): - """ - Log the dataset as W&B artifact and return the new data file with W&B links - - arguments: - data_file (str) -- the .yaml file with information about the dataset like - path, classes etc. - single_cls (boolean) -- train multi-class data as single-class - project (str) -- project name. Used to construct the artifact path - overwrite_config (boolean) -- overwrites the data.yaml file if set to true otherwise creates a new - file with _wandb postfix. Eg -> data_wandb.yaml - - returns: - the new .yaml file with artifact links.
it can be used to start training directly from artifacts - """ - upload_dataset = self.wandb_run.config.upload_dataset - log_val_only = isinstance(upload_dataset, str) and upload_dataset == 'val' - self.data_dict = check_dataset(data_file) # parse and check - data = dict(self.data_dict) - nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names']) - names = {k: v for k, v in enumerate(names)} # to index dictionary - - # log train set - if not log_val_only: - self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(data['train'], rect=True, batch_size=1), - names, - name='train') if data.get('train') else None - if data.get('train'): - data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train') - - self.val_artifact = self.create_dataset_table( - LoadImagesAndLabels(data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None - if data.get('val'): - data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val') - - path = Path(data_file) - # create a _wandb.yaml file with artifacts links if both train and test set are logged - if not log_val_only: - path = (path.stem if overwrite_config else path.stem + '_wandb') + '.yaml' # updated data.yaml path - path = ROOT / 'data' / path - data.pop('download', None) - data.pop('path', None) - with open(path, 'w') as f: - yaml.safe_dump(data, f) - LOGGER.info(f"Created dataset config file {path}") - - if self.job_type == 'Training': # builds correct artifact pipeline graph - if not log_val_only: - self.wandb_run.log_artifact( - self.train_artifact) # calling use_artifact downloads the dataset. NOT NEEDED! - self.wandb_run.use_artifact(self.val_artifact) - self.val_artifact.wait() - self.val_table = self.val_artifact.get('val') - self.map_val_table_path() - else: - self.wandb_run.log_artifact(self.train_artifact) - self.wandb_run.log_artifact(self.val_artifact) - return path - - def map_val_table_path(self): - """ - Map the validation dataset Table like name of file -> it's id in the W&B Table. - Useful for - referencing artifacts for evaluation. - """ - self.val_table_path_map = {} - LOGGER.info("Mapping dataset") - for i, data in enumerate(tqdm(self.val_table.data)): - self.val_table_path_map[data[3]] = data[0] - - def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int, str], name: str = 'dataset'): - """ - Create and return W&B artifact containing W&B Table of the dataset. 
- - arguments: - dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table - class_to_id -- hash map that maps class ids to labels - name -- name of the artifact - - returns: - dataset artifact to be logged or used - """ - # TODO: Explore multiprocessing to split this loop in parallel; this is essential for speeding up the logging - artifact = wandb.Artifact(name=name, type="dataset") - img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None - img_files = tqdm(dataset.im_files) if not img_files else img_files - for img_file in img_files: - if Path(img_file).is_dir(): - artifact.add_dir(img_file, name='data/images') - labels_path = 'labels'.join(dataset.path.rsplit('images', 1)) - artifact.add_dir(labels_path, name='data/labels') - else: - artifact.add_file(img_file, name='data/images/' + Path(img_file).name) - label_file = Path(img2label_paths([img_file])[0]) - artifact.add_file(str(label_file), name='data/labels/' + - label_file.name) if label_file.exists() else None - table = wandb.Table(columns=["id", "train_image", "Classes", "name"]) - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()]) - for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)): - box_data, img_classes = [], {} - for cls, *xywh in labels[:, 1:].tolist(): - cls = int(cls) - box_data.append({ - "position": { - "middle": [xywh[0], xywh[1]], - "width": xywh[2], - "height": xywh[3]}, - "class_id": cls, - "box_caption": "%s" % (class_to_id[cls])}) - img_classes[cls] = class_to_id[cls] - boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space - table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()), - Path(paths).name) - artifact.add(table, name) - return artifact - - def log_training_progress(self, predn, path, names): - """ - Build evaluation Table. Uses reference from validation dataset table. - - arguments: - predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class] - path (str): local path of the current evaluation image - names (dict(int, str)): hash map that maps class ids to labels - """ - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()]) - box_data = [] - avg_conf_per_class = [0] * len(self.data_dict['names']) - pred_class_count = {} - for *xyxy, conf, cls in predn.tolist(): - if conf >= 0.25: - cls = int(cls) - box_data.append({ - "position": { - "minX": xyxy[0], - "minY": xyxy[1], - "maxX": xyxy[2], - "maxY": xyxy[3]}, - "class_id": cls, - "box_caption": f"{names[cls]} {conf:.3f}", - "scores": { - "class_score": conf}, - "domain": "pixel"}) - avg_conf_per_class[cls] += conf - - if cls in pred_class_count: - pred_class_count[cls] += 1 - else: - pred_class_count[cls] = 1 - - for pred_class in pred_class_count.keys(): - avg_conf_per_class[pred_class] = avg_conf_per_class[pred_class] / pred_class_count[pred_class] - - boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space - id = self.val_table_path_map[Path(path).name] - self.result_table.add_data(self.current_epoch, id, self.val_table.data[id][1], - wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set), - *avg_conf_per_class) - - def val_one_image(self, pred, predn, path, names, im): - """ - Log validation data for one image.
updates the result Table if validation dataset is uploaded and log bbox media panel - - arguments: - pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class] - predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class] - path (str): local path of the current evaluation image - """ - if self.val_table and self.result_table: # Log Table if Val dataset is uploaded as artifact - self.log_training_progress(predn, path, names) - - if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0: - if self.current_epoch % self.bbox_interval == 0: - box_data = [{ - "position": { - "minX": xyxy[0], - "minY": xyxy[1], - "maxX": xyxy[2], - "maxY": xyxy[3]}, - "class_id": int(cls), - "box_caption": f"{names[int(cls)]} {conf:.3f}", - "scores": { - "class_score": conf}, - "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()] - boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space - self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name)) - - def log(self, log_dict): - """ - save the metrics to the logging dictionary - - arguments: - log_dict (Dict) -- metrics/media to be logged in current step - """ - if self.wandb_run: - for key, value in log_dict.items(): - self.log_dict[key] = value - - def end_epoch(self, best_result=False): - """ - commit the log_dict, model artifacts and Tables to W&B and flush the log_dict. - - arguments: - best_result (boolean): Boolean representing if the result of this evaluation is best or not - """ - if self.wandb_run: - with all_logging_disabled(): - if self.bbox_media_panel_images: - self.log_dict["BoundingBoxDebugger"] = self.bbox_media_panel_images - try: - wandb.log(self.log_dict) - except BaseException as e: - LOGGER.info( - f"An error occurred in wandb logger. The training will proceed without interruption. More info\n{e}" - ) - self.wandb_run.finish() - self.wandb_run = None - - self.log_dict = {} - self.bbox_media_panel_images = [] - if self.result_artifact: - self.result_artifact.add(self.result_table, 'result') - wandb.log_artifact(self.result_artifact, - aliases=[ - 'latest', 'last', 'epoch ' + str(self.current_epoch), - ('best' if best_result else '')]) - - wandb.log({"evaluation": self.result_table}) - columns = ["epoch", "id", "ground truth", "prediction"] - columns.extend(self.data_dict['names']) - self.result_table = wandb.Table(columns) - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - - def finish_run(self): - """ - Log metrics if any and finish the current W&B run - """ - if self.wandb_run: - if self.log_dict: - with all_logging_disabled(): - wandb.log(self.log_dict) - wandb.run.finish() - - -@contextmanager -def all_logging_disabled(highest_level=logging.CRITICAL): - """ source - https://gist.github.com/simon-weber/7853144 - A context manager that will prevent any logging messages triggered during the body from being processed. - :param highest_level: the maximum logging level in use. - This would only need to be changed if a custom level greater than CRITICAL is defined. 
- """ - previous_level = logging.root.manager.disable - logging.disable(highest_level) - try: - yield - finally: - logging.disable(previous_level) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/util.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/util.py deleted file mode 100644 index dd01849d997e5ae9dc9809295e29ceb871b14216..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/util.py +++ /dev/null @@ -1,1932 +0,0 @@ -# -# Copyright (C) 2012-2021 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -import codecs -from collections import deque -import contextlib -import csv -from glob import iglob as std_iglob -import io -import json -import logging -import os -import py_compile -import re -import socket -try: - import ssl -except ImportError: # pragma: no cover - ssl = None -import subprocess -import sys -import tarfile -import tempfile -import textwrap - -try: - import threading -except ImportError: # pragma: no cover - import dummy_threading as threading -import time - -from . import DistlibException -from .compat import (string_types, text_type, shutil, raw_input, StringIO, - cache_from_source, urlopen, urljoin, httplib, xmlrpclib, - splittype, HTTPHandler, BaseConfigurator, valid_ident, - Container, configparser, URLError, ZipFile, fsdecode, - unquote, urlparse) - -logger = logging.getLogger(__name__) - -# -# Requirement parsing code as per PEP 508 -# - -IDENTIFIER = re.compile(r'^([\w\.-]+)\s*') -VERSION_IDENTIFIER = re.compile(r'^([\w\.*+-]+)\s*') -COMPARE_OP = re.compile(r'^(<=?|>=?|={2,3}|[~!]=)\s*') -MARKER_OP = re.compile(r'^((<=?)|(>=?)|={2,3}|[~!]=|in|not\s+in)\s*') -OR = re.compile(r'^or\b\s*') -AND = re.compile(r'^and\b\s*') -NON_SPACE = re.compile(r'(\S+)\s*') -STRING_CHUNK = re.compile(r'([\s\w\.{}()*+#:;,/?!~`@$%^&=|<>\[\]-]+)') - - -def parse_marker(marker_string): - """ - Parse a marker string and return a dictionary containing a marker expression. - - The dictionary will contain keys "op", "lhs" and "rhs" for non-terminals in - the expression grammar, or strings. A string contained in quotes is to be - interpreted as a literal string, and a string not contained in quotes is a - variable (such as os_name). 
- """ - def marker_var(remaining): - # either identifier, or literal string - m = IDENTIFIER.match(remaining) - if m: - result = m.groups()[0] - remaining = remaining[m.end():] - elif not remaining: - raise SyntaxError('unexpected end of input') - else: - q = remaining[0] - if q not in '\'"': - raise SyntaxError('invalid expression: %s' % remaining) - oq = '\'"'.replace(q, '') - remaining = remaining[1:] - parts = [q] - while remaining: - # either a string chunk, or oq, or q to terminate - if remaining[0] == q: - break - elif remaining[0] == oq: - parts.append(oq) - remaining = remaining[1:] - else: - m = STRING_CHUNK.match(remaining) - if not m: - raise SyntaxError('error in string literal: %s' % remaining) - parts.append(m.groups()[0]) - remaining = remaining[m.end():] - else: - s = ''.join(parts) - raise SyntaxError('unterminated string: %s' % s) - parts.append(q) - result = ''.join(parts) - remaining = remaining[1:].lstrip() # skip past closing quote - return result, remaining - - def marker_expr(remaining): - if remaining and remaining[0] == '(': - result, remaining = marker(remaining[1:].lstrip()) - if remaining[0] != ')': - raise SyntaxError('unterminated parenthesis: %s' % remaining) - remaining = remaining[1:].lstrip() - else: - lhs, remaining = marker_var(remaining) - while remaining: - m = MARKER_OP.match(remaining) - if not m: - break - op = m.groups()[0] - remaining = remaining[m.end():] - rhs, remaining = marker_var(remaining) - lhs = {'op': op, 'lhs': lhs, 'rhs': rhs} - result = lhs - return result, remaining - - def marker_and(remaining): - lhs, remaining = marker_expr(remaining) - while remaining: - m = AND.match(remaining) - if not m: - break - remaining = remaining[m.end():] - rhs, remaining = marker_expr(remaining) - lhs = {'op': 'and', 'lhs': lhs, 'rhs': rhs} - return lhs, remaining - - def marker(remaining): - lhs, remaining = marker_and(remaining) - while remaining: - m = OR.match(remaining) - if not m: - break - remaining = remaining[m.end():] - rhs, remaining = marker_and(remaining) - lhs = {'op': 'or', 'lhs': lhs, 'rhs': rhs} - return lhs, remaining - - return marker(marker_string) - - -def parse_requirement(req): - """ - Parse a requirement passed in as a string. Return a Container - whose attributes contain the various parts of the requirement. - """ - remaining = req.strip() - if not remaining or remaining.startswith('#'): - return None - m = IDENTIFIER.match(remaining) - if not m: - raise SyntaxError('name expected: %s' % remaining) - distname = m.groups()[0] - remaining = remaining[m.end():] - extras = mark_expr = versions = uri = None - if remaining and remaining[0] == '[': - i = remaining.find(']', 1) - if i < 0: - raise SyntaxError('unterminated extra: %s' % remaining) - s = remaining[1:i] - remaining = remaining[i + 1:].lstrip() - extras = [] - while s: - m = IDENTIFIER.match(s) - if not m: - raise SyntaxError('malformed extra: %s' % s) - extras.append(m.groups()[0]) - s = s[m.end():] - if not s: - break - if s[0] != ',': - raise SyntaxError('comma expected in extras: %s' % s) - s = s[1:].lstrip() - if not extras: - extras = None - if remaining: - if remaining[0] == '@': - # it's a URI - remaining = remaining[1:].lstrip() - m = NON_SPACE.match(remaining) - if not m: - raise SyntaxError('invalid URI: %s' % remaining) - uri = m.groups()[0] - t = urlparse(uri) - # there are issues with Python and URL parsing, so this test - # is a bit crude. See bpo-20271, bpo-23505. 
Python doesn't - # always parse invalid URLs correctly - it should raise - # exceptions for malformed URLs - if not (t.scheme and t.netloc): - raise SyntaxError('Invalid URL: %s' % uri) - remaining = remaining[m.end():].lstrip() - else: - - def get_versions(ver_remaining): - """ - Return a list of operator, version tuples if any are - specified, else None. - """ - m = COMPARE_OP.match(ver_remaining) - versions = None - if m: - versions = [] - while True: - op = m.groups()[0] - ver_remaining = ver_remaining[m.end():] - m = VERSION_IDENTIFIER.match(ver_remaining) - if not m: - raise SyntaxError('invalid version: %s' % ver_remaining) - v = m.groups()[0] - versions.append((op, v)) - ver_remaining = ver_remaining[m.end():] - if not ver_remaining or ver_remaining[0] != ',': - break - ver_remaining = ver_remaining[1:].lstrip() - # Some packages have a trailing comma which would break things - # See issue #148 - if not ver_remaining: - break - m = COMPARE_OP.match(ver_remaining) - if not m: - raise SyntaxError('invalid constraint: %s' % ver_remaining) - if not versions: - versions = None - return versions, ver_remaining - - if remaining[0] != '(': - versions, remaining = get_versions(remaining) - else: - i = remaining.find(')', 1) - if i < 0: - raise SyntaxError('unterminated parenthesis: %s' % remaining) - s = remaining[1:i] - remaining = remaining[i + 1:].lstrip() - # As a special diversion from PEP 508, allow a version number - # a.b.c in parentheses as a synonym for ~= a.b.c (because this - # is allowed in earlier PEPs) - if COMPARE_OP.match(s): - versions, _ = get_versions(s) - else: - m = VERSION_IDENTIFIER.match(s) - if not m: - raise SyntaxError('invalid constraint: %s' % s) - v = m.groups()[0] - s = s[m.end():].lstrip() - if s: - raise SyntaxError('invalid constraint: %s' % s) - versions = [('~=', v)] - - if remaining: - if remaining[0] != ';': - raise SyntaxError('invalid requirement: %s' % remaining) - remaining = remaining[1:].lstrip() - - mark_expr, remaining = parse_marker(remaining) - - if remaining and remaining[0] != '#': - raise SyntaxError('unexpected trailing data: %s' % remaining) - - if not versions: - rs = distname - else: - rs = '%s %s' % (distname, ', '.join(['%s %s' % con for con in versions])) - return Container(name=distname, extras=extras, constraints=versions, - marker=mark_expr, url=uri, requirement=rs) - - -def get_resources_dests(resources_root, rules): - """Find destinations for resources files""" - - def get_rel_path(root, path): - # normalizes and returns a lstripped-/-separated path - root = root.replace(os.path.sep, '/') - path = path.replace(os.path.sep, '/') - assert path.startswith(root) - return path[len(root):].lstrip('/') - - destinations = {} - for base, suffix, dest in rules: - prefix = os.path.join(resources_root, base) - for abs_base in iglob(prefix): - abs_glob = os.path.join(abs_base, suffix) - for abs_path in iglob(abs_glob): - resource_file = get_rel_path(resources_root, abs_path) - if dest is None: # remove the entry if it was here - destinations.pop(resource_file, None) - else: - rel_path = get_rel_path(abs_base, abs_path) - rel_dest = dest.replace(os.path.sep, '/').rstrip('/') - destinations[resource_file] = rel_dest + '/' + rel_path - return destinations - - -def in_venv(): - if hasattr(sys, 'real_prefix'): - # virtualenv venvs - result = True - else: - # PEP 405 venvs - result = sys.prefix != getattr(sys, 'base_prefix', sys.prefix) - return result - - -def get_executable(): -# The __PYVENV_LAUNCHER__ dance is apparently no longer needed, as 
-# changes to the stub launcher mean that sys.executable always points -# to the stub on OS X -# if sys.platform == 'darwin' and ('__PYVENV_LAUNCHER__' -# in os.environ): -# result = os.environ['__PYVENV_LAUNCHER__'] -# else: -# result = sys.executable -# return result - # Avoid normcasing: see issue #143 - # result = os.path.normcase(sys.executable) - result = sys.executable - if not isinstance(result, text_type): - result = fsdecode(result) - return result - - -def proceed(prompt, allowed_chars, error_prompt=None, default=None): - p = prompt - while True: - s = raw_input(p) - p = prompt - if not s and default: - s = default - if s: - c = s[0].lower() - if c in allowed_chars: - break - if error_prompt: - p = '%c: %s\n%s' % (c, error_prompt, prompt) - return c - - -def extract_by_key(d, keys): - if isinstance(keys, string_types): - keys = keys.split() - result = {} - for key in keys: - if key in d: - result[key] = d[key] - return result - -def read_exports(stream): - if sys.version_info[0] >= 3: - # needs to be a text stream - stream = codecs.getreader('utf-8')(stream) - # Try to load as JSON, falling back on legacy format - data = stream.read() - stream = StringIO(data) - try: - jdata = json.load(stream) - result = jdata['extensions']['python.exports']['exports'] - for group, entries in result.items(): - for k, v in entries.items(): - s = '%s = %s' % (k, v) - entry = get_export_entry(s) - assert entry is not None - entries[k] = entry - return result - except Exception: - stream.seek(0, 0) - - def read_stream(cp, stream): - if hasattr(cp, 'read_file'): - cp.read_file(stream) - else: - cp.readfp(stream) - - cp = configparser.ConfigParser() - try: - read_stream(cp, stream) - except configparser.MissingSectionHeaderError: - stream.close() - data = textwrap.dedent(data) - stream = StringIO(data) - read_stream(cp, stream) - - result = {} - for key in cp.sections(): - result[key] = entries = {} - for name, value in cp.items(key): - s = '%s = %s' % (name, value) - entry = get_export_entry(s) - assert entry is not None - #entry.dist = self - entries[name] = entry - return result - - -def write_exports(exports, stream): - if sys.version_info[0] >= 3: - # needs to be a text stream - stream = codecs.getwriter('utf-8')(stream) - cp = configparser.ConfigParser() - for k, v in exports.items(): - # TODO check k, v for valid values - cp.add_section(k) - for entry in v.values(): - if entry.suffix is None: - s = entry.prefix - else: - s = '%s:%s' % (entry.prefix, entry.suffix) - if entry.flags: - s = '%s [%s]' % (s, ', '.join(entry.flags)) - cp.set(k, entry.name, s) - cp.write(stream) - - -@contextlib.contextmanager -def tempdir(): - td = tempfile.mkdtemp() - try: - yield td - finally: - shutil.rmtree(td) - -@contextlib.contextmanager -def chdir(d): - cwd = os.getcwd() - try: - os.chdir(d) - yield - finally: - os.chdir(cwd) - - -@contextlib.contextmanager -def socket_timeout(seconds=15): - cto = socket.getdefaulttimeout() - try: - socket.setdefaulttimeout(seconds) - yield - finally: - socket.setdefaulttimeout(cto) - - -class cached_property(object): - def __init__(self, func): - self.func = func - #for attr in ('__name__', '__module__', '__doc__'): - # setattr(self, attr, getattr(func, attr, None)) - - def __get__(self, obj, cls=None): - if obj is None: - return self - value = self.func(obj) - object.__setattr__(obj, self.func.__name__, value) - #obj.__dict__[self.func.__name__] = value = self.func(obj) - return value - -def convert_path(pathname): - """Return 'pathname' as a name that will work on the 
native filesystem. - - The path is split on '/' and put back together again using the current - directory separator. Needed because filenames in the setup script are - always supplied in Unix style, and have to be converted to the local - convention before we can actually use them in the filesystem. Raises - ValueError on non-Unix-ish systems if 'pathname' either starts or - ends with a slash. - """ - if os.sep == '/': - return pathname - if not pathname: - return pathname - if pathname[0] == '/': - raise ValueError("path '%s' cannot be absolute" % pathname) - if pathname[-1] == '/': - raise ValueError("path '%s' cannot end with '/'" % pathname) - - paths = pathname.split('/') - while os.curdir in paths: - paths.remove(os.curdir) - if not paths: - return os.curdir - return os.path.join(*paths) - - -class FileOperator(object): - def __init__(self, dry_run=False): - self.dry_run = dry_run - self.ensured = set() - self._init_record() - - def _init_record(self): - self.record = False - self.files_written = set() - self.dirs_created = set() - - def record_as_written(self, path): - if self.record: - self.files_written.add(path) - - def newer(self, source, target): - """Tell if the target is newer than the source. - - Returns true if 'source' exists and is more recently modified than - 'target', or if 'source' exists and 'target' doesn't. - - Returns false if both exist and 'target' is the same age or younger - than 'source'. Raise PackagingFileError if 'source' does not exist. - - Note that this test is not very accurate: files created in the same - second will have the same "age". - """ - if not os.path.exists(source): - raise DistlibException("file '%r' does not exist" % - os.path.abspath(source)) - if not os.path.exists(target): - return True - - return os.stat(source).st_mtime > os.stat(target).st_mtime - - def copy_file(self, infile, outfile, check=True): - """Copy a file respecting dry-run and force flags. - """ - self.ensure_dir(os.path.dirname(outfile)) - logger.info('Copying %s to %s', infile, outfile) - if not self.dry_run: - msg = None - if check: - if os.path.islink(outfile): - msg = '%s is a symlink' % outfile - elif os.path.exists(outfile) and not os.path.isfile(outfile): - msg = '%s is a non-regular file' % outfile - if msg: - raise ValueError(msg + ' which would be overwritten') - shutil.copyfile(infile, outfile) - self.record_as_written(outfile) - - def copy_stream(self, instream, outfile, encoding=None): - assert not os.path.isdir(outfile) - self.ensure_dir(os.path.dirname(outfile)) - logger.info('Copying stream %s to %s', instream, outfile) - if not self.dry_run: - if encoding is None: - outstream = open(outfile, 'wb') - else: - outstream = codecs.open(outfile, 'w', encoding=encoding) - try: - shutil.copyfileobj(instream, outstream) - finally: - outstream.close() - self.record_as_written(outfile) - - def write_binary_file(self, path, data): - self.ensure_dir(os.path.dirname(path)) - if not self.dry_run: - if os.path.exists(path): - os.remove(path) - with open(path, 'wb') as f: - f.write(data) - self.record_as_written(path) - - def write_text_file(self, path, data, encoding): - self.write_binary_file(path, data.encode(encoding)) - - def set_mode(self, bits, mask, files): - if os.name == 'posix' or (os.name == 'java' and os._name == 'posix'): - # Set the executable bits (owner, group, and world) on - # all the files specified. 
- for f in files: - if self.dry_run: - logger.info("changing mode of %s", f) - else: - mode = (os.stat(f).st_mode | bits) & mask - logger.info("changing mode of %s to %o", f, mode) - os.chmod(f, mode) - - set_executable_mode = lambda s, f: s.set_mode(0o555, 0o7777, f) - - def ensure_dir(self, path): - path = os.path.abspath(path) - if path not in self.ensured and not os.path.exists(path): - self.ensured.add(path) - d, f = os.path.split(path) - self.ensure_dir(d) - logger.info('Creating %s' % path) - if not self.dry_run: - os.mkdir(path) - if self.record: - self.dirs_created.add(path) - - def byte_compile(self, path, optimize=False, force=False, prefix=None, hashed_invalidation=False): - dpath = cache_from_source(path, not optimize) - logger.info('Byte-compiling %s to %s', path, dpath) - if not self.dry_run: - if force or self.newer(path, dpath): - if not prefix: - diagpath = None - else: - assert path.startswith(prefix) - diagpath = path[len(prefix):] - compile_kwargs = {} - if hashed_invalidation and hasattr(py_compile, 'PycInvalidationMode'): - compile_kwargs['invalidation_mode'] = py_compile.PycInvalidationMode.CHECKED_HASH - py_compile.compile(path, dpath, diagpath, True, **compile_kwargs) # raise error - self.record_as_written(dpath) - return dpath - - def ensure_removed(self, path): - if os.path.exists(path): - if os.path.isdir(path) and not os.path.islink(path): - logger.debug('Removing directory tree at %s', path) - if not self.dry_run: - shutil.rmtree(path) - if self.record: - if path in self.dirs_created: - self.dirs_created.remove(path) - else: - if os.path.islink(path): - s = 'link' - else: - s = 'file' - logger.debug('Removing %s %s', s, path) - if not self.dry_run: - os.remove(path) - if self.record: - if path in self.files_written: - self.files_written.remove(path) - - def is_writable(self, path): - result = False - while not result: - if os.path.exists(path): - result = os.access(path, os.W_OK) - break - parent = os.path.dirname(path) - if parent == path: - break - path = parent - return result - - def commit(self): - """ - Commit recorded changes, turn off recording, return - changes. 
- """ - assert self.record - result = self.files_written, self.dirs_created - self._init_record() - return result - - def rollback(self): - if not self.dry_run: - for f in list(self.files_written): - if os.path.exists(f): - os.remove(f) - # dirs should all be empty now, except perhaps for - # __pycache__ subdirs - # reverse so that subdirs appear before their parents - dirs = sorted(self.dirs_created, reverse=True) - for d in dirs: - flist = os.listdir(d) - if flist: - assert flist == ['__pycache__'] - sd = os.path.join(d, flist[0]) - os.rmdir(sd) - os.rmdir(d) # should fail if non-empty - self._init_record() - -def resolve(module_name, dotted_path): - if module_name in sys.modules: - mod = sys.modules[module_name] - else: - mod = __import__(module_name) - if dotted_path is None: - result = mod - else: - parts = dotted_path.split('.') - result = getattr(mod, parts.pop(0)) - for p in parts: - result = getattr(result, p) - return result - - -class ExportEntry(object): - def __init__(self, name, prefix, suffix, flags): - self.name = name - self.prefix = prefix - self.suffix = suffix - self.flags = flags - - @cached_property - def value(self): - return resolve(self.prefix, self.suffix) - - def __repr__(self): # pragma: no cover - return '<ExportEntry %s = %s:%s %s>' % (self.name, self.prefix, - self.suffix, self.flags) - - def __eq__(self, other): - if not isinstance(other, ExportEntry): - result = False - else: - result = (self.name == other.name and - self.prefix == other.prefix and - self.suffix == other.suffix and - self.flags == other.flags) - return result - - __hash__ = object.__hash__ - - -ENTRY_RE = re.compile(r'''(?P<name>(\w|[-.+])+) - \s*=\s*(?P<callable>(\w+)([:\.]\w+)*) - \s*(\[\s*(?P<flags>[\w-]+(=\w+)?(,\s*\w+(=\w+)?)*)\s*\])? - ''', re.VERBOSE) - -def get_export_entry(specification): - m = ENTRY_RE.search(specification) - if not m: - result = None - if '[' in specification or ']' in specification: - raise DistlibException("Invalid specification " - "'%s'" % specification) - else: - d = m.groupdict() - name = d['name'] - path = d['callable'] - colons = path.count(':') - if colons == 0: - prefix, suffix = path, None - else: - if colons != 1: - raise DistlibException("Invalid specification " - "'%s'" % specification) - prefix, suffix = path.split(':') - flags = d['flags'] - if flags is None: - if '[' in specification or ']' in specification: - raise DistlibException("Invalid specification " - "'%s'" % specification) - flags = [] - else: - flags = [f.strip() for f in flags.split(',')] - result = ExportEntry(name, prefix, suffix, flags) - return result - - -def get_cache_base(suffix=None): - """ - Return the default base location for distlib caches. If the directory does - not exist, it is created. Use the suffix provided for the base directory, - and default to '.distlib' if it isn't provided. - - On Windows, if LOCALAPPDATA is defined in the environment, then it is - assumed to be a directory, and will be the parent directory of the result. - On POSIX, and on Windows if LOCALAPPDATA is not defined, the user's home - directory - using os.expanduser('~') - will be the parent directory of - the result. - - The result is just the directory '.distlib' in the parent directory as - determined above, or with the name specified with ``suffix``.
- """ - if suffix is None: - suffix = '.distlib' - if os.name == 'nt' and 'LOCALAPPDATA' in os.environ: - result = os.path.expandvars('$localappdata') - else: - # Assume posix, or old Windows - result = os.path.expanduser('~') - # we use 'isdir' instead of 'exists', because we want to - # fail if there's a file with that name - if os.path.isdir(result): - usable = os.access(result, os.W_OK) - if not usable: - logger.warning('Directory exists but is not writable: %s', result) - else: - try: - os.makedirs(result) - usable = True - except OSError: - logger.warning('Unable to create %s', result, exc_info=True) - usable = False - if not usable: - result = tempfile.mkdtemp() - logger.warning('Default location unusable, using %s', result) - return os.path.join(result, suffix) - - -def path_to_cache_dir(path): - """ - Convert an absolute path to a directory name for use in a cache. - - The algorithm used is: - - #. On Windows, any ``':'`` in the drive is replaced with ``'---'``. - #. Any occurrence of ``os.sep`` is replaced with ``'--'``. - #. ``'.cache'`` is appended. - """ - d, p = os.path.splitdrive(os.path.abspath(path)) - if d: - d = d.replace(':', '---') - p = p.replace(os.sep, '--') - return d + p + '.cache' - - -def ensure_slash(s): - if not s.endswith('/'): - return s + '/' - return s - - -def parse_credentials(netloc): - username = password = None - if '@' in netloc: - prefix, netloc = netloc.rsplit('@', 1) - if ':' not in prefix: - username = prefix - else: - username, password = prefix.split(':', 1) - if username: - username = unquote(username) - if password: - password = unquote(password) - return username, password, netloc - - -def get_process_umask(): - result = os.umask(0o22) - os.umask(result) - return result - -def is_string_sequence(seq): - result = True - i = None - for i, s in enumerate(seq): - if not isinstance(s, string_types): - result = False - break - assert i is not None - return result - -PROJECT_NAME_AND_VERSION = re.compile('([a-z0-9_]+([.-][a-z_][a-z0-9_]*)*)-' - '([a-z0-9_.+-]+)', re.I) -PYTHON_VERSION = re.compile(r'-py(\d\.?\d?)') - - -def split_filename(filename, project_name=None): - """ - Extract name, version, python version from a filename (no extension) - - Return name, version, pyver or None - """ - result = None - pyver = None - filename = unquote(filename).replace(' ', '-') - m = PYTHON_VERSION.search(filename) - if m: - pyver = m.group(1) - filename = filename[:m.start()] - if project_name and len(filename) > len(project_name) + 1: - m = re.match(re.escape(project_name) + r'\b', filename) - if m: - n = m.end() - result = filename[:n], filename[n + 1:], pyver - if result is None: - m = PROJECT_NAME_AND_VERSION.match(filename) - if m: - result = m.group(1), m.group(3), pyver - return result - -# Allow spaces in name because of legacy dists like "Twisted Core" -NAME_VERSION_RE = re.compile(r'(?P<name>[\w .-]+)\s*' - r'\(\s*(?P<ver>[^\s)]+)\)$') - -def parse_name_and_version(p): - """ - A utility method used to get name and version from a string. - - From e.g. a Provides-Dist value. - - :param p: A value in a form 'foo (1.0)' - :return: The name and version as a tuple.
- """ - m = NAME_VERSION_RE.match(p) - if not m: - raise DistlibException('Ill-formed name/version string: \'%s\'' % p) - d = m.groupdict() - return d['name'].strip().lower(), d['ver'] - -def get_extras(requested, available): - result = set() - requested = set(requested or []) - available = set(available or []) - if '*' in requested: - requested.remove('*') - result |= available - for r in requested: - if r == '-': - result.add(r) - elif r.startswith('-'): - unwanted = r[1:] - if unwanted not in available: - logger.warning('undeclared extra: %s' % unwanted) - if unwanted in result: - result.remove(unwanted) - else: - if r not in available: - logger.warning('undeclared extra: %s' % r) - result.add(r) - return result -# -# Extended metadata functionality -# - -def _get_external_data(url): - result = {} - try: - # urlopen might fail if it runs into redirections, - # because of Python issue #13696. Fixed in locators - # using a custom redirect handler. - resp = urlopen(url) - headers = resp.info() - ct = headers.get('Content-Type') - if not ct.startswith('application/json'): - logger.debug('Unexpected response for JSON request: %s', ct) - else: - reader = codecs.getreader('utf-8')(resp) - #data = reader.read().decode('utf-8') - #result = json.loads(data) - result = json.load(reader) - except Exception as e: - logger.exception('Failed to get external data for %s: %s', url, e) - return result - -_external_data_base_url = 'https://www.red-dove.com/pypi/projects/' - -def get_project_data(name): - url = '%s/%s/project.json' % (name[0].upper(), name) - url = urljoin(_external_data_base_url, url) - result = _get_external_data(url) - return result - -def get_package_data(name, version): - url = '%s/%s/package-%s.json' % (name[0].upper(), name, version) - url = urljoin(_external_data_base_url, url) - return _get_external_data(url) - - -class Cache(object): - """ - A class implementing a cache for resources that need to live in the file system - e.g. shared libraries. This class was moved from resources to here because it - could be used by other modules, e.g. the wheel module. - """ - - def __init__(self, base): - """ - Initialise an instance. - - :param base: The base directory where the cache should be located. - """ - # we use 'isdir' instead of 'exists', because we want to - # fail if there's a file with that name - if not os.path.isdir(base): # pragma: no cover - os.makedirs(base) - if (os.stat(base).st_mode & 0o77) != 0: - logger.warning('Directory \'%s\' is not private', base) - self.base = os.path.abspath(os.path.normpath(base)) - - def prefix_to_dir(self, prefix): - """ - Converts a resource prefix to a directory name in the cache. - """ - return path_to_cache_dir(prefix) - - def clear(self): - """ - Clear the cache. - """ - not_removed = [] - for fn in os.listdir(self.base): - fn = os.path.join(self.base, fn) - try: - if os.path.islink(fn) or os.path.isfile(fn): - os.remove(fn) - elif os.path.isdir(fn): - shutil.rmtree(fn) - except Exception: - not_removed.append(fn) - return not_removed - - -class EventMixin(object): - """ - A very simple publish/subscribe system. - """ - def __init__(self): - self._subscribers = {} - - def add(self, event, subscriber, append=True): - """ - Add a subscriber for an event. - - :param event: The name of an event. - :param subscriber: The subscriber to be added (and called when the - event is published). - :param append: Whether to append or prepend the subscriber to an - existing subscriber list for the event. 
- """ - subs = self._subscribers - if event not in subs: - subs[event] = deque([subscriber]) - else: - sq = subs[event] - if append: - sq.append(subscriber) - else: - sq.appendleft(subscriber) - - def remove(self, event, subscriber): - """ - Remove a subscriber for an event. - - :param event: The name of an event. - :param subscriber: The subscriber to be removed. - """ - subs = self._subscribers - if event not in subs: - raise ValueError('No subscribers: %r' % event) - subs[event].remove(subscriber) - - def get_subscribers(self, event): - """ - Return an iterator for the subscribers for an event. - :param event: The event to return subscribers for. - """ - return iter(self._subscribers.get(event, ())) - - def publish(self, event, *args, **kwargs): - """ - Publish a event and return a list of values returned by its - subscribers. - - :param event: The event to publish. - :param args: The positional arguments to pass to the event's - subscribers. - :param kwargs: The keyword arguments to pass to the event's - subscribers. - """ - result = [] - for subscriber in self.get_subscribers(event): - try: - value = subscriber(event, *args, **kwargs) - except Exception: - logger.exception('Exception during event publication') - value = None - result.append(value) - logger.debug('publish %s: args = %s, kwargs = %s, result = %s', - event, args, kwargs, result) - return result - -# -# Simple sequencing -# -class Sequencer(object): - def __init__(self): - self._preds = {} - self._succs = {} - self._nodes = set() # nodes with no preds/succs - - def add_node(self, node): - self._nodes.add(node) - - def remove_node(self, node, edges=False): - if node in self._nodes: - self._nodes.remove(node) - if edges: - for p in set(self._preds.get(node, ())): - self.remove(p, node) - for s in set(self._succs.get(node, ())): - self.remove(node, s) - # Remove empties - for k, v in list(self._preds.items()): - if not v: - del self._preds[k] - for k, v in list(self._succs.items()): - if not v: - del self._succs[k] - - def add(self, pred, succ): - assert pred != succ - self._preds.setdefault(succ, set()).add(pred) - self._succs.setdefault(pred, set()).add(succ) - - def remove(self, pred, succ): - assert pred != succ - try: - preds = self._preds[succ] - succs = self._succs[pred] - except KeyError: # pragma: no cover - raise ValueError('%r not a successor of anything' % succ) - try: - preds.remove(pred) - succs.remove(succ) - except KeyError: # pragma: no cover - raise ValueError('%r not a successor of %r' % (succ, pred)) - - def is_step(self, step): - return (step in self._preds or step in self._succs or - step in self._nodes) - - def get_steps(self, final): - if not self.is_step(final): - raise ValueError('Unknown: %r' % final) - result = [] - todo = [] - seen = set() - todo.append(final) - while todo: - step = todo.pop(0) - if step in seen: - # if a step was already seen, - # move it to the end (so it will appear earlier - # when reversed on return) ... 
but not for the - # final step, as that would be confusing for - # users - if step != final: - result.remove(step) - result.append(step) - else: - seen.add(step) - result.append(step) - preds = self._preds.get(step, ()) - todo.extend(preds) - return reversed(result) - - @property - def strong_connections(self): - #http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm - index_counter = [0] - stack = [] - lowlinks = {} - index = {} - result = [] - - graph = self._succs - - def strongconnect(node): - # set the depth index for this node to the smallest unused index - index[node] = index_counter[0] - lowlinks[node] = index_counter[0] - index_counter[0] += 1 - stack.append(node) - - # Consider successors - try: - successors = graph[node] - except Exception: - successors = [] - for successor in successors: - if successor not in lowlinks: - # Successor has not yet been visited - strongconnect(successor) - lowlinks[node] = min(lowlinks[node],lowlinks[successor]) - elif successor in stack: - # the successor is in the stack and hence in the current - # strongly connected component (SCC) - lowlinks[node] = min(lowlinks[node],index[successor]) - - # If `node` is a root node, pop the stack and generate an SCC - if lowlinks[node] == index[node]: - connected_component = [] - - while True: - successor = stack.pop() - connected_component.append(successor) - if successor == node: break - component = tuple(connected_component) - # storing the result - result.append(component) - - for node in graph: - if node not in lowlinks: - strongconnect(node) - - return result - - @property - def dot(self): - result = ['digraph G {'] - for succ in self._preds: - preds = self._preds[succ] - for pred in preds: - result.append(' %s -> %s;' % (pred, succ)) - for node in self._nodes: - result.append(' %s;' % node) - result.append('}') - return '\n'.join(result) - -# -# Unarchiving functionality for zip, tar, tgz, tbz, whl -# - -ARCHIVE_EXTENSIONS = ('.tar.gz', '.tar.bz2', '.tar', '.zip', - '.tgz', '.tbz', '.whl') - -def unarchive(archive_filename, dest_dir, format=None, check=True): - - def check_path(path): - if not isinstance(path, text_type): - path = path.decode('utf-8') - p = os.path.abspath(os.path.join(dest_dir, path)) - if not p.startswith(dest_dir) or p[plen] != os.sep: - raise ValueError('path outside destination: %r' % p) - - dest_dir = os.path.abspath(dest_dir) - plen = len(dest_dir) - archive = None - if format is None: - if archive_filename.endswith(('.zip', '.whl')): - format = 'zip' - elif archive_filename.endswith(('.tar.gz', '.tgz')): - format = 'tgz' - mode = 'r:gz' - elif archive_filename.endswith(('.tar.bz2', '.tbz')): - format = 'tbz' - mode = 'r:bz2' - elif archive_filename.endswith('.tar'): - format = 'tar' - mode = 'r' - else: # pragma: no cover - raise ValueError('Unknown format for %r' % archive_filename) - try: - if format == 'zip': - archive = ZipFile(archive_filename, 'r') - if check: - names = archive.namelist() - for name in names: - check_path(name) - else: - archive = tarfile.open(archive_filename, mode) - if check: - names = archive.getnames() - for name in names: - check_path(name) - if format != 'zip' and sys.version_info[0] < 3: - # See Python issue 17153. If the dest path contains Unicode, - # tarfile extraction fails on Python 2.x if a member path name - # contains non-ASCII characters - it leads to an implicit - # bytes -> unicode conversion using ASCII to decode. 
- for tarinfo in archive.getmembers(): - if not isinstance(tarinfo.name, text_type): - tarinfo.name = tarinfo.name.decode('utf-8') - archive.extractall(dest_dir) - - finally: - if archive: - archive.close() - - -def zip_dir(directory): - """zip a directory tree into a BytesIO object""" - result = io.BytesIO() - dlen = len(directory) - with ZipFile(result, "w") as zf: - for root, dirs, files in os.walk(directory): - for name in files: - full = os.path.join(root, name) - rel = root[dlen:] - dest = os.path.join(rel, name) - zf.write(full, dest) - return result - -# -# Simple progress bar -# - -UNITS = ('', 'K', 'M', 'G','T','P') - - -class Progress(object): - unknown = 'UNKNOWN' - - def __init__(self, minval=0, maxval=100): - assert maxval is None or maxval >= minval - self.min = self.cur = minval - self.max = maxval - self.started = None - self.elapsed = 0 - self.done = False - - def update(self, curval): - assert self.min <= curval - assert self.max is None or curval <= self.max - self.cur = curval - now = time.time() - if self.started is None: - self.started = now - else: - self.elapsed = now - self.started - - def increment(self, incr): - assert incr >= 0 - self.update(self.cur + incr) - - def start(self): - self.update(self.min) - return self - - def stop(self): - if self.max is not None: - self.update(self.max) - self.done = True - - @property - def maximum(self): - return self.unknown if self.max is None else self.max - - @property - def percentage(self): - if self.done: - result = '100 %' - elif self.max is None: - result = ' ?? %' - else: - v = 100.0 * (self.cur - self.min) / (self.max - self.min) - result = '%3d %%' % v - return result - - def format_duration(self, duration): - if (duration <= 0) and self.max is None or self.cur == self.min: - result = '??:??:??' 
- #elif duration < 1: - # result = '--:--:--' - else: - result = time.strftime('%H:%M:%S', time.gmtime(duration)) - return result - - @property - def ETA(self): - if self.done: - prefix = 'Done' - t = self.elapsed - #import pdb; pdb.set_trace() - else: - prefix = 'ETA ' - if self.max is None: - t = -1 - elif self.elapsed == 0 or (self.cur == self.min): - t = 0 - else: - #import pdb; pdb.set_trace() - t = float(self.max - self.min) - t /= self.cur - self.min - t = (t - 1) * self.elapsed - return '%s: %s' % (prefix, self.format_duration(t)) - - @property - def speed(self): - if self.elapsed == 0: - result = 0.0 - else: - result = (self.cur - self.min) / self.elapsed - for unit in UNITS: - if result < 1000: - break - result /= 1000.0 - return '%d %sB/s' % (result, unit) - -# -# Glob functionality -# - -RICH_GLOB = re.compile(r'\{([^}]*)\}') -_CHECK_RECURSIVE_GLOB = re.compile(r'[^/\\,{]\*\*|\*\*[^/\\,}]') -_CHECK_MISMATCH_SET = re.compile(r'^[^{]*\}|\{[^}]*$') - - -def iglob(path_glob): - """Extended globbing function that supports ** and {opt1,opt2,opt3}.""" - if _CHECK_RECURSIVE_GLOB.search(path_glob): - msg = """invalid glob %r: recursive glob "**" must be used alone""" - raise ValueError(msg % path_glob) - if _CHECK_MISMATCH_SET.search(path_glob): - msg = """invalid glob %r: mismatching set marker '{' or '}'""" - raise ValueError(msg % path_glob) - return _iglob(path_glob) - - -def _iglob(path_glob): - rich_path_glob = RICH_GLOB.split(path_glob, 1) - if len(rich_path_glob) > 1: - assert len(rich_path_glob) == 3, rich_path_glob - prefix, set, suffix = rich_path_glob - for item in set.split(','): - for path in _iglob(''.join((prefix, item, suffix))): - yield path - else: - if '**' not in path_glob: - for item in std_iglob(path_glob): - yield item - else: - prefix, radical = path_glob.split('**', 1) - if prefix == '': - prefix = '.' 
- if radical == '': - radical = '*' - else: - # we support both - radical = radical.lstrip('/') - radical = radical.lstrip('\\') - for path, dir, files in os.walk(prefix): - path = os.path.normpath(path) - for fn in _iglob(os.path.join(path, radical)): - yield fn - -if ssl: - from .compat import (HTTPSHandler as BaseHTTPSHandler, match_hostname, - CertificateError) - - -# -# HTTPSConnection which verifies certificates/matches domains -# - - class HTTPSConnection(httplib.HTTPSConnection): - ca_certs = None # set this to the path to the certs file (.pem) - check_domain = True # only used if ca_certs is not None - - # noinspection PyPropertyAccess - def connect(self): - sock = socket.create_connection((self.host, self.port), self.timeout) - if getattr(self, '_tunnel_host', False): - self.sock = sock - self._tunnel() - - context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) - if hasattr(ssl, 'OP_NO_SSLv2'): - context.options |= ssl.OP_NO_SSLv2 - if self.cert_file: - context.load_cert_chain(self.cert_file, self.key_file) - kwargs = {} - if self.ca_certs: - context.verify_mode = ssl.CERT_REQUIRED - context.load_verify_locations(cafile=self.ca_certs) - if getattr(ssl, 'HAS_SNI', False): - kwargs['server_hostname'] = self.host - - self.sock = context.wrap_socket(sock, **kwargs) - if self.ca_certs and self.check_domain: - try: - match_hostname(self.sock.getpeercert(), self.host) - logger.debug('Host verified: %s', self.host) - except CertificateError: # pragma: no cover - self.sock.shutdown(socket.SHUT_RDWR) - self.sock.close() - raise - - class HTTPSHandler(BaseHTTPSHandler): - def __init__(self, ca_certs, check_domain=True): - BaseHTTPSHandler.__init__(self) - self.ca_certs = ca_certs - self.check_domain = check_domain - - def _conn_maker(self, *args, **kwargs): - """ - This is called to create a connection instance. Normally you'd - pass a connection class to do_open, but it doesn't actually check for - a class, and just expects a callable. As long as we behave just as a - constructor would have, we should be OK. If it ever changes so that - we *must* pass a class, we'll create an UnsafeHTTPSConnection class - which just sets check_domain to False in the class definition, and - choose which one to pass to do_open. - """ - result = HTTPSConnection(*args, **kwargs) - if self.ca_certs: - result.ca_certs = self.ca_certs - result.check_domain = self.check_domain - return result - - def https_open(self, req): - try: - return self.do_open(self._conn_maker, req) - except URLError as e: - if 'certificate verify failed' in str(e.reason): - raise CertificateError('Unable to verify server certificate ' - 'for %s' % req.host) - else: - raise - - # - # To prevent against mixing HTTP traffic with HTTPS (examples: A Man-In-The- - # Middle proxy using HTTP listens on port 443, or an index mistakenly serves - # HTML containing a http://xyz link when it should be https://xyz), - # you can use the following handler class, which does not allow HTTP traffic. - # - # It works by inheriting from HTTPHandler - so build_opener won't add a - # handler for HTTP itself. 
- # - class HTTPSOnlyHandler(HTTPSHandler, HTTPHandler): - def http_open(self, req): - raise URLError('Unexpected HTTP request on what should be a secure ' - 'connection: %s' % req) - -# -# XML-RPC with timeouts -# -class Transport(xmlrpclib.Transport): - def __init__(self, timeout, use_datetime=0): - self.timeout = timeout - xmlrpclib.Transport.__init__(self, use_datetime) - - def make_connection(self, host): - h, eh, x509 = self.get_host_info(host) - if not self._connection or host != self._connection[0]: - self._extra_headers = eh - self._connection = host, httplib.HTTPConnection(h) - return self._connection[1] - -if ssl: - class SafeTransport(xmlrpclib.SafeTransport): - def __init__(self, timeout, use_datetime=0): - self.timeout = timeout - xmlrpclib.SafeTransport.__init__(self, use_datetime) - - def make_connection(self, host): - h, eh, kwargs = self.get_host_info(host) - if not kwargs: - kwargs = {} - kwargs['timeout'] = self.timeout - if not self._connection or host != self._connection[0]: - self._extra_headers = eh - self._connection = host, httplib.HTTPSConnection(h, None, - **kwargs) - return self._connection[1] - - -class ServerProxy(xmlrpclib.ServerProxy): - def __init__(self, uri, **kwargs): - self.timeout = timeout = kwargs.pop('timeout', None) - # The above classes only come into play if a timeout - # is specified - if timeout is not None: - # scheme = splittype(uri) # deprecated as of Python 3.8 - scheme = urlparse(uri)[0] - use_datetime = kwargs.get('use_datetime', 0) - if scheme == 'https': - tcls = SafeTransport - else: - tcls = Transport - kwargs['transport'] = t = tcls(timeout, use_datetime=use_datetime) - self.transport = t - xmlrpclib.ServerProxy.__init__(self, uri, **kwargs) - -# -# CSV functionality. This is provided because on 2.x, the csv module can't -# handle Unicode. However, we need to deal with Unicode in e.g. RECORD files. -# - -def _csv_open(fn, mode, **kwargs): - if sys.version_info[0] < 3: - mode += 'b' - else: - kwargs['newline'] = '' - # Python 3 determines encoding from locale. 
Force 'utf-8' - # file encoding to match other forced utf-8 encoding - kwargs['encoding'] = 'utf-8' - return open(fn, mode, **kwargs) - - -class CSVBase(object): - defaults = { - 'delimiter': str(','), # The strs are used because we need native - 'quotechar': str('"'), # str in the csv API (2.x won't take - 'lineterminator': str('\n') # Unicode) - } - - def __enter__(self): - return self - - def __exit__(self, *exc_info): - self.stream.close() - - -class CSVReader(CSVBase): - def __init__(self, **kwargs): - if 'stream' in kwargs: - stream = kwargs['stream'] - if sys.version_info[0] >= 3: - # needs to be a text stream - stream = codecs.getreader('utf-8')(stream) - self.stream = stream - else: - self.stream = _csv_open(kwargs['path'], 'r') - self.reader = csv.reader(self.stream, **self.defaults) - - def __iter__(self): - return self - - def next(self): - result = next(self.reader) - if sys.version_info[0] < 3: - for i, item in enumerate(result): - if not isinstance(item, text_type): - result[i] = item.decode('utf-8') - return result - - __next__ = next - -class CSVWriter(CSVBase): - def __init__(self, fn, **kwargs): - self.stream = _csv_open(fn, 'w') - self.writer = csv.writer(self.stream, **self.defaults) - - def writerow(self, row): - if sys.version_info[0] < 3: - r = [] - for item in row: - if isinstance(item, text_type): - item = item.encode('utf-8') - r.append(item) - row = r - self.writer.writerow(row) - -# -# Configurator functionality -# - -class Configurator(BaseConfigurator): - - value_converters = dict(BaseConfigurator.value_converters) - value_converters['inc'] = 'inc_convert' - - def __init__(self, config, base=None): - super(Configurator, self).__init__(config) - self.base = base or os.getcwd() - - def configure_custom(self, config): - def convert(o): - if isinstance(o, (list, tuple)): - result = type(o)([convert(i) for i in o]) - elif isinstance(o, dict): - if '()' in o: - result = self.configure_custom(o) - else: - result = {} - for k in o: - result[k] = convert(o[k]) - else: - result = self.convert(o) - return result - - c = config.pop('()') - if not callable(c): - c = self.resolve(c) - props = config.pop('.', None) - # Check for valid identifiers - args = config.pop('[]', ()) - if args: - args = tuple([convert(o) for o in args]) - items = [(k, convert(config[k])) for k in config if valid_ident(k)] - kwargs = dict(items) - result = c(*args, **kwargs) - if props: - for n, v in props.items(): - setattr(result, n, convert(v)) - return result - - def __getitem__(self, key): - result = self.config[key] - if isinstance(result, dict) and '()' in result: - self.config[key] = result = self.configure_custom(result) - return result - - def inc_convert(self, value): - """Default converter for the inc:// protocol.""" - if not os.path.isabs(value): - value = os.path.join(self.base, value) - with codecs.open(value, 'r', encoding='utf-8') as f: - result = json.load(f) - return result - - -class SubprocessMixin(object): - """ - Mixin for running subprocesses and capturing their output - """ - def __init__(self, verbose=False, progress=None): - self.verbose = verbose - self.progress = progress - - def reader(self, stream, context): - """ - Read lines from a subprocess' output stream and either pass to a progress - callable (if specified) or write progress information to sys.stderr. 
- """ - progress = self.progress - verbose = self.verbose - while True: - s = stream.readline() - if not s: - break - if progress is not None: - progress(s, context) - else: - if not verbose: - sys.stderr.write('.') - else: - sys.stderr.write(s.decode('utf-8')) - sys.stderr.flush() - stream.close() - - def run_command(self, cmd, **kwargs): - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, - stderr=subprocess.PIPE, **kwargs) - t1 = threading.Thread(target=self.reader, args=(p.stdout, 'stdout')) - t1.start() - t2 = threading.Thread(target=self.reader, args=(p.stderr, 'stderr')) - t2.start() - p.wait() - t1.join() - t2.join() - if self.progress is not None: - self.progress('done.', 'main') - elif self.verbose: - sys.stderr.write('done.\n') - return p - - -def normalize_name(name): - """Normalize a python package name a la PEP 503""" - # https://www.python.org/dev/peps/pep-0503/#normalized-names - return re.sub('[-_.]+', '-', name).lower() - -# def _get_pypirc_command(): - # """ - # Get the distutils command for interacting with PyPI configurations. - # :return: the command. - # """ - # from distutils.core import Distribution - # from distutils.config import PyPIRCCommand - # d = Distribution() - # return PyPIRCCommand(d) - -class PyPIRCFile(object): - - DEFAULT_REPOSITORY = 'https://upload.pypi.org/legacy/' - DEFAULT_REALM = 'pypi' - - def __init__(self, fn=None, url=None): - if fn is None: - fn = os.path.join(os.path.expanduser('~'), '.pypirc') - self.filename = fn - self.url = url - - def read(self): - result = {} - - if os.path.exists(self.filename): - repository = self.url or self.DEFAULT_REPOSITORY - - config = configparser.RawConfigParser() - config.read(self.filename) - sections = config.sections() - if 'distutils' in sections: - # let's get the list of servers - index_servers = config.get('distutils', 'index-servers') - _servers = [server.strip() for server in - index_servers.split('\n') - if server.strip() != ''] - if _servers == []: - # nothing set, let's try to get the default pypi - if 'pypi' in sections: - _servers = ['pypi'] - else: - for server in _servers: - result = {'server': server} - result['username'] = config.get(server, 'username') - - # optional params - for key, default in (('repository', self.DEFAULT_REPOSITORY), - ('realm', self.DEFAULT_REALM), - ('password', None)): - if config.has_option(server, key): - result[key] = config.get(server, key) - else: - result[key] = default - - # work around people having "repository" for the "pypi" - # section of their config set to the HTTP (rather than - # HTTPS) URL - if (server == 'pypi' and - repository in (self.DEFAULT_REPOSITORY, 'pypi')): - result['repository'] = self.DEFAULT_REPOSITORY - elif (result['server'] != repository and - result['repository'] != repository): - result = {} - elif 'server-login' in sections: - # old format - server = 'server-login' - if config.has_option(server, 'repository'): - repository = config.get(server, 'repository') - else: - repository = self.DEFAULT_REPOSITORY - result = { - 'username': config.get(server, 'username'), - 'password': config.get(server, 'password'), - 'repository': repository, - 'server': server, - 'realm': self.DEFAULT_REALM - } - return result - - def update(self, username, password): - # import pdb; pdb.set_trace() - config = configparser.RawConfigParser() - fn = self.filename - config.read(fn) - if not config.has_section('pypi'): - config.add_section('pypi') - config.set('pypi', 'username', username) - config.set('pypi', 'password', password) - with open(fn, 'w') as f: - 
config.write(f) - -def _load_pypirc(index): - """ - Read the PyPI access configuration as supported by distutils. - """ - return PyPIRCFile(url=index.url).read() - -def _store_pypirc(index): - PyPIRCFile().update(index.username, index.password) - -# -# get_platform()/get_host_platform() copied from Python 3.10.a0 source, with some minor -# tweaks -# - -def get_host_platform(): - """Return a string that identifies the current platform. This is used mainly to - distinguish platform-specific build directories and platform-specific built - distributions. Typically includes the OS name and version and the - architecture (as supplied by 'os.uname()'), although the exact information - included depends on the OS; eg. on Linux, the kernel version isn't - particularly important. - - Examples of returned values: - linux-i586 - linux-alpha (?) - solaris-2.6-sun4u - - Windows will return one of: - win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc) - win32 (all others - specifically, sys.platform is returned) - - For other non-POSIX platforms, currently just returns 'sys.platform'. - - """ - if os.name == 'nt': - if 'amd64' in sys.version.lower(): - return 'win-amd64' - if '(arm)' in sys.version.lower(): - return 'win-arm32' - if '(arm64)' in sys.version.lower(): - return 'win-arm64' - return sys.platform - - # Set for cross builds explicitly - if "_PYTHON_HOST_PLATFORM" in os.environ: - return os.environ["_PYTHON_HOST_PLATFORM"] - - if os.name != 'posix' or not hasattr(os, 'uname'): - # XXX what about the architecture? NT is Intel or Alpha, - # Mac OS is M68k or PPC, etc. - return sys.platform - - # Try to distinguish various flavours of Unix - - (osname, host, release, version, machine) = os.uname() - - # Convert the OS name to lowercase, remove '/' characters, and translate - # spaces (for "Power Macintosh") - osname = osname.lower().replace('/', '') - machine = machine.replace(' ', '_').replace('/', '-') - - if osname[:5] == 'linux': - # At least on Linux/Intel, 'machine' is the processor -- - # i386, etc. - # XXX what about Alpha, SPARC, etc? - return "%s-%s" % (osname, machine) - - elif osname[:5] == 'sunos': - if release[0] >= '5': # SunOS 5 == Solaris 2 - osname = 'solaris' - release = '%d.%s' % (int(release[0]) - 3, release[2:]) - # We can't use 'platform.architecture()[0]' because a - # bootstrap problem. We use a dict to get an error - # if some suspicious happens. 
- bitness = {2147483647:'32bit', 9223372036854775807:'64bit'} - machine += '.%s' % bitness[sys.maxsize] - # fall through to standard osname-release-machine representation - elif osname[:3] == 'aix': - from _aix_support import aix_platform - return aix_platform() - elif osname[:6] == 'cygwin': - osname = 'cygwin' - rel_re = re.compile (r'[\d.]+', re.ASCII) - m = rel_re.match(release) - if m: - release = m.group() - elif osname[:6] == 'darwin': - import _osx_support, distutils.sysconfig - osname, release, machine = _osx_support.get_platform_osx( - distutils.sysconfig.get_config_vars(), - osname, release, machine) - - return '%s-%s-%s' % (osname, release, machine) - - -_TARGET_TO_PLAT = { - 'x86' : 'win32', - 'x64' : 'win-amd64', - 'arm' : 'win-arm32', -} - - -def get_platform(): - if os.name != 'nt': - return get_host_platform() - cross_compilation_target = os.environ.get('VSCMD_ARG_TGT_ARCH') - if cross_compilation_target not in _TARGET_TO_PLAT: - return get_host_platform() - return _TARGET_TO_PLAT[cross_compilation_target] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/live.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/live.py deleted file mode 100644 index 3ebbbc4ccbe47043eb62f8dd770f079745d3b743..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/live.py +++ /dev/null @@ -1,375 +0,0 @@ -import sys -from threading import Event, RLock, Thread -from types import TracebackType -from typing import IO, Any, Callable, List, Optional, TextIO, Type, cast - -from . import get_console -from .console import Console, ConsoleRenderable, RenderableType, RenderHook -from .control import Control -from .file_proxy import FileProxy -from .jupyter import JupyterMixin -from .live_render import LiveRender, VerticalOverflowMethod -from .screen import Screen -from .text import Text - - -class _RefreshThread(Thread): - """A thread that calls refresh() at regular intervals.""" - - def __init__(self, live: "Live", refresh_per_second: float) -> None: - self.live = live - self.refresh_per_second = refresh_per_second - self.done = Event() - super().__init__(daemon=True) - - def stop(self) -> None: - self.done.set() - - def run(self) -> None: - while not self.done.wait(1 / self.refresh_per_second): - with self.live._lock: - if not self.done.is_set(): - self.live.refresh() - - -class Live(JupyterMixin, RenderHook): - """Renders an auto-updating live display of any given renderable. - - Args: - renderable (RenderableType, optional): The renderable to live display. Defaults to displaying nothing. - console (Console, optional): Optional Console instance. Default will an internal Console instance writing to stdout. - screen (bool, optional): Enable alternate screen mode. Defaults to False. - auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()` or `update()` with refresh flag. Defaults to True - refresh_per_second (float, optional): Number of times per second to refresh the live display. Defaults to 4. - transient (bool, optional): Clear the renderable on exit (has no effect when screen=True). Defaults to False. - redirect_stdout (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True. - redirect_stderr (bool, optional): Enable redirection of stderr. Defaults to True. 
- vertical_overflow (VerticalOverflowMethod, optional): How to handle renderable when it is too tall for the console. Defaults to "ellipsis". - get_renderable (Callable[[], RenderableType], optional): Optional callable to get renderable. Defaults to None. - """ - - def __init__( - self, - renderable: Optional[RenderableType] = None, - *, - console: Optional[Console] = None, - screen: bool = False, - auto_refresh: bool = True, - refresh_per_second: float = 4, - transient: bool = False, - redirect_stdout: bool = True, - redirect_stderr: bool = True, - vertical_overflow: VerticalOverflowMethod = "ellipsis", - get_renderable: Optional[Callable[[], RenderableType]] = None, - ) -> None: - assert refresh_per_second > 0, "refresh_per_second must be > 0" - self._renderable = renderable - self.console = console if console is not None else get_console() - self._screen = screen - self._alt_screen = False - - self._redirect_stdout = redirect_stdout - self._redirect_stderr = redirect_stderr - self._restore_stdout: Optional[IO[str]] = None - self._restore_stderr: Optional[IO[str]] = None - - self._lock = RLock() - self.ipy_widget: Optional[Any] = None - self.auto_refresh = auto_refresh - self._started: bool = False - self.transient = True if screen else transient - - self._refresh_thread: Optional[_RefreshThread] = None - self.refresh_per_second = refresh_per_second - - self.vertical_overflow = vertical_overflow - self._get_renderable = get_renderable - self._live_render = LiveRender( - self.get_renderable(), vertical_overflow=vertical_overflow - ) - - @property - def is_started(self) -> bool: - """Check if live display has been started.""" - return self._started - - def get_renderable(self) -> RenderableType: - renderable = ( - self._get_renderable() - if self._get_renderable is not None - else self._renderable - ) - return renderable or "" - - def start(self, refresh: bool = False) -> None: - """Start live rendering display. - - Args: - refresh (bool, optional): Also refresh. Defaults to False. - """ - with self._lock: - if self._started: - return - self.console.set_live(self) - self._started = True - if self._screen: - self._alt_screen = self.console.set_alt_screen(True) - self.console.show_cursor(False) - self._enable_redirect_io() - self.console.push_render_hook(self) - if refresh: - try: - self.refresh() - except Exception: - # If refresh fails, we want to stop the redirection of sys.stderr, - # so the error stacktrace is properly displayed in the terminal. - # (or, if the code that calls Rich captures the exception and wants to display something, - # let this be displayed in the terminal). 
- self.stop() - raise - if self.auto_refresh: - self._refresh_thread = _RefreshThread(self, self.refresh_per_second) - self._refresh_thread.start() - - def stop(self) -> None: - """Stop live rendering display.""" - with self._lock: - if not self._started: - return - self.console.clear_live() - self._started = False - - if self.auto_refresh and self._refresh_thread is not None: - self._refresh_thread.stop() - self._refresh_thread = None - # allow it to fully render on the last even if overflow - self.vertical_overflow = "visible" - with self.console: - try: - if not self._alt_screen and not self.console.is_jupyter: - self.refresh() - finally: - self._disable_redirect_io() - self.console.pop_render_hook() - if not self._alt_screen and self.console.is_terminal: - self.console.line() - self.console.show_cursor(True) - if self._alt_screen: - self.console.set_alt_screen(False) - - if self.transient and not self._alt_screen: - self.console.control(self._live_render.restore_cursor()) - if self.ipy_widget is not None and self.transient: - self.ipy_widget.close() # pragma: no cover - - def __enter__(self) -> "Live": - self.start(refresh=self._renderable is not None) - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - self.stop() - - def _enable_redirect_io(self) -> None: - """Enable redirecting of stdout / stderr.""" - if self.console.is_terminal or self.console.is_jupyter: - if self._redirect_stdout and not isinstance(sys.stdout, FileProxy): - self._restore_stdout = sys.stdout - sys.stdout = cast("TextIO", FileProxy(self.console, sys.stdout)) - if self._redirect_stderr and not isinstance(sys.stderr, FileProxy): - self._restore_stderr = sys.stderr - sys.stderr = cast("TextIO", FileProxy(self.console, sys.stderr)) - - def _disable_redirect_io(self) -> None: - """Disable redirecting of stdout / stderr.""" - if self._restore_stdout: - sys.stdout = cast("TextIO", self._restore_stdout) - self._restore_stdout = None - if self._restore_stderr: - sys.stderr = cast("TextIO", self._restore_stderr) - self._restore_stderr = None - - @property - def renderable(self) -> RenderableType: - """Get the renderable that is being displayed - - Returns: - RenderableType: Displayed renderable. - """ - renderable = self.get_renderable() - return Screen(renderable) if self._alt_screen else renderable - - def update(self, renderable: RenderableType, *, refresh: bool = False) -> None: - """Update the renderable that is being displayed - - Args: - renderable (RenderableType): New renderable to use. - refresh (bool, optional): Refresh the display. Defaults to False. 
- """ - if isinstance(renderable, str): - renderable = self.console.render_str(renderable) - with self._lock: - self._renderable = renderable - if refresh: - self.refresh() - - def refresh(self) -> None: - """Update the display of the Live Render.""" - with self._lock: - self._live_render.set_renderable(self.renderable) - if self.console.is_jupyter: # pragma: no cover - try: - from IPython.display import display - from ipywidgets import Output - except ImportError: - import warnings - - warnings.warn('install "ipywidgets" for Jupyter support') - else: - if self.ipy_widget is None: - self.ipy_widget = Output() - display(self.ipy_widget) - - with self.ipy_widget: - self.ipy_widget.clear_output(wait=True) - self.console.print(self._live_render.renderable) - elif self.console.is_terminal and not self.console.is_dumb_terminal: - with self.console: - self.console.print(Control()) - elif ( - not self._started and not self.transient - ): # if it is finished allow files or dumb-terminals to see final result - with self.console: - self.console.print(Control()) - - def process_renderables( - self, renderables: List[ConsoleRenderable] - ) -> List[ConsoleRenderable]: - """Process renderables to restore cursor and display progress.""" - self._live_render.vertical_overflow = self.vertical_overflow - if self.console.is_interactive: - # lock needs acquiring as user can modify live_render renderable at any time unlike in Progress. - with self._lock: - reset = ( - Control.home() - if self._alt_screen - else self._live_render.position_cursor() - ) - renderables = [reset, *renderables, self._live_render] - elif ( - not self._started and not self.transient - ): # if it is finished render the final output for files or dumb_terminals - renderables = [*renderables, self._live_render] - - return renderables - - -if __name__ == "__main__": # pragma: no cover - import random - import time - from itertools import cycle - from typing import Dict, List, Tuple - - from .align import Align - from .console import Console - from .live import Live as Live - from .panel import Panel - from .rule import Rule - from .syntax import Syntax - from .table import Table - - console = Console() - - syntax = Syntax( - '''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]: - """Iterate and generate a tuple with a flag for last value.""" - iter_values = iter(values) - try: - previous_value = next(iter_values) - except StopIteration: - return - for value in iter_values: - yield False, previous_value - previous_value = value - yield True, previous_value''', - "python", - line_numbers=True, - ) - - table = Table("foo", "bar", "baz") - table.add_row("1", "2", "3") - - progress_renderables = [ - "You can make the terminal shorter and taller to see the live table hide" - "Text may be printed while the progress bars are rendering.", - Panel("In fact, [i]any[/i] renderable will work"), - "Such as [magenta]tables[/]...", - table, - "Pretty printed structures...", - {"type": "example", "text": "Pretty printed"}, - "Syntax...", - syntax, - Rule("Give it a try!"), - ] - - examples = cycle(progress_renderables) - - exchanges = [ - "SGD", - "MYR", - "EUR", - "USD", - "AUD", - "JPY", - "CNH", - "HKD", - "CAD", - "INR", - "DKK", - "GBP", - "RUB", - "NZD", - "MXN", - "IDR", - "TWD", - "THB", - "VND", - ] - with Live(console=console) as live_table: - exchange_rate_dict: Dict[Tuple[str, str], float] = {} - - for index in range(100): - select_exchange = exchanges[index % len(exchanges)] - - for exchange in exchanges: - if exchange == 
select_exchange: - continue - time.sleep(0.4) - if random.randint(0, 10) < 1: - console.log(next(examples)) - exchange_rate_dict[(select_exchange, exchange)] = 200 / ( - (random.random() * 320) + 1 - ) - if len(exchange_rate_dict) > len(exchanges) - 1: - exchange_rate_dict.pop(list(exchange_rate_dict.keys())[0]) - table = Table(title="Exchange Rates") - - table.add_column("Source Currency") - table.add_column("Destination Currency") - table.add_column("Exchange Rate") - - for ((source, dest), exchange_rate) in exchange_rate_dict.items(): - table.add_row( - source, - dest, - Text( - f"{exchange_rate:.4f}", - style="red" if exchange_rate < 1.0 else "green", - ), - ) - - live_table.update(Align.center(table)) diff --git a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.py b/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 7bc5a1e331c2bbb1893ac748cfd0f144ff0651b4..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,184 +0,0 @@ -import os - -import torch -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'upfirdn2d.cpp'), - os.path.join(module_path, 'upfirdn2d_kernel.cu'), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = 
kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = UpFirDn2d.apply( - input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - - return out[:, ::down_y, ::down_x, :] diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/c10.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/c10.py deleted file mode 100644 index 25ee23009547913733dc528fb8a39ca995fd9e31..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/c10.py +++ /dev/null @@ -1,534 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import math -import torch -import torch.nn.functional as F - -from detectron2.layers import cat -from detectron2.layers.roi_align_rotated import ROIAlignRotated -from detectron2.modeling import poolers -from detectron2.modeling.proposal_generator import rpn -from detectron2.modeling.roi_heads.mask_head import mask_rcnn_inference -from detectron2.structures import Boxes, ImageList, Instances, Keypoints - -from .shared import alias, to_device - - -""" -This file contains caffe2-compatible implementation of several detectron2 components. -""" - - -class Caffe2Boxes(Boxes): - """ - Representing a list of detectron2.structures.Boxes from minibatch, each box - is represented by a 5d vector (batch index + 4 coordinates), or a 6d vector - (batch index + 5 coordinates) for RotatedBoxes. - """ - - def __init__(self, tensor): - assert isinstance(tensor, torch.Tensor) - assert tensor.dim() == 2 and tensor.size(-1) in [4, 5, 6], tensor.size() - # TODO: make tensor immutable when dim is Nx5 for Boxes, - # and Nx6 for RotatedBoxes? 
- self.tensor = tensor - - -# TODO clean up this class, maybe just extend Instances -class InstancesList(object): - """ - Tensor representation of a list of Instances object for a batch of images. - - When dealing with a batch of images with Caffe2 ops, a list of bboxes - (instances) are usually represented by single Tensor with size - (sigma(Ni), 5) or (sigma(Ni), 4) plus a batch split Tensor. This class is - for providing common functions to convert between these two representations. - """ - - def __init__(self, im_info, indices, extra_fields=None): - # [N, 3] -> (H, W, Scale) - self.im_info = im_info - # [N,] -> indice of batch to which the instance belongs - self.indices = indices - # [N, ...] - self.batch_extra_fields = extra_fields or {} - - self.image_size = self.im_info - - def get_fields(self): - """like `get_fields` in the Instances object, - but return each field in tensor representations""" - ret = {} - for k, v in self.batch_extra_fields.items(): - # if isinstance(v, torch.Tensor): - # tensor_rep = v - # elif isinstance(v, (Boxes, Keypoints)): - # tensor_rep = v.tensor - # else: - # raise ValueError("Can't find tensor representation for: {}".format()) - ret[k] = v - return ret - - def has(self, name): - return name in self.batch_extra_fields - - def set(self, name, value): - data_len = len(value) - if len(self.batch_extra_fields): - assert ( - len(self) == data_len - ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self)) - self.batch_extra_fields[name] = value - - def __setattr__(self, name, val): - if name in ["im_info", "indices", "batch_extra_fields", "image_size"]: - super().__setattr__(name, val) - else: - self.set(name, val) - - def __getattr__(self, name): - if name not in self.batch_extra_fields: - raise AttributeError("Cannot find field '{}' in the given Instances!".format(name)) - return self.batch_extra_fields[name] - - def __len__(self): - return len(self.indices) - - def flatten(self): - ret = [] - for _, v in self.batch_extra_fields.items(): - if isinstance(v, (Boxes, Keypoints)): - ret.append(v.tensor) - else: - ret.append(v) - return ret - - @staticmethod - def to_d2_instances_list(instances_list): - """ - Convert InstancesList to List[Instances]. The input `instances_list` can - also be a List[Instances], in this case this method is a non-op. 
- """ - if not isinstance(instances_list, InstancesList): - assert all(isinstance(x, Instances) for x in instances_list) - return instances_list - - ret = [] - for i, info in enumerate(instances_list.im_info): - instances = Instances(torch.Size([int(info[0].item()), int(info[1].item())])) - - ids = instances_list.indices == i - for k, v in instances_list.batch_extra_fields.items(): - if isinstance(v, torch.Tensor): - instances.set(k, v[ids]) - continue - elif isinstance(v, Boxes): - instances.set(k, v[ids, -4:]) - continue - - target_type, tensor_source = v - assert isinstance(tensor_source, torch.Tensor) - assert tensor_source.shape[0] == instances_list.indices.shape[0] - tensor_source = tensor_source[ids] - - if issubclass(target_type, Boxes): - instances.set(k, Boxes(tensor_source[:, -4:])) - elif issubclass(target_type, Keypoints): - instances.set(k, Keypoints(tensor_source)) - elif issubclass(target_type, torch.Tensor): - instances.set(k, tensor_source) - else: - raise ValueError("Can't handle targe type: {}".format(target_type)) - - ret.append(instances) - return ret - - -class Caffe2Compatible(object): - """ - A model can inherit this class to indicate that it can be traced and deployed with caffe2. - """ - - def _get_tensor_mode(self): - return self._tensor_mode - - def _set_tensor_mode(self, v): - self._tensor_mode = v - - tensor_mode = property(_get_tensor_mode, _set_tensor_mode) - """ - If true, the model expects C2-style tensor only inputs/outputs format. - """ - - -class Caffe2RPN(Caffe2Compatible, rpn.RPN): - def _generate_proposals( - self, images, objectness_logits_pred, anchor_deltas_pred, gt_instances=None - ): - assert isinstance(images, ImageList) - if self.tensor_mode: - im_info = images.image_sizes - else: - im_info = torch.tensor([[im_sz[0], im_sz[1], 1.0] for im_sz in images.image_sizes]).to( - images.tensor.device - ) - assert isinstance(im_info, torch.Tensor) - - rpn_rois_list = [] - rpn_roi_probs_list = [] - for scores, bbox_deltas, cell_anchors_tensor, feat_stride in zip( - objectness_logits_pred, - anchor_deltas_pred, - iter(self.anchor_generator.cell_anchors), - self.anchor_generator.strides, - ): - scores = scores.detach() - bbox_deltas = bbox_deltas.detach() - - rpn_rois, rpn_roi_probs = torch.ops._caffe2.GenerateProposals( - scores, - bbox_deltas, - im_info, - cell_anchors_tensor, - spatial_scale=1.0 / feat_stride, - pre_nms_topN=self.pre_nms_topk[self.training], - post_nms_topN=self.post_nms_topk[self.training], - nms_thresh=self.nms_thresh, - min_size=self.min_box_size, - # correct_transform_coords=True, # deprecated argument - angle_bound_on=True, # Default - angle_bound_lo=-180, - angle_bound_hi=180, - clip_angle_thresh=1.0, # Default - legacy_plus_one=False, - ) - rpn_rois_list.append(rpn_rois) - rpn_roi_probs_list.append(rpn_roi_probs) - - # For FPN in D2, in RPN all proposals from different levels are concated - # together, ranked and picked by top post_nms_topk. Then in ROIPooler - # it calculates level_assignments and calls the RoIAlign from - # the corresponding level. - - if len(objectness_logits_pred) == 1: - rpn_rois = rpn_rois_list[0] - rpn_roi_probs = rpn_roi_probs_list[0] - else: - assert len(rpn_rois_list) == len(rpn_roi_probs_list) - rpn_post_nms_topN = self.post_nms_topk[self.training] - - device = rpn_rois_list[0].device - input_list = [to_device(x, "cpu") for x in (rpn_rois_list + rpn_roi_probs_list)] - - # TODO remove this after confirming rpn_max_level/rpn_min_level - # is not needed in CollectRpnProposals. 
- feature_strides = list(self.anchor_generator.strides) - rpn_min_level = int(math.log2(feature_strides[0])) - rpn_max_level = int(math.log2(feature_strides[-1])) - assert (rpn_max_level - rpn_min_level + 1) == len( - rpn_rois_list - ), "CollectRpnProposals requires continuous levels" - - rpn_rois = torch.ops._caffe2.CollectRpnProposals( - input_list, - # NOTE: in current implementation, rpn_max_level and rpn_min_level - # are not needed, only the subtraction of two matters and it - # can be infer from the number of inputs. Keep them now for - # consistency. - rpn_max_level=2 + len(rpn_rois_list) - 1, - rpn_min_level=2, - rpn_post_nms_topN=rpn_post_nms_topN, - ) - rpn_rois = to_device(rpn_rois, device) - rpn_roi_probs = [] - - proposals = self.c2_postprocess(im_info, rpn_rois, rpn_roi_probs, self.tensor_mode) - return proposals, {} - - def forward(self, images, features, gt_instances=None): - assert not self.training - features = [features[f] for f in self.in_features] - objectness_logits_pred, anchor_deltas_pred = self.rpn_head(features) - return self._generate_proposals( - images, - objectness_logits_pred, - anchor_deltas_pred, - gt_instances, - ) - - @staticmethod - def c2_postprocess(im_info, rpn_rois, rpn_roi_probs, tensor_mode): - proposals = InstancesList( - im_info=im_info, - indices=rpn_rois[:, 0], - extra_fields={ - "proposal_boxes": Caffe2Boxes(rpn_rois), - "objectness_logits": (torch.Tensor, rpn_roi_probs), - }, - ) - if not tensor_mode: - proposals = InstancesList.to_d2_instances_list(proposals) - else: - proposals = [proposals] - return proposals - - -class Caffe2ROIPooler(Caffe2Compatible, poolers.ROIPooler): - @staticmethod - def c2_preprocess(box_lists): - assert all(isinstance(x, Boxes) for x in box_lists) - if all(isinstance(x, Caffe2Boxes) for x in box_lists): - # input is pure-tensor based - assert len(box_lists) == 1 - pooler_fmt_boxes = box_lists[0].tensor - else: - pooler_fmt_boxes = poolers.convert_boxes_to_pooler_format(box_lists) - return pooler_fmt_boxes - - def forward(self, x, box_lists): - assert not self.training - - pooler_fmt_boxes = self.c2_preprocess(box_lists) - num_level_assignments = len(self.level_poolers) - - if num_level_assignments == 1: - if isinstance(self.level_poolers[0], ROIAlignRotated): - c2_roi_align = torch.ops._caffe2.RoIAlignRotated - aligned = True - else: - c2_roi_align = torch.ops._caffe2.RoIAlign - aligned = self.level_poolers[0].aligned - - x0 = x[0] - if x0.is_quantized: - x0 = x0.dequantize() - - out = c2_roi_align( - x0, - pooler_fmt_boxes, - order="NCHW", - spatial_scale=float(self.level_poolers[0].spatial_scale), - pooled_h=int(self.output_size[0]), - pooled_w=int(self.output_size[1]), - sampling_ratio=int(self.level_poolers[0].sampling_ratio), - aligned=aligned, - ) - return out - - device = pooler_fmt_boxes.device - assert ( - self.max_level - self.min_level + 1 == 4 - ), "Currently DistributeFpnProposals only support 4 levels" - fpn_outputs = torch.ops._caffe2.DistributeFpnProposals( - to_device(pooler_fmt_boxes, "cpu"), - roi_canonical_scale=self.canonical_box_size, - roi_canonical_level=self.canonical_level, - roi_max_level=self.max_level, - roi_min_level=self.min_level, - legacy_plus_one=False, - ) - fpn_outputs = [to_device(x, device) for x in fpn_outputs] - - rois_fpn_list = fpn_outputs[:-1] - rois_idx_restore_int32 = fpn_outputs[-1] - - roi_feat_fpn_list = [] - for roi_fpn, x_level, pooler in zip(rois_fpn_list, x, self.level_poolers): - if isinstance(pooler, ROIAlignRotated): - c2_roi_align = 
torch.ops._caffe2.RoIAlignRotated - aligned = True - else: - c2_roi_align = torch.ops._caffe2.RoIAlign - aligned = bool(pooler.aligned) - - if x_level.is_quantized: - x_level = x_level.dequantize() - - roi_feat_fpn = c2_roi_align( - x_level, - roi_fpn, - order="NCHW", - spatial_scale=float(pooler.spatial_scale), - pooled_h=int(self.output_size[0]), - pooled_w=int(self.output_size[1]), - sampling_ratio=int(pooler.sampling_ratio), - aligned=aligned, - ) - roi_feat_fpn_list.append(roi_feat_fpn) - - roi_feat_shuffled = cat(roi_feat_fpn_list, dim=0) - assert roi_feat_shuffled.numel() > 0 and rois_idx_restore_int32.numel() > 0, ( - "Caffe2 export requires tracing with a model checkpoint + input that can produce valid" - " detections. But no detections were obtained with the given checkpoint and input!" - ) - roi_feat = torch.ops._caffe2.BatchPermutation(roi_feat_shuffled, rois_idx_restore_int32) - return roi_feat - - -class Caffe2FastRCNNOutputsInference: - def __init__(self, tensor_mode): - self.tensor_mode = tensor_mode # whether the output is caffe2 tensor mode - - def __call__(self, box_predictor, predictions, proposals): - """equivalent to FastRCNNOutputLayers.inference""" - num_classes = box_predictor.num_classes - score_thresh = box_predictor.test_score_thresh - nms_thresh = box_predictor.test_nms_thresh - topk_per_image = box_predictor.test_topk_per_image - is_rotated = len(box_predictor.box2box_transform.weights) == 5 - - if is_rotated: - box_dim = 5 - assert box_predictor.box2box_transform.weights[4] == 1, ( - "The weights for Rotated BBoxTransform in C2 have only 4 dimensions," - + " thus enforcing the angle weight to be 1 for now" - ) - box2box_transform_weights = box_predictor.box2box_transform.weights[:4] - else: - box_dim = 4 - box2box_transform_weights = box_predictor.box2box_transform.weights - - class_logits, box_regression = predictions - if num_classes + 1 == class_logits.shape[1]: - class_prob = F.softmax(class_logits, -1) - else: - assert num_classes == class_logits.shape[1] - class_prob = F.sigmoid(class_logits) - # BoxWithNMSLimit will infer num_classes from the shape of the class_prob - # So append a zero column as placeholder for the background class - class_prob = torch.cat((class_prob, torch.zeros(class_prob.shape[0], 1)), dim=1) - - assert box_regression.shape[1] % box_dim == 0 - cls_agnostic_bbox_reg = box_regression.shape[1] // box_dim == 1 - - input_tensor_mode = proposals[0].proposal_boxes.tensor.shape[1] == box_dim + 1 - - rois = type(proposals[0].proposal_boxes).cat([p.proposal_boxes for p in proposals]) - device, dtype = rois.tensor.device, rois.tensor.dtype - if input_tensor_mode: - im_info = proposals[0].image_size - rois = rois.tensor - else: - im_info = torch.tensor( - [[sz[0], sz[1], 1.0] for sz in [x.image_size for x in proposals]] - ) - batch_ids = cat( - [ - torch.full((b, 1), i, dtype=dtype, device=device) - for i, b in enumerate(len(p) for p in proposals) - ], - dim=0, - ) - rois = torch.cat([batch_ids, rois.tensor], dim=1) - - roi_pred_bbox, roi_batch_splits = torch.ops._caffe2.BBoxTransform( - to_device(rois, "cpu"), - to_device(box_regression, "cpu"), - to_device(im_info, "cpu"), - weights=box2box_transform_weights, - apply_scale=True, - rotated=is_rotated, - angle_bound_on=True, - angle_bound_lo=-180, - angle_bound_hi=180, - clip_angle_thresh=1.0, - legacy_plus_one=False, - ) - roi_pred_bbox = to_device(roi_pred_bbox, device) - roi_batch_splits = to_device(roi_batch_splits, device) - - nms_outputs = torch.ops._caffe2.BoxWithNMSLimit( - 
to_device(class_prob, "cpu"), - to_device(roi_pred_bbox, "cpu"), - to_device(roi_batch_splits, "cpu"), - score_thresh=float(score_thresh), - nms=float(nms_thresh), - detections_per_im=int(topk_per_image), - soft_nms_enabled=False, - soft_nms_method="linear", - soft_nms_sigma=0.5, - soft_nms_min_score_thres=0.001, - rotated=is_rotated, - cls_agnostic_bbox_reg=cls_agnostic_bbox_reg, - input_boxes_include_bg_cls=False, - output_classes_include_bg_cls=False, - legacy_plus_one=False, - ) - roi_score_nms = to_device(nms_outputs[0], device) - roi_bbox_nms = to_device(nms_outputs[1], device) - roi_class_nms = to_device(nms_outputs[2], device) - roi_batch_splits_nms = to_device(nms_outputs[3], device) - roi_keeps_nms = to_device(nms_outputs[4], device) - roi_keeps_size_nms = to_device(nms_outputs[5], device) - if not self.tensor_mode: - roi_class_nms = roi_class_nms.to(torch.int64) - - roi_batch_ids = cat( - [ - torch.full((b, 1), i, dtype=dtype, device=device) - for i, b in enumerate(int(x.item()) for x in roi_batch_splits_nms) - ], - dim=0, - ) - - roi_class_nms = alias(roi_class_nms, "class_nms") - roi_score_nms = alias(roi_score_nms, "score_nms") - roi_bbox_nms = alias(roi_bbox_nms, "bbox_nms") - roi_batch_splits_nms = alias(roi_batch_splits_nms, "batch_splits_nms") - roi_keeps_nms = alias(roi_keeps_nms, "keeps_nms") - roi_keeps_size_nms = alias(roi_keeps_size_nms, "keeps_size_nms") - - results = InstancesList( - im_info=im_info, - indices=roi_batch_ids[:, 0], - extra_fields={ - "pred_boxes": Caffe2Boxes(roi_bbox_nms), - "scores": roi_score_nms, - "pred_classes": roi_class_nms, - }, - ) - - if not self.tensor_mode: - results = InstancesList.to_d2_instances_list(results) - batch_splits = roi_batch_splits_nms.int().tolist() - kept_indices = list(roi_keeps_nms.to(torch.int64).split(batch_splits)) - else: - results = [results] - kept_indices = [roi_keeps_nms] - - return results, kept_indices - - -class Caffe2MaskRCNNInference: - def __call__(self, pred_mask_logits, pred_instances): - """equivalent to mask_head.mask_rcnn_inference""" - if all(isinstance(x, InstancesList) for x in pred_instances): - assert len(pred_instances) == 1 - mask_probs_pred = pred_mask_logits.sigmoid() - mask_probs_pred = alias(mask_probs_pred, "mask_fcn_probs") - pred_instances[0].pred_masks = mask_probs_pred - else: - mask_rcnn_inference(pred_mask_logits, pred_instances) - - -class Caffe2KeypointRCNNInference: - def __init__(self, use_heatmap_max_keypoint): - self.use_heatmap_max_keypoint = use_heatmap_max_keypoint - - def __call__(self, pred_keypoint_logits, pred_instances): - # just return the keypoint heatmap for now, - # there will be option to call HeatmapMaxKeypointOp - output = alias(pred_keypoint_logits, "kps_score") - if all(isinstance(x, InstancesList) for x in pred_instances): - assert len(pred_instances) == 1 - if self.use_heatmap_max_keypoint: - device = output.device - output = torch.ops._caffe2.HeatmapMaxKeypoint( - to_device(output, "cpu"), - pred_instances[0].pred_boxes.tensor, - should_output_softmax=True, # worth make it configerable? 
- ) - output = to_device(output, device) - output = alias(output, "keypoints_out") - pred_instances[0].pred_keypoints = output - return pred_keypoint_logits diff --git a/spaces/BAAI/AltDiffusion/README.md b/spaces/BAAI/AltDiffusion/README.md deleted file mode 100644 index 9d335cabb273fee5c9d0cf59e538fd93bedc15a6..0000000000000000000000000000000000000000 --- a/spaces/BAAI/AltDiffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AltDiffusion -emoji: ❤️ -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/20 Minutos Hasta El Amanecer Descarga Gratuita.md b/spaces/Benson/text-generation/Examples/20 Minutos Hasta El Amanecer Descarga Gratuita.md deleted file mode 100644 index 2f1caf7c693cbcf3d620eb7ecce7d63cceb58e58..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/20 Minutos Hasta El Amanecer Descarga Gratuita.md +++ /dev/null @@ -1,61 +0,0 @@ -
    -

    20 Minutes Till Dawn: A Roguelite Survival Game Review
    

    -

    If you are looking for a fast-paced, action-packed, and challenging game that will test your skills and reflexes, you may want to check out 20 Minutes Till Dawn. This is a roguelite survival game in which you fight endless hordes of Lovecraftian monsters and try to survive the night. In this article, we review the game's features, gameplay, graphics, sound, pros, cons, and more.
    

    -

    Introduction
    

    -

    20 Minutes Till Dawn is a roguelike shoot 'em up video game developed and published by flanne. The game was released in early access on Steam on June 8, 2022, and was ported to Android and iOS by Erabit Studios on September 9, 2022. It left early access on Steam with version 1.0 on June 8, 2023.
    

    -

    20 minutes till dawn free download
    


    Download --->>> https://bltlly.com/2v6MGV



    -

    El juego pertenece al género de la supervivencia roguelite, lo que significa que cuenta con permadeath, aleatorización y progresión a través de carreras. El objetivo del juego es sobrevivir durante 20 minutos hasta el amanecer, mientras se enfrenta a un ataque de monstruos que se vuelven más fuertes y más numerosos a medida que pasa el tiempo. El juego está inspirado en Vampire Survivors, pero con opciones de combate y personalización más activas.

    -

    El juego está disponible en Steam por $4.99, así como en Google Play, App Store y TapTap gratis. El juego ha recibido críticas muy positivas de jugadores y críticos por igual, con más de 20.000 comentarios en Steam y más de 6 millones de descargas en plataformas móviles. El juego también ha sido presentado por IGN, TheGamer, Level Winner y otros medios de comunicación.

    -

    Juego

    -

    El modo de juego de 20 Minutes Till Dawn es simple pero desafiante. Usted controla a un personaje que puede moverse con las teclas WASD o un joystick virtual, apuntar con el ratón o la pantalla táctil, y disparar con clic izquierdo o toque. También puedes usar el botón derecho o doble toque para activar tu habilidad especial, que varía dependiendo de tu personaje.

    - -

    A medida que matas monstruos, ganas puntos de experiencia que te permiten subir de nivel. Cada vez que subes de nivel, puedes elegir una de las cuatro mejoras generadas al azar que mejoran tus estadísticas o habilidades. Estas mejoras pueden ir desde aumentar tu daño o salud, hasta agregar efectos como fuego, veneno o aturdimiento a tus ataques, hasta desbloquear nuevas habilidades como guion, escudo o invocación. Las actualizaciones son permanentes para la ejecución actual, pero se pierden cuando mueres o reinicias.

    -

    Para sobrevivir a la noche, tienes que seguir moviéndote y disparando, evitando los ataques de los enemigos y los peligros ambientales. Los enemigos vienen en diferentes formas y tamaños, cada uno con su propio comportamiento y patrón de ataque. Algunos de ellos son rápidos y ágiles, algunos son lentos y sucios, algunos son a distancia y explosivos, y algunos son sigilosos y mortales. También encontrarás jefes cada pocos minutos, que son mucho más fuertes y más duros que los enemigos normales. Los jefes tienen habilidades y debilidades únicas que tienes que explotar para derrotarlos.

    -

    El juego tiene cuatro modos de juego diferentes: Normal, Hardcore, Endless y Custom. El modo normal es el modo predeterminado, donde tienes que sobrevivir durante 20 minutos con tres vidas. El modo Hardcore es similar al modo Normal, pero solo tienes una vida y los enemigos son más agresivos. El modo sin fin es donde puedes jugar todo el tiempo que quieras, pero los enemigos se vuelven más difíciles y más frecuentes a medida que pasa el tiempo. El modo personalizado es donde puedes crear tus propias reglas y ajustes para el juego, como cambiar el límite de tiempo, la tasa de aparición de enemigos, el nivel de dificultad y más.

    -

    Gráficos y sonido

    - -

    El sonido de 20 Minutes Till Dawn es envolvente y cautivador, con una banda sonora que coincide con el estado de ánimo y la intensidad del juego. El juego tiene una música estilo synthwave que es pegadiza y energética, con diferentes pistas para cada entorno y jefe. El juego también tiene efectos de sonido que son realistas y satisfactorios, como el sonido de disparos, explosiones, gritos, pasos y más. El juego no tiene voz ni diálogo, pero tiene mensajes de texto que aparecen en la pantalla para darte pistas o advertencias.

    -

    -

    El juego funciona bien en la mayoría de los dispositivos y plataformas, con un juego suave y un retraso mínimo o problemas técnicos. El juego tiene bajos requisitos del sistema para los usuarios de PC, así como opciones para ajustar la calidad de los gráficos y la resolución para los usuarios móviles. El juego también es compatible con el ahorro de la nube , soporte de controlador, tablas de clasificación , logros , y cooperativo multijugador .

    -

    Pros y contras

    -

    20 Minutes Till Dawn es un juego divertido y adictivo que te mantendrá entretenido durante horas. Sin embargo, como cualquier otro juego, también tiene sus pros y sus contras. Aquí están algunos de ellos:

    - -ProsContras -- Juego rápido y desafiante que requiere habilidad y estrategia- Permadeath puede ser frustrante y desalentador para algunos jugadores -- Variedad de personajes, armas, mejoras, enemigos, jefes, entornos y modos de juego que ofrecen valor de reproducción- La aleatorización puede ser injusta o desequilibrada a veces -- Gráficos de estilo retro que son coloridos y atmosféricos- Los gráficos pixelados pueden no atraer a todos -- Música estilo synthwave que es pegadiza y energética- La música puede ser repetitiva o molesta después de un rato -- Bajos requisitos del sistema y compatibilidad multiplataforma- Algunos errores o fallos ocasionales pueden ocurrir - -

    Conclusión

    - -

    Si estás interesado en jugar 20 Minutes Till Dawn, puedes encontrar más información o descargar el juego desde los siguientes enlaces:

    -
      -
    • Vapor: [20 minutos hasta el amanecer en el vapor]
    • -
    • Google Play: [20 minutos hasta el amanecer - Aplicaciones en Google Play]
    • -
    • App Store: [ 20 minutos hasta el amanecer en la App Store]
    • -
    • TapTap: [20 minutos hasta el amanecer - TapTap]
    • -
    -

    También puede ver algunos videos de juego o leer algunos comentarios de las siguientes fuentes:

    -
      -
    • IGN: [20 minutos hasta el amanecer Revisión - IGN]
    • -
    • TheGamer: [20 minutos hasta el amanecer Revisión: Una Roguelite que te mantiene en sus dedos de los pies]
    • -
    • Nivel ganador: [20 minutos hasta el amanecer Guía para principiantes: Consejos, trucos y estrategias para sobrevivir la noche]
    • -
    -

    Preguntas frecuentes

    -

    Aquí están algunas de las preguntas más frecuentes sobre 20 minutos hasta el amanecer:

    -
      -
    1. ¿Cómo puedo desbloquear más personajes y armas?
    2. -

      Puedes desbloquear más personajes y armas gastando gemas, que se ganan matando monstruos o completando logros. También puedes encontrar algunas armas como botín gotas de enemigos o cofres.

      -
    3. ¿Cómo puedo guardar mi progreso?
    4. -

      Puede guardar su progreso utilizando la función de almacenamiento en la nube, que está disponible en todas las plataformas. También puede utilizar la función de ahorro local, que está disponible en PC y plataformas móviles. Sin embargo, tenga en cuenta que su progreso solo se guarda entre ejecuciones, no durante las ejecuciones. Si muere o reinicia, perderá sus actualizaciones y elementos actuales.

      -
    5. ¿Cómo puedo jugar con mis amigos?
    6. -

      Puedes jugar con tus amigos usando la función multijugador co-op, que está disponible en todas las plataformas. Puedes unirte o alojar un juego con hasta cuatro jugadores en línea o localmente. También puedes chatear con tus amigos usando la función de chat de voz o texto.

      -
    7. ¿Cómo cambio la configuración del juego?
    8. - -
    9. ¿Cómo puedo contactar a los desarrolladores o reportar un error?
    10. -

      Puede ponerse en contacto con los desarrolladores o informar de un error mediante la función de retroalimentación, que está disponible en todas las plataformas. También puede visitar el sitio web oficial, el servidor de discordia, la página de Twitter o la página de Facebook del juego.

      -
    -

    Espero que hayas disfrutado de este artículo y te haya resultado útil. Si tienes alguna pregunta o comentario, puedes dejarlos abajo. Gracias por leer y tener un gran día!

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Billyosoro/ESRGAN/realesrgan/__init__.py b/spaces/Billyosoro/ESRGAN/realesrgan/__init__.py deleted file mode 100644 index bfea78f284116dee22510d4aa91f9e44afb7d472..0000000000000000000000000000000000000000 --- a/spaces/Billyosoro/ESRGAN/realesrgan/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# flake8: noqa -from .archs import * -from .data import * -from .models import * -from .utils import * -#from .version import * diff --git a/spaces/Bradjan310/ehartford-Wizard-Vicuna-30B-Uncensored/app.py b/spaces/Bradjan310/ehartford-Wizard-Vicuna-30B-Uncensored/app.py deleted file mode 100644 index 4cdd13923578027e405184827b4f353131ce7341..0000000000000000000000000000000000000000 --- a/spaces/Bradjan310/ehartford-Wizard-Vicuna-30B-Uncensored/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/ehartford/Wizard-Vicuna-30B-Uncensored").launch() \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE/questions-help-support.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE/questions-help-support.md deleted file mode 100644 index 4166219b7de584d26b3795e07162df0eff2733e3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE/questions-help-support.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -name: "❓How to do something?" -about: How to do X with detectron2? How detectron2 does X? - ---- - -## ❓ How to use Detectron2 - -Questions like: - -1. How to do X with detectron2? -2. How detectron2 does X? - -NOTE: - -1. If you met any unexpected issue when using detectron2 and wish to know why, - please use the "Unexpected Problems / Bugs" issue template. - -2. We do not answer general machine learning / computer vision questions that are not specific to - detectron2, such as how a model works, how to improve your training/make it converge, or what algorithm/methods can be - used to achieve X. diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/temporary_buffer.h b/spaces/CVPR/LIVE/thrust/thrust/detail/temporary_buffer.h deleted file mode 100644 index 4dca3be3b9b0525aa01bcaa339a13782ac38272f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/temporary_buffer.h +++ /dev/null @@ -1,76 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace detail -{ - - -template -__host__ __device__ - thrust::pair, typename thrust::pointer::difference_type> - down_cast_pair(Pair p) -{ - // XXX should use a hypothetical thrust::static_pointer_cast here - thrust::pointer ptr = thrust::pointer(static_cast(thrust::raw_pointer_cast(p.first))); - - typedef thrust::pair, typename thrust::pointer::difference_type> result_type; - return result_type(ptr, p.second); -} // end down_cast_pair() - - -} // end detail - - -__thrust_exec_check_disable__ -template -__host__ __device__ - thrust::pair, typename thrust::pointer::difference_type> - get_temporary_buffer(const thrust::detail::execution_policy_base &exec, typename thrust::pointer::difference_type n) -{ - using thrust::detail::get_temporary_buffer; // execute_with_allocator - using thrust::system::detail::generic::get_temporary_buffer; - - return thrust::detail::down_cast_pair(get_temporary_buffer(thrust::detail::derived_cast(thrust::detail::strip_const(exec)), n)); -} // end get_temporary_buffer() - - -__thrust_exec_check_disable__ -template -__host__ __device__ - void return_temporary_buffer(const thrust::detail::execution_policy_base &exec, Pointer p, std::ptrdiff_t n) -{ - using thrust::detail::return_temporary_buffer; // execute_with_allocator - using thrust::system::detail::generic::return_temporary_buffer; - - return return_temporary_buffer(thrust::detail::derived_cast(thrust::detail::strip_const(exec)), p, n); -} // end return_temporary_buffer() - - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/generate.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/generate.h deleted file mode 100644 index edc2cc5eb3582a11ab7afa0cd78030b2b26688f2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/generate.h +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - -template -__host__ __device__ - void generate(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - Generator gen); - -template -__host__ __device__ - OutputIterator generate_n(thrust::execution_policy &exec, - OutputIterator first, - Size n, - Generator gen); - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scatter.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scatter.h deleted file mode 100644 index 4a65a4cc01ea23211330192f69999532f6d60575..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scatter.h +++ /dev/null @@ -1,81 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ - void scatter(thrust::execution_policy &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - RandomAccessIterator output); - - -template -__host__ __device__ - void scatter_if(thrust::execution_policy &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - InputIterator3 stencil, - RandomAccessIterator output); - - -template -__host__ __device__ - void scatter_if(thrust::execution_policy &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - InputIterator3 stencil, - RandomAccessIterator output, - Predicate pred); - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/WALT/walt/datasets/pipelines/transforms.py b/spaces/CVPR/WALT/walt/datasets/pipelines/transforms.py deleted file mode 100644 index 02fd63f2bfaac64fbf9495f2fe6ffe83dc9371e1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/walt/datasets/pipelines/transforms.py +++ /dev/null @@ -1,1861 +0,0 @@ -import copy -import inspect - -import mmcv -import numpy as np -from numpy import random - -from mmdet.core import PolygonMasks -from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps -from ..builder import PIPELINES - -try: - from imagecorruptions import corrupt -except ImportError: - corrupt = None - -try: - import albumentations - from albumentations import Compose -except ImportError: - albumentations = None - Compose = None - - -@PIPELINES.register_module() -class Resize(object): - """Resize images & bbox & mask. - - This transform resizes the input image to some scale. Bboxes and masks are - then resized with the same scale factor. If the input dict contains the key - "scale", then the scale in the input dict is used, otherwise the specified - scale in the init method is used. 
If the input dict contains the key - "scale_factor" (if MultiScaleFlipAug does not give img_scale but - scale_factor), the actual scale will be computed by image shape and - scale_factor. - - `img_scale` can either be a tuple (single-scale) or a list of tuple - (multi-scale). There are 3 multiscale modes: - - - ``ratio_range is not None``: randomly sample a ratio from the ratio \ - range and multiply it with the image scale. - - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \ - sample a scale from the multiscale range. - - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \ - sample a scale from multiple scales. - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - multiscale_mode (str): Either "range" or "value". - ratio_range (tuple[float]): (min_ratio, max_ratio) - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - backend (str): Image resize backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - override (bool, optional): Whether to override `scale` and - `scale_factor` so as to call resize twice. Default False. If True, - after the first resizing, the existed `scale` and `scale_factor` - will be ignored so the second resizing can be allowed. - This option is a work-around for multiple times of resize in DETR. - Defaults to False. - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True, - bbox_clip_border=True, - backend='cv2', - override=False): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given a scale and a range of image ratio - assert len(self.img_scale) == 1 - else: - # mode 2: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.backend = backend - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - # TODO: refactor the override option in Resize - self.override = override - self.bbox_clip_border = bbox_clip_border - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. - - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, \ - where ``img_scale`` is the selected image scale and \ - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. - There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where \ - ``img_scale`` is sampled scale and None is just a placeholder \ - to be consistent with :func:`random_select`. 
- """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where \ - ``scale`` is sampled ratio multiplied with ``img_scale`` and \ - None is just a placeholder to be consistent with \ - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into \ - ``results``, which would be used by subsequent pipelines. 
- """ - - if self.ratio_range is not None: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - for key in results.get('img_fields', ['img']): - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results[key], - results['scale'], - return_scale=True, - backend=self.backend) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results[key].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results[key], - results['scale'], - return_scale=True, - backend=self.backend) - results[key] = img - - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img_shape'] = img.shape - # in case that there is no padding - results['pad_shape'] = img.shape - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_bboxes(self, results): - """Resize bounding boxes with ``results['scale_factor']``.""" - for key in results.get('bbox_fields', []): - bboxes = results[key] * results['scale_factor'] - if self.bbox_clip_border: - img_shape = results['img_shape'] - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - results[key] = bboxes - - def _resize_bboxes3d(self, results): - """Resize bounding boxes with ``results['scale_factor']``.""" - key = 'gt_bboxes_3d_proj' - bboxes3d_proj = results[key][:,:,:2] - img_shape = results['img_shape'] - for i in range(results[key].shape[1]): - bboxes3d_proj[:,i,:] = bboxes3d_proj[:,i,:] * results['scale_factor'][:2] - if self.bbox_clip_border: - bboxes3d_proj[:, i, 0] = np.clip(bboxes3d_proj[:, i, 0], 0, img_shape[1]) - bboxes3d_proj[:, i, 1] = np.clip(bboxes3d_proj[:, i, 1], 0, img_shape[1]) - results[key] = bboxes3d_proj - - def _resize_masks(self, results): - """Resize masks with ``results['scale']``""" - for key in results.get('mask_fields', []): - if results[key] is None: - continue - if self.keep_ratio: - results[key] = results[key].rescale(results['scale']) - else: - results[key] = results[key].resize(results['img_shape'][:2]) - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - else: - gt_seg = mmcv.imresize( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - results['gt_semantic_seg'] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ - 'keep_ratio' keys are added into result dict. 
- """ - - if 'scale' not in results: - if 'scale_factor' in results: - img_shape = results['img'].shape[:2] - scale_factor = results['scale_factor'] - assert isinstance(scale_factor, float) - results['scale'] = tuple( - [int(x * scale_factor) for x in img_shape][::-1]) - else: - self._random_scale(results) - else: - if not self.override: - assert 'scale_factor' not in results, ( - 'scale and scale_factor cannot be both set.') - else: - results.pop('scale') - if 'scale_factor' in results: - results.pop('scale_factor') - self._random_scale(results) - - self._resize_img(results) - self._resize_bboxes(results) - self._resize_bboxes3d(results) - self._resize_masks(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class RandomFlip(object): - """Flip the image & bbox & mask. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - When random flip is enabled, ``flip_ratio``/``direction`` can either be a - float/string or tuple of float/string. There are 3 flip modes: - - - ``flip_ratio`` is float, ``direction`` is string: the image will be - ``direction``ly flipped with probability of ``flip_ratio`` . - E.g., ``flip_ratio=0.5``, ``direction='horizontal'``, - then image will be horizontally flipped with probability of 0.5. - - ``flip_ratio`` is float, ``direction`` is list of string: the image wil - be ``direction[i]``ly flipped with probability of - ``flip_ratio/len(direction)``. - E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``, - then image will be horizontally flipped with probability of 0.25, - vertically with probability of 0.25. - - ``flip_ratio`` is list of float, ``direction`` is list of string: - given ``len(flip_ratio) == len(direction)``, the image wil - be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``. - E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal', - 'vertical']``, then image will be horizontally flipped with probability - of 0.3, vertically with probability of 0.5 - - Args: - flip_ratio (float | list[float], optional): The flipping probability. - Default: None. - direction(str | list[str], optional): The flipping direction. Options - are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'. - If input is a list, the length must equal ``flip_ratio``. Each - element in ``flip_ratio`` indicates the flip probability of - corresponding direction. 
- """ - - def __init__(self, flip_ratio=None, direction='horizontal'): - if isinstance(flip_ratio, list): - assert mmcv.is_list_of(flip_ratio, float) - assert 0 <= sum(flip_ratio) <= 1 - elif isinstance(flip_ratio, float): - assert 0 <= flip_ratio <= 1 - elif flip_ratio is None: - pass - else: - raise ValueError('flip_ratios must be None, float, ' - 'or list of float') - self.flip_ratio = flip_ratio - - valid_directions = ['horizontal', 'vertical', 'diagonal'] - if isinstance(direction, str): - assert direction in valid_directions - elif isinstance(direction, list): - assert mmcv.is_list_of(direction, str) - assert set(direction).issubset(set(valid_directions)) - else: - raise ValueError('direction must be either str or list of str') - self.direction = direction - - if isinstance(flip_ratio, list): - assert len(self.flip_ratio) == len(self.direction) - - def bbox_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - - assert bboxes.shape[-1] % 4 == 0 - flipped = bboxes.copy() - if direction == 'horizontal': - w = img_shape[1] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - elif direction == 'vertical': - h = img_shape[0] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - elif direction == 'diagonal': - w = img_shape[1] - h = img_shape[0] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - else: - raise ValueError(f"Invalid flipping direction '{direction}'") - return flipped - - def bbox3d_proj_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - - flipped = bboxes.copy() - if direction == 'horizontal': - w = img_shape[1] - - flipped[:,:,0] = w - bboxes[:,:, 0] - elif direction == 'vertical': - h = img_shape[0] - flipped[:,:,1] = h - bboxes[:,:, 1] - elif direction == 'diagonal': - w = img_shape[1] - h = img_shape[0] - flipped[:,:,0] = w - bboxes[:,:, 0] - flipped[:,:,1] = h - bboxes[:,:, 1] - else: - raise ValueError(f"Invalid flipping direction '{direction}'") - flipped[bboxes == -100] = -100 - return flipped - - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added \ - into result dict. 
- """ - - if 'flip' not in results: - if isinstance(self.direction, list): - # None means non-flip - direction_list = self.direction + [None] - else: - # None means non-flip - direction_list = [self.direction, None] - - if isinstance(self.flip_ratio, list): - non_flip_ratio = 1 - sum(self.flip_ratio) - flip_ratio_list = self.flip_ratio + [non_flip_ratio] - else: - non_flip_ratio = 1 - self.flip_ratio - # exclude non-flip - single_ratio = self.flip_ratio / (len(direction_list) - 1) - flip_ratio_list = [single_ratio] * (len(direction_list) - - 1) + [non_flip_ratio] - - cur_dir = np.random.choice(direction_list, p=flip_ratio_list) - - results['flip'] = cur_dir is not None - if 'flip_direction' not in results: - results['flip_direction'] = cur_dir - if results['flip']: - # flip image - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - # flip bboxes - for key in results.get('bbox_fields', []): - results[key] = self.bbox_flip(results[key], - results['img_shape'], - results['flip_direction']) - for key in results.get('bbox3d_fields', []): - if '_proj' in key: - results[key] = self.bbox3d_proj_flip(results[key], - results['img_shape'], - results['flip_direction']) - # flip masks - for key in results.get('mask_fields', []): - results[key] = results[key].flip(results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' - - -@PIPELINES.register_module() -class Pad(object): - """Pad the image & mask. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_val (float, optional): Padding value, 0 by default. - """ - - def __init__(self, size=None, size_divisor=None, pad_val=0): - self.size = size - self.size_divisor = size_divisor - self.pad_val = pad_val - # only one of size and size_divisor should be valid - assert size is not None or size_divisor is not None - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - for key in results.get('img_fields', ['img']): - if self.size is not None: - padded_img = mmcv.impad( - results[key], shape=self.size, pad_val=self.pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results[key], self.size_divisor, pad_val=self.pad_val) - results[key] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_masks(self, results): - """Pad masks according to ``results['pad_shape']``.""" - pad_shape = results['pad_shape'][:2] - for key in results.get('mask_fields', []): - results[key] = results[key].pad(pad_shape, pad_val=self.pad_val) - - def _pad_seg(self, results): - """Pad semantic segmentation map according to - ``results['pad_shape']``.""" - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], shape=results['pad_shape'][:2]) - - def __call__(self, results): - """Call function to pad images, masks, semantic segmentation maps. 
- - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. - """ - self._pad_img(results) - self._pad_masks(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, ' - repr_str += f'size_divisor={self.size_divisor}, ' - repr_str += f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize(object): - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. - """ - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imnormalize(results[key], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop(object): - """Random crop the image & bboxes & masks. - - The absolute `crop_size` is sampled based on `crop_type` and `image_size`, - then the cropped results are generated. - - Args: - crop_size (tuple): The relative ratio or absolute pixels of - height and width. - crop_type (str, optional): one of "relative_range", "relative", - "absolute", "absolute_range". "relative" randomly crops - (h * crop_size[0], w * crop_size[1]) part from an input of size - (h, w). "relative_range" uniformly samples relative crop size from - range [crop_size[0], 1] and [crop_size[1], 1] for height and width - respectively. "absolute" crops from an input with absolute size - (crop_size[0], crop_size[1]). "absolute_range" uniformly samples - crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w - in range [crop_size[0], min(w, crop_size[1])]. Default "absolute". - allow_negative_crop (bool, optional): Whether to allow a crop that does - not contain any bbox area. Default False. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - - If the image is smaller than the absolute crop size, return the - original image. - - The keys for bboxes, labels and masks must be aligned. That is, - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and - `gt_bboxes_ignore` corresponds to `gt_labels_ignore` and - `gt_masks_ignore`. - - If the crop does not contain any gt-bbox region and - `allow_negative_crop` is set to False, skip this image. 
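For reference, the four crop_type modes could be configured roughly as follows (the crop sizes are placeholder values, not taken from any config in this repository):

crop_absolute = dict(type='RandomCrop', crop_size=(384, 600),
                     crop_type='absolute')
crop_absolute_range = dict(type='RandomCrop', crop_size=(384, 600),
                           crop_type='absolute_range')  # crop_h, crop_w sampled per image
crop_relative = dict(type='RandomCrop', crop_size=(0.6, 0.6),
                     crop_type='relative')
crop_relative_range = dict(type='RandomCrop', crop_size=(0.6, 0.6),
                           crop_type='relative_range')  # ratios sampled from [0.6, 1]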
- """ - - def __init__(self, - crop_size, - crop_type='absolute', - allow_negative_crop=False, - bbox_clip_border=True): - if crop_type not in [ - 'relative_range', 'relative', 'absolute', 'absolute_range' - ]: - raise ValueError(f'Invalid crop_type {crop_type}.') - if crop_type in ['absolute', 'absolute_range']: - assert crop_size[0] > 0 and crop_size[1] > 0 - assert isinstance(crop_size[0], int) and isinstance( - crop_size[1], int) - else: - assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1 - self.crop_size = crop_size - self.crop_type = crop_type - self.allow_negative_crop = allow_negative_crop - self.bbox_clip_border = bbox_clip_border - # The key correspondence from bboxes to labels and masks. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def _crop_data(self, results, crop_size, allow_negative_crop): - """Function to randomly crop images, bounding boxes, masks, semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - crop_size (tuple): Expected absolute size after cropping, (h, w). - allow_negative_crop (bool): Whether to allow a crop that does not - contain any bbox area. Default to False. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - assert crop_size[0] > 0 and crop_size[1] > 0 - for key in results.get('img_fields', ['img']): - img = results[key] - margin_h = max(img.shape[0] - crop_size[0], 0) - margin_w = max(img.shape[1] - crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + crop_size[1] - - # crop the image - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - img_shape = img.shape - results[key] = img - results['img_shape'] = img_shape - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - # e.g. gt_bboxes and gt_bboxes_ignore - bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h], - dtype=np.float32) - bboxes = results[key] - bbox_offset - if self.bbox_clip_border: - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - # If the crop does not contain any gt-bbox area and - # allow_negative_crop is False, skip this image. - if (key == 'gt_bboxes' and not valid_inds.any() - and not allow_negative_crop): - return None - results[key] = bboxes[valid_inds, :] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - valid_inds.nonzero()[0]].crop( - np.asarray([crop_x1, crop_y1, crop_x2, crop_y2])) - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2] - - return results - - def _get_crop_size(self, image_size): - """Randomly generates the absolute crop size based on `crop_type` and - `image_size`. - - Args: - image_size (tuple): (h, w). - - Returns: - crop_size (tuple): (crop_h, crop_w) in absolute pixels. 
- """ - h, w = image_size - if self.crop_type == 'absolute': - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == 'absolute_range': - assert self.crop_size[0] <= self.crop_size[1] - crop_h = np.random.randint( - min(h, self.crop_size[0]), - min(h, self.crop_size[1]) + 1) - crop_w = np.random.randint( - min(w, self.crop_size[0]), - min(w, self.crop_size[1]) + 1) - return crop_h, crop_w - elif self.crop_type == 'relative': - crop_h, crop_w = self.crop_size - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - elif self.crop_type == 'relative_range': - crop_size = np.asarray(self.crop_size, dtype=np.float32) - crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - - def __call__(self, results): - """Call function to randomly crop images, bounding boxes, masks, - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - image_size = results['img'].shape[:2] - crop_size = self._get_crop_size(image_size) - results = self._crop_data(results, crop_size, self.allow_negative_crop) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'crop_type={self.crop_type}, ' - repr_str += f'allow_negative_crop={self.allow_negative_crop}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class SegRescale(object): - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. - backend (str): Image rescale backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - """ - - def __init__(self, scale_factor=1, backend='cv2'): - self.scale_factor = scale_factor - self.backend = backend - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], - self.scale_factor, - interpolation='nearest', - backend=self.backend) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion(object): - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - 8. randomly swap channels - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. 
- """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. - """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - assert img.dtype == np.float32, \ - 'PhotoMetricDistortion needs the input image of dtype np.float32,'\ - ' please set "to_float32=True" in "LoadImageFromFile" pipeline' - # random brightness - if random.randint(2): - delta = random.uniform(-self.brightness_delta, - self.brightness_delta) - img += delta - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # convert color from BGR to HSV - img = mmcv.bgr2hsv(img) - - # random saturation - if random.randint(2): - img[..., 1] *= random.uniform(self.saturation_lower, - self.saturation_upper) - - # random hue - if random.randint(2): - img[..., 0] += random.uniform(-self.hue_delta, self.hue_delta) - img[..., 0][img[..., 0] > 360] -= 360 - img[..., 0][img[..., 0] < 0] += 360 - - # convert color from HSV to BGR - img = mmcv.hsv2bgr(img) - - # random contrast - if mode == 0: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # randomly swap channels - if random.randint(2): - img = img[..., random.permutation(3)] - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(\nbrightness_delta={self.brightness_delta},\n' - repr_str += 'contrast_range=' - repr_str += f'{(self.contrast_lower, self.contrast_upper)},\n' - repr_str += 'saturation_range=' - repr_str += f'{(self.saturation_lower, self.saturation_upper)},\n' - repr_str += f'hue_delta={self.hue_delta})' - return repr_str - - -@PIPELINES.register_module() -class Expand(object): - """Random expand the image & bboxes. - - Randomly place the original image on a canvas of 'ratio' x original image - size filled with mean values. The ratio is in the range of ratio_range. - - Args: - mean (tuple): mean value of dataset. - to_rgb (bool): if need to convert the order of mean to align with RGB. - ratio_range (tuple): range of expand ratio. - prob (float): probability of applying this transformation - """ - - def __init__(self, - mean=(0, 0, 0), - to_rgb=True, - ratio_range=(1, 4), - seg_ignore_label=None, - prob=0.5): - self.to_rgb = to_rgb - self.ratio_range = ratio_range - if to_rgb: - self.mean = mean[::-1] - else: - self.mean = mean - self.min_ratio, self.max_ratio = ratio_range - self.seg_ignore_label = seg_ignore_label - self.prob = prob - - def __call__(self, results): - """Call function to expand images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. 
- - Returns: - dict: Result dict with images, bounding boxes expanded - """ - - if random.uniform(0, 1) > self.prob: - return results - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - - h, w, c = img.shape - ratio = random.uniform(self.min_ratio, self.max_ratio) - # speedup expand when meets large image - if np.all(self.mean == self.mean[0]): - expand_img = np.empty((int(h * ratio), int(w * ratio), c), - img.dtype) - expand_img.fill(self.mean[0]) - else: - expand_img = np.full((int(h * ratio), int(w * ratio), c), - self.mean, - dtype=img.dtype) - left = int(random.uniform(0, w * ratio - w)) - top = int(random.uniform(0, h * ratio - h)) - expand_img[top:top + h, left:left + w] = img - - results['img'] = expand_img - # expand bboxes - for key in results.get('bbox_fields', []): - results[key] = results[key] + np.tile( - (left, top), 2).astype(results[key].dtype) - - # expand masks - for key in results.get('mask_fields', []): - results[key] = results[key].expand( - int(h * ratio), int(w * ratio), top, left) - - # expand segs - for key in results.get('seg_fields', []): - gt_seg = results[key] - expand_gt_seg = np.full((int(h * ratio), int(w * ratio)), - self.seg_ignore_label, - dtype=gt_seg.dtype) - expand_gt_seg[top:top + h, left:left + w] = gt_seg - results[key] = expand_gt_seg - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label})' - return repr_str - - -@PIPELINES.register_module() -class MinIoURandomCrop(object): - """Random crop the image & bboxes, the cropped patches have minimum IoU - requirement with original image & bboxes, the IoU threshold is randomly - selected from min_ious. - - Args: - min_ious (tuple): minimum IoU threshold for all intersections with - bounding boxes - min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w, - where a >= min_crop_size). - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - The keys for bboxes, labels and masks should be paired. That is, \ - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and \ - `gt_bboxes_ignore` to `gt_labels_ignore` and `gt_masks_ignore`. - """ - - def __init__(self, - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3, - bbox_clip_border=True): - # 1: return ori img - self.min_ious = min_ious - self.sample_mode = (1, *min_ious, 0) - self.min_crop_size = min_crop_size - self.bbox_clip_border = bbox_clip_border - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def __call__(self, results): - """Call function to crop images and bounding boxes with minimum IoU - constraint. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images and bounding boxes cropped, \ - 'img_shape' key is updated. 
- """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - assert 'bbox_fields' in results - boxes = [results[key] for key in results['bbox_fields']] - boxes = np.concatenate(boxes, 0) - h, w, c = img.shape - while True: - mode = random.choice(self.sample_mode) - self.mode = mode - if mode == 1: - return results - - min_iou = mode - for i in range(50): - new_w = random.uniform(self.min_crop_size * w, w) - new_h = random.uniform(self.min_crop_size * h, h) - - # h / w in [0.5, 2] - if new_h / new_w < 0.5 or new_h / new_w > 2: - continue - - left = random.uniform(w - new_w) - top = random.uniform(h - new_h) - - patch = np.array( - (int(left), int(top), int(left + new_w), int(top + new_h))) - # Line or point crop is not allowed - if patch[2] == patch[0] or patch[3] == patch[1]: - continue - overlaps = bbox_overlaps( - patch.reshape(-1, 4), boxes.reshape(-1, 4)).reshape(-1) - if len(overlaps) > 0 and overlaps.min() < min_iou: - continue - - # center of boxes should inside the crop img - # only adjust boxes and instance masks when the gt is not empty - if len(overlaps) > 0: - # adjust boxes - def is_center_of_bboxes_in_patch(boxes, patch): - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = ((center[:, 0] > patch[0]) * - (center[:, 1] > patch[1]) * - (center[:, 0] < patch[2]) * - (center[:, 1] < patch[3])) - return mask - - mask = is_center_of_bboxes_in_patch(boxes, patch) - if not mask.any(): - continue - for key in results.get('bbox_fields', []): - boxes = results[key].copy() - mask = is_center_of_bboxes_in_patch(boxes, patch) - boxes = boxes[mask] - if self.bbox_clip_border: - boxes[:, 2:] = boxes[:, 2:].clip(max=patch[2:]) - boxes[:, :2] = boxes[:, :2].clip(min=patch[:2]) - boxes -= np.tile(patch[:2], 2) - - results[key] = boxes - # labels - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][mask] - - # mask fields - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - mask.nonzero()[0]].crop(patch) - # adjust the img no matter whether the gt is empty before crop - img = img[patch[1]:patch[3], patch[0]:patch[2]] - results['img'] = img - results['img_shape'] = img.shape - - # seg fields - for key in results.get('seg_fields', []): - results[key] = results[key][patch[1]:patch[3], - patch[0]:patch[2]] - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_ious={self.min_ious}, ' - repr_str += f'min_crop_size={self.min_crop_size}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class Corrupt(object): - """Corruption augmentation. - - Corruption transforms implemented based on - `imagecorruptions `_. - - Args: - corruption (str): Corruption name. - severity (int, optional): The severity of corruption. Default: 1. - """ - - def __init__(self, corruption, severity=1): - self.corruption = corruption - self.severity = severity - - def __call__(self, results): - """Call function to corrupt image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images corrupted. 
- """ - - if corrupt is None: - raise RuntimeError('imagecorruptions is not installed') - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - results['img'] = corrupt( - results['img'].astype(np.uint8), - corruption_name=self.corruption, - severity=self.severity) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(corruption={self.corruption}, ' - repr_str += f'severity={self.severity})' - return repr_str - - -@PIPELINES.register_module() -class Albu(object): - """Albumentation augmentation. - - Adds custom transformations from Albumentations library. - Please, visit `https://albumentations.readthedocs.io` - to get more information. - - An example of ``transforms`` is as followed: - - .. code-block:: - - [ - dict( - type='ShiftScaleRotate', - shift_limit=0.0625, - scale_limit=0.0, - rotate_limit=0, - interpolation=1, - p=0.5), - dict( - type='RandomBrightnessContrast', - brightness_limit=[0.1, 0.3], - contrast_limit=[0.1, 0.3], - p=0.2), - dict(type='ChannelShuffle', p=0.1), - dict( - type='OneOf', - transforms=[ - dict(type='Blur', blur_limit=3, p=1.0), - dict(type='MedianBlur', blur_limit=3, p=1.0) - ], - p=0.1), - ] - - Args: - transforms (list[dict]): A list of albu transformations - bbox_params (dict): Bbox_params for albumentation `Compose` - keymap (dict): Contains {'input key':'albumentation-style key'} - skip_img_without_anno (bool): Whether to skip the image if no ann left - after aug - """ - - def __init__(self, - transforms, - bbox_params=None, - keymap=None, - update_pad_shape=False, - skip_img_without_anno=False): - if Compose is None: - raise RuntimeError('albumentations is not installed') - - # Args will be modified later, copying it will be safer - transforms = copy.deepcopy(transforms) - if bbox_params is not None: - bbox_params = copy.deepcopy(bbox_params) - if keymap is not None: - keymap = copy.deepcopy(keymap) - self.transforms = transforms - self.filter_lost_elements = False - self.update_pad_shape = update_pad_shape - self.skip_img_without_anno = skip_img_without_anno - - # A simple workaround to remove masks without boxes - if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params - and 'filter_lost_elements' in bbox_params): - self.filter_lost_elements = True - self.origin_label_fields = bbox_params['label_fields'] - bbox_params['label_fields'] = ['idx_mapper'] - del bbox_params['filter_lost_elements'] - - self.bbox_params = ( - self.albu_builder(bbox_params) if bbox_params else None) - self.aug = Compose([self.albu_builder(t) for t in self.transforms], - bbox_params=self.bbox_params) - - if not keymap: - self.keymap_to_albu = { - 'img': 'image', - 'gt_masks': 'masks', - 'gt_bboxes': 'bboxes' - } - else: - self.keymap_to_albu = keymap - self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()} - - def albu_builder(self, cfg): - """Import a module from albumentations. - - It inherits some of :func:`build_from_cfg` logic. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - - Returns: - obj: The constructed object. 
- """ - - assert isinstance(cfg, dict) and 'type' in cfg - args = cfg.copy() - - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if albumentations is None: - raise RuntimeError('albumentations is not installed') - obj_cls = getattr(albumentations, obj_type) - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - - if 'transforms' in args: - args['transforms'] = [ - self.albu_builder(transform) - for transform in args['transforms'] - ] - - return obj_cls(**args) - - @staticmethod - def mapper(d, keymap): - """Dictionary mapper. Renames keys according to keymap provided. - - Args: - d (dict): old dict - keymap (dict): {'old_key':'new_key'} - Returns: - dict: new dict. - """ - - updated_dict = {} - for k, v in zip(d.keys(), d.values()): - new_k = keymap.get(k, k) - updated_dict[new_k] = d[k] - return updated_dict - - def __call__(self, results): - # dict to albumentations format - results = self.mapper(results, self.keymap_to_albu) - # TODO: add bbox_fields - if 'bboxes' in results: - # to list of boxes - if isinstance(results['bboxes'], np.ndarray): - results['bboxes'] = [x for x in results['bboxes']] - # add pseudo-field for filtration - if self.filter_lost_elements: - results['idx_mapper'] = np.arange(len(results['bboxes'])) - - # TODO: Support mask structure in albu - if 'masks' in results: - if isinstance(results['masks'], PolygonMasks): - raise NotImplementedError( - 'Albu only supports BitMap masks now') - ori_masks = results['masks'] - if albumentations.__version__ < '0.5': - results['masks'] = results['masks'].masks - else: - results['masks'] = [mask for mask in results['masks'].masks] - - results = self.aug(**results) - - if 'bboxes' in results: - if isinstance(results['bboxes'], list): - results['bboxes'] = np.array( - results['bboxes'], dtype=np.float32) - results['bboxes'] = results['bboxes'].reshape(-1, 4) - - # filter label_fields - if self.filter_lost_elements: - - for label in self.origin_label_fields: - results[label] = np.array( - [results[label][i] for i in results['idx_mapper']]) - if 'masks' in results: - results['masks'] = np.array( - [results['masks'][i] for i in results['idx_mapper']]) - results['masks'] = ori_masks.__class__( - results['masks'], results['image'].shape[0], - results['image'].shape[1]) - - if (not len(results['idx_mapper']) - and self.skip_img_without_anno): - return None - - if 'gt_labels' in results: - if isinstance(results['gt_labels'], list): - results['gt_labels'] = np.array(results['gt_labels']) - results['gt_labels'] = results['gt_labels'].astype(np.int64) - - # back to the original format - results = self.mapper(results, self.keymap_back) - - # update final shape - if self.update_pad_shape: - results['pad_shape'] = results['img'].shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ + f'(transforms={self.transforms})' - return repr_str - - -@PIPELINES.register_module() -class RandomCenterCropPad(object): - """Random center crop and random around padding for CornerNet. - - This operation generates randomly cropped image from the original image and - pads it simultaneously. Different from :class:`RandomCrop`, the output - shape may not equal to ``crop_size`` strictly. We choose a random value - from ``ratios`` and the output shape could be larger or smaller than - ``crop_size``. The padding operation is also different from :class:`Pad`, - here we use around padding instead of right-bottom padding. 
- - The relation between output image (padding image) and original image: - - .. code:: text - - output image - - +----------------------------+ - | padded area | - +------|----------------------------|----------+ - | | cropped area | | - | | +---------------+ | | - | | | . center | | | original image - | | | range | | | - | | +---------------+ | | - +------|----------------------------|----------+ - | padded area | - +----------------------------+ - - There are 5 main areas in the figure: - - - output image: output image of this operation, also called padding - image in following instruction. - - original image: input image of this operation. - - padded area: non-intersect area of output image and original image. - - cropped area: the overlap of output image and original image. - - center range: a smaller area where random center chosen from. - center range is computed by ``border`` and original image's shape - to avoid our random center is too close to original image's border. - - Also this operation act differently in train and test mode, the summary - pipeline is listed below. - - Train pipeline: - - 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image - will be ``random_ratio * crop_size``. - 2. Choose a ``random_center`` in center range. - 3. Generate padding image with center matches the ``random_center``. - 4. Initialize the padding image with pixel value equals to ``mean``. - 5. Copy the cropped area to padding image. - 6. Refine annotations. - - Test pipeline: - - 1. Compute output shape according to ``test_pad_mode``. - 2. Generate padding image with center matches the original image - center. - 3. Initialize the padding image with pixel value equals to ``mean``. - 4. Copy the ``cropped area`` to padding image. - - Args: - crop_size (tuple | None): expected size after crop, final size will - computed according to ratio. Requires (h, w) in train mode, and - None in test mode. - ratios (tuple): random select a ratio from tuple and crop image to - (crop_size[0] * ratio) * (crop_size[1] * ratio). - Only available in train mode. - border (int): max distance from center select area to image border. - Only available in train mode. - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB. - test_mode (bool): whether involve random variables in transform. - In train mode, crop_size is fixed, center coords and ratio is - random selected from predefined lists. In test mode, crop_size - is image's original shape, center coords and ratio is fixed. - test_pad_mode (tuple): padding method and padding shape value, only - available in test mode. Default is using 'logical_or' with - 127 as padding shape value. - - - 'logical_or': final_shape = input_shape | padding_shape_value - - 'size_divisor': final_shape = int( - ceil(input_shape / padding_shape_value) * padding_shape_value) - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. 
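A hedged sketch of train-mode and test-mode configurations that satisfy the constructor's assertions (all numeric values and the normalization statistics below are placeholders):

train_center_crop_pad = dict(
    type='RandomCenterCropPad',
    crop_size=(511, 511),
    ratios=(0.9, 1.0, 1.1),
    border=128,
    mean=[0.0, 0.0, 0.0], std=[1.0, 1.0, 1.0], to_rgb=True,
    test_mode=False,
    test_pad_mode=None)
test_center_crop_pad = dict(
    type='RandomCenterCropPad',
    crop_size=None, ratios=None, border=None,
    mean=[0.0, 0.0, 0.0], std=[1.0, 1.0, 1.0], to_rgb=True,
    test_mode=True,
    test_pad_mode=('logical_or', 127))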
- """ - - def __init__(self, - crop_size=None, - ratios=(0.9, 1.0, 1.1), - border=128, - mean=None, - std=None, - to_rgb=None, - test_mode=False, - test_pad_mode=('logical_or', 127), - bbox_clip_border=True): - if test_mode: - assert crop_size is None, 'crop_size must be None in test mode' - assert ratios is None, 'ratios must be None in test mode' - assert border is None, 'border must be None in test mode' - assert isinstance(test_pad_mode, (list, tuple)) - assert test_pad_mode[0] in ['logical_or', 'size_divisor'] - else: - assert isinstance(crop_size, (list, tuple)) - assert crop_size[0] > 0 and crop_size[1] > 0, ( - 'crop_size must > 0 in train mode') - assert isinstance(ratios, (list, tuple)) - assert test_pad_mode is None, ( - 'test_pad_mode must be None in train mode') - - self.crop_size = crop_size - self.ratios = ratios - self.border = border - # We do not set default value to mean, std and to_rgb because these - # hyper-parameters are easy to forget but could affect the performance. - # Please use the same setting as Normalize for performance assurance. - assert mean is not None and std is not None and to_rgb is not None - self.to_rgb = to_rgb - self.input_mean = mean - self.input_std = std - if to_rgb: - self.mean = mean[::-1] - self.std = std[::-1] - else: - self.mean = mean - self.std = std - self.test_mode = test_mode - self.test_pad_mode = test_pad_mode - self.bbox_clip_border = bbox_clip_border - - def _get_border(self, border, size): - """Get final border for the target size. - - This function generates a ``final_border`` according to image's shape. - The area between ``final_border`` and ``size - final_border`` is the - ``center range``. We randomly choose center from the ``center range`` - to avoid our random center is too close to original image's border. - Also ``center range`` should be larger than 0. - - Args: - border (int): The initial border, default is 128. - size (int): The width or height of original image. - Returns: - int: The final border. - """ - k = 2 * border / size - i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k))) - return border // i - - def _filter_boxes(self, patch, boxes): - """Check whether the center of each box is in the patch. - - Args: - patch (list[int]): The cropped area, [left, top, right, bottom]. - boxes (numpy array, (N x 4)): Ground truth boxes. - - Returns: - mask (numpy array, (N,)): Each box is inside or outside the patch. - """ - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * ( - center[:, 0] < patch[2]) * ( - center[:, 1] < patch[3]) - return mask - - def _crop_image_and_paste(self, image, center, size): - """Crop image with a given center and size, then paste the cropped - image to a blank image with two centers align. - - This function is equivalent to generating a blank image with ``size`` - as its shape. Then cover it on the original image with two centers ( - the center of blank image and the random center of original image) - aligned. The overlap area is paste from the original image and the - outside area is filled with ``mean pixel``. - - Args: - image (np array, H x W x C): Original image. - center (list[int]): Target crop center coord. - size (list[int]): Target crop size. [target_h, target_w] - - Returns: - cropped_img (np array, target_h x target_w x C): Cropped image. 
- border (np array, 4): The distance of four border of - ``cropped_img`` to the original image area, [top, bottom, - left, right] - patch (list[int]): The cropped area, [left, top, right, bottom]. - """ - center_y, center_x = center - target_h, target_w = size - img_h, img_w, img_c = image.shape - - x0 = max(0, center_x - target_w // 2) - x1 = min(center_x + target_w // 2, img_w) - y0 = max(0, center_y - target_h // 2) - y1 = min(center_y + target_h // 2, img_h) - patch = np.array((int(x0), int(y0), int(x1), int(y1))) - - left, right = center_x - x0, x1 - center_x - top, bottom = center_y - y0, y1 - center_y - - cropped_center_y, cropped_center_x = target_h // 2, target_w // 2 - cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype) - for i in range(img_c): - cropped_img[:, :, i] += self.mean[i] - y_slice = slice(cropped_center_y - top, cropped_center_y + bottom) - x_slice = slice(cropped_center_x - left, cropped_center_x + right) - cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :] - - border = np.array([ - cropped_center_y - top, cropped_center_y + bottom, - cropped_center_x - left, cropped_center_x + right - ], - dtype=np.float32) - - return cropped_img, border, patch - - def _train_aug(self, results): - """Random crop and around padding the original image. - - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - boxes = results['gt_bboxes'] - while True: - scale = random.choice(self.ratios) - new_h = int(self.crop_size[0] * scale) - new_w = int(self.crop_size[1] * scale) - h_border = self._get_border(self.border, h) - w_border = self._get_border(self.border, w) - - for i in range(50): - center_x = random.randint(low=w_border, high=w - w_border) - center_y = random.randint(low=h_border, high=h - h_border) - - cropped_img, border, patch = self._crop_image_and_paste( - img, [center_y, center_x], [new_h, new_w]) - - mask = self._filter_boxes(patch, boxes) - # if image do not have valid bbox, any crop patch is valid. - if not mask.any() and len(boxes) > 0: - continue - - results['img'] = cropped_img - results['img_shape'] = cropped_img.shape - results['pad_shape'] = cropped_img.shape - - x0, y0, x1, y1 = patch - - left_w, top_h = center_x - x0, center_y - y0 - cropped_center_x, cropped_center_y = new_w // 2, new_h // 2 - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - mask = self._filter_boxes(patch, results[key]) - bboxes = results[key][mask] - bboxes[:, 0:4:2] += cropped_center_x - left_w - x0 - bboxes[:, 1:4:2] += cropped_center_y - top_h - y0 - if self.bbox_clip_border: - bboxes[:, 0:4:2] = np.clip(bboxes[:, 0:4:2], 0, new_w) - bboxes[:, 1:4:2] = np.clip(bboxes[:, 1:4:2], 0, new_h) - keep = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - bboxes = bboxes[keep] - results[key] = bboxes - if key in ['gt_bboxes']: - if 'gt_labels' in results: - labels = results['gt_labels'][mask] - labels = labels[keep] - results['gt_labels'] = labels - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - - # crop semantic seg - for key in results.get('seg_fields', []): - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - return results - - def _test_aug(self, results): - """Around padding the original image without cropping. - - The padding mode and value are from ``test_pad_mode``. 
- - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - results['img_shape'] = img.shape - if self.test_pad_mode[0] in ['logical_or']: - target_h = h | self.test_pad_mode[1] - target_w = w | self.test_pad_mode[1] - elif self.test_pad_mode[0] in ['size_divisor']: - divisor = self.test_pad_mode[1] - target_h = int(np.ceil(h / divisor)) * divisor - target_w = int(np.ceil(w / divisor)) * divisor - else: - raise NotImplementedError( - 'RandomCenterCropPad only support two testing pad mode:' - 'logical-or and size_divisor.') - - cropped_img, border, _ = self._crop_image_and_paste( - img, [h // 2, w // 2], [target_h, target_w]) - results['img'] = cropped_img - results['pad_shape'] = cropped_img.shape - results['border'] = border - return results - - def __call__(self, results): - img = results['img'] - assert img.dtype == np.float32, ( - 'RandomCenterCropPad needs the input image of dtype np.float32,' - ' please set "to_float32=True" in "LoadImageFromFile" pipeline') - h, w, c = img.shape - assert c == len(self.mean) - if self.test_mode: - return self._test_aug(results) - else: - return self._train_aug(results) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'ratios={self.ratios}, ' - repr_str += f'border={self.border}, ' - repr_str += f'mean={self.input_mean}, ' - repr_str += f'std={self.input_std}, ' - repr_str += f'to_rgb={self.to_rgb}, ' - repr_str += f'test_mode={self.test_mode}, ' - repr_str += f'test_pad_mode={self.test_pad_mode}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class CutOut(object): - """CutOut operation. - - Randomly drop some regions of image used in - `Cutout `_. - - Args: - n_holes (int | tuple[int, int]): Number of regions to be dropped. - If it is given as a list, number of holes will be randomly - selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. - cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate - shape of dropped regions. It can be `tuple[int, int]` to use a - fixed cutout shape, or `list[tuple[int, int]]` to randomly choose - shape from the list. - cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The - candidate ratio of dropped regions. It can be `tuple[float, float]` - to use a fixed ratio or `list[tuple[float, float]]` to randomly - choose ratio from the list. Please note that `cutout_shape` - and `cutout_ratio` cannot be both given at the same time. - fill_in (tuple[float, float, float] | tuple[int, int, int]): The value - of pixel to fill in the dropped regions. Default: (0, 0, 0). - """ - - def __init__(self, - n_holes, - cutout_shape=None, - cutout_ratio=None, - fill_in=(0, 0, 0)): - - assert (cutout_shape is None) ^ (cutout_ratio is None), \ - 'Either cutout_shape or cutout_ratio should be specified.' 
- assert (isinstance(cutout_shape, (list, tuple)) - or isinstance(cutout_ratio, (list, tuple))) - if isinstance(n_holes, tuple): - assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] - else: - n_holes = (n_holes, n_holes) - self.n_holes = n_holes - self.fill_in = fill_in - self.with_ratio = cutout_ratio is not None - self.candidates = cutout_ratio if self.with_ratio else cutout_shape - if not isinstance(self.candidates, list): - self.candidates = [self.candidates] - - def __call__(self, results): - """Call function to drop some regions of image.""" - h, w, c = results['img'].shape - n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) - for _ in range(n_holes): - x1 = np.random.randint(0, w) - y1 = np.random.randint(0, h) - index = np.random.randint(0, len(self.candidates)) - if not self.with_ratio: - cutout_w, cutout_h = self.candidates[index] - else: - cutout_w = int(self.candidates[index][0] * w) - cutout_h = int(self.candidates[index][1] * h) - - x2 = np.clip(x1 + cutout_w, 0, w) - y2 = np.clip(y1 + cutout_h, 0, h) - results['img'][y1:y2, x1:x2, :] = self.fill_in - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(n_holes={self.n_holes}, ' - repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio - else f'cutout_shape={self.candidates}, ') - repr_str += f'fill_in={self.fill_in})' - return repr_str diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Theb.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Theb.py deleted file mode 100644 index aa43ebc55d74ffaa722fe008424fce97c622a323..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Theb.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import json -import time -import subprocess - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://theb.ai' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - path = os.path.dirname(os.path.realpath(__file__)) - config = json.dumps({ - 'messages': messages, - 'model': model}, separators=(',', ':')) - - cmd = ['python3', f'{path}/helpers/theb.py', config] - - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - - for line in iter(p.stdout.readline, b''): - yield line.decode('utf-8') - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_config.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_config.py deleted file mode 100644 index 96d4200773d85eef9e846a4e57d63d0f2ee1b9aa..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_config.py +++ /dev/null @@ -1,31 +0,0 @@ -# SPDX-License-Identifier: MIT - - -__all__ = ["set_run_validators", "get_run_validators"] - -_run_validators = True - - -def set_run_validators(run): - """ - Set whether or not validators are run. By default, they are run. - - .. deprecated:: 21.3.0 It will not be removed, but it also will not be - moved to new ``attrs`` namespace. Use `attrs.validators.set_disabled()` - instead. 
- """ - if not isinstance(run, bool): - raise TypeError("'run' must be bool.") - global _run_validators - _run_validators = run - - -def get_run_validators(): - """ - Return whether or not validators are run. - - .. deprecated:: 21.3.0 It will not be removed, but it also will not be - moved to new ``attrs`` namespace. Use `attrs.validators.get_disabled()` - instead. - """ - return _run_validators diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/_parser.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/_parser.py deleted file mode 100644 index 37d1663b2f72447800d9a553929e3de932244289..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/_parser.py +++ /dev/null @@ -1,1613 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers a generic date/time string parser which is able to parse -most known formats to represent a date and/or time. - -This module attempts to be forgiving with regards to unlikely input formats, -returning a datetime object even for dates which are ambiguous. If an element -of a date/time stamp is omitted, the following rules are applied: - -- If AM or PM is left unspecified, a 24-hour clock is assumed, however, an hour - on a 12-hour clock (``0 <= hour <= 12``) *must* be specified if AM or PM is - specified. -- If a time zone is omitted, a timezone-naive datetime is returned. - -If any other elements are missing, they are taken from the -:class:`datetime.datetime` object passed to the parameter ``default``. If this -results in a day number exceeding the valid number of days per month, the -value falls back to the end of the month. - -Additional resources about date/time string formats can be found below: - -- `A summary of the international standard date and time notation - `_ -- `W3C Date and Time Formats `_ -- `Time Formats (Planetary Rings Node) `_ -- `CPAN ParseDate module - `_ -- `Java SimpleDateFormat Class - `_ -""" -from __future__ import unicode_literals - -import datetime -import re -import string -import time -import warnings - -from calendar import monthrange -from io import StringIO - -import six -from six import integer_types, text_type - -from decimal import Decimal - -from warnings import warn - -from .. import relativedelta -from .. import tz - -__all__ = ["parse", "parserinfo", "ParserError"] - - -# TODO: pandas.core.tools.datetimes imports this explicitly. Might be worth -# making public and/or figuring out if there is something we can -# take off their plate. -class _timelex(object): - # Fractional seconds are sometimes split by a comma - _split_decimal = re.compile("([.,])") - - def __init__(self, instream): - if isinstance(instream, (bytes, bytearray)): - instream = instream.decode() - - if isinstance(instream, text_type): - instream = StringIO(instream) - elif getattr(instream, 'read', None) is None: - raise TypeError('Parser must be a string or character stream, not ' - '{itype}'.format(itype=instream.__class__.__name__)) - - self.instream = instream - self.charstack = [] - self.tokenstack = [] - self.eof = False - - def get_token(self): - """ - This function breaks the time string into lexical units (tokens), which - can be parsed by the parser. Lexical units are demarcated by changes in - the character set, so any continuous string of letters is considered - one unit, any continuous string of numbers is considered one unit. 
- - The main complication arises from the fact that dots ('.') can be used - both as separators (e.g. "Sep.20.2009") or decimal points (e.g. - "4:30:21.447"). As such, it is necessary to read the full context of - any dot-separated strings before breaking it into tokens; as such, this - function maintains a "token stack", for when the ambiguous context - demands that multiple tokens be parsed at once. - """ - if self.tokenstack: - return self.tokenstack.pop(0) - - seenletters = False - token = None - state = None - - while not self.eof: - # We only realize that we've reached the end of a token when we - # find a character that's not part of the current token - since - # that character may be part of the next token, it's stored in the - # charstack. - if self.charstack: - nextchar = self.charstack.pop(0) - else: - nextchar = self.instream.read(1) - while nextchar == '\x00': - nextchar = self.instream.read(1) - - if not nextchar: - self.eof = True - break - elif not state: - # First character of the token - determines if we're starting - # to parse a word, a number or something else. - token = nextchar - if self.isword(nextchar): - state = 'a' - elif self.isnum(nextchar): - state = '0' - elif self.isspace(nextchar): - token = ' ' - break # emit token - else: - break # emit token - elif state == 'a': - # If we've already started reading a word, we keep reading - # letters until we find something that's not part of a word. - seenletters = True - if self.isword(nextchar): - token += nextchar - elif nextchar == '.': - token += nextchar - state = 'a.' - else: - self.charstack.append(nextchar) - break # emit token - elif state == '0': - # If we've already started reading a number, we keep reading - # numbers until we find something that doesn't fit. - if self.isnum(nextchar): - token += nextchar - elif nextchar == '.' or (nextchar == ',' and len(token) >= 2): - token += nextchar - state = '0.' - else: - self.charstack.append(nextchar) - break # emit token - elif state == 'a.': - # If we've seen some letters and a dot separator, continue - # parsing, and the tokens will be broken up later. - seenletters = True - if nextchar == '.' or self.isword(nextchar): - token += nextchar - elif self.isnum(nextchar) and token[-1] == '.': - token += nextchar - state = '0.' - else: - self.charstack.append(nextchar) - break # emit token - elif state == '0.': - # If we've seen at least one dot separator, keep going, we'll - # break up the tokens later. - if nextchar == '.' or self.isnum(nextchar): - token += nextchar - elif self.isword(nextchar) and token[-1] == '.': - token += nextchar - state = 'a.' - else: - self.charstack.append(nextchar) - break # emit token - - if (state in ('a.', '0.') and (seenletters or token.count('.') > 1 or - token[-1] in '.,')): - l = self._split_decimal.split(token) - token = l[0] - for tok in l[1:]: - if tok: - self.tokenstack.append(tok) - - if state == '0.' 
and token.count('.') == 0: - token = token.replace(',', '.') - - return token - - def __iter__(self): - return self - - def __next__(self): - token = self.get_token() - if token is None: - raise StopIteration - - return token - - def next(self): - return self.__next__() # Python 2.x support - - @classmethod - def split(cls, s): - return list(cls(s)) - - @classmethod - def isword(cls, nextchar): - """ Whether or not the next character is part of a word """ - return nextchar.isalpha() - - @classmethod - def isnum(cls, nextchar): - """ Whether the next character is part of a number """ - return nextchar.isdigit() - - @classmethod - def isspace(cls, nextchar): - """ Whether the next character is whitespace """ - return nextchar.isspace() - - -class _resultbase(object): - - def __init__(self): - for attr in self.__slots__: - setattr(self, attr, None) - - def _repr(self, classname): - l = [] - for attr in self.__slots__: - value = getattr(self, attr) - if value is not None: - l.append("%s=%s" % (attr, repr(value))) - return "%s(%s)" % (classname, ", ".join(l)) - - def __len__(self): - return (sum(getattr(self, attr) is not None - for attr in self.__slots__)) - - def __repr__(self): - return self._repr(self.__class__.__name__) - - -class parserinfo(object): - """ - Class which handles what inputs are accepted. Subclass this to customize - the language and acceptable values for each parameter. - - :param dayfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the day (``True``) or month (``False``). If - ``yearfirst`` is set to ``True``, this distinguishes between YDM - and YMD. Default is ``False``. - - :param yearfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the year. If ``True``, the first number is taken - to be the year, otherwise the last number is taken to be the year. - Default is ``False``. 
- """ - - # m from a.m/p.m, t from ISO T separator - JUMP = [" ", ".", ",", ";", "-", "/", "'", - "at", "on", "and", "ad", "m", "t", "of", - "st", "nd", "rd", "th"] - - WEEKDAYS = [("Mon", "Monday"), - ("Tue", "Tuesday"), # TODO: "Tues" - ("Wed", "Wednesday"), - ("Thu", "Thursday"), # TODO: "Thurs" - ("Fri", "Friday"), - ("Sat", "Saturday"), - ("Sun", "Sunday")] - MONTHS = [("Jan", "January"), - ("Feb", "February"), # TODO: "Febr" - ("Mar", "March"), - ("Apr", "April"), - ("May", "May"), - ("Jun", "June"), - ("Jul", "July"), - ("Aug", "August"), - ("Sep", "Sept", "September"), - ("Oct", "October"), - ("Nov", "November"), - ("Dec", "December")] - HMS = [("h", "hour", "hours"), - ("m", "minute", "minutes"), - ("s", "second", "seconds")] - AMPM = [("am", "a"), - ("pm", "p")] - UTCZONE = ["UTC", "GMT", "Z", "z"] - PERTAIN = ["of"] - TZOFFSET = {} - # TODO: ERA = ["AD", "BC", "CE", "BCE", "Stardate", - # "Anno Domini", "Year of Our Lord"] - - def __init__(self, dayfirst=False, yearfirst=False): - self._jump = self._convert(self.JUMP) - self._weekdays = self._convert(self.WEEKDAYS) - self._months = self._convert(self.MONTHS) - self._hms = self._convert(self.HMS) - self._ampm = self._convert(self.AMPM) - self._utczone = self._convert(self.UTCZONE) - self._pertain = self._convert(self.PERTAIN) - - self.dayfirst = dayfirst - self.yearfirst = yearfirst - - self._year = time.localtime().tm_year - self._century = self._year // 100 * 100 - - def _convert(self, lst): - dct = {} - for i, v in enumerate(lst): - if isinstance(v, tuple): - for v in v: - dct[v.lower()] = i - else: - dct[v.lower()] = i - return dct - - def jump(self, name): - return name.lower() in self._jump - - def weekday(self, name): - try: - return self._weekdays[name.lower()] - except KeyError: - pass - return None - - def month(self, name): - try: - return self._months[name.lower()] + 1 - except KeyError: - pass - return None - - def hms(self, name): - try: - return self._hms[name.lower()] - except KeyError: - return None - - def ampm(self, name): - try: - return self._ampm[name.lower()] - except KeyError: - return None - - def pertain(self, name): - return name.lower() in self._pertain - - def utczone(self, name): - return name.lower() in self._utczone - - def tzoffset(self, name): - if name in self._utczone: - return 0 - - return self.TZOFFSET.get(name) - - def convertyear(self, year, century_specified=False): - """ - Converts two-digit years to year within [-50, 49] - range of self._year (current local time) - """ - - # Function contract is that the year is always positive - assert year >= 0 - - if year < 100 and not century_specified: - # assume current century to start - year += self._century - - if year >= self._year + 50: # if too far in future - year -= 100 - elif year < self._year - 50: # if too far in past - year += 100 - - return year - - def validate(self, res): - # move to info - if res.year is not None: - res.year = self.convertyear(res.year, res.century_specified) - - if ((res.tzoffset == 0 and not res.tzname) or - (res.tzname == 'Z' or res.tzname == 'z')): - res.tzname = "UTC" - res.tzoffset = 0 - elif res.tzoffset != 0 and res.tzname and self.utczone(res.tzname): - res.tzoffset = 0 - return True - - -class _ymd(list): - def __init__(self, *args, **kwargs): - super(self.__class__, self).__init__(*args, **kwargs) - self.century_specified = False - self.dstridx = None - self.mstridx = None - self.ystridx = None - - @property - def has_year(self): - return self.ystridx is not None - - @property - def has_month(self): - 
return self.mstridx is not None - - @property - def has_day(self): - return self.dstridx is not None - - def could_be_day(self, value): - if self.has_day: - return False - elif not self.has_month: - return 1 <= value <= 31 - elif not self.has_year: - # Be permissive, assume leap year - month = self[self.mstridx] - return 1 <= value <= monthrange(2000, month)[1] - else: - month = self[self.mstridx] - year = self[self.ystridx] - return 1 <= value <= monthrange(year, month)[1] - - def append(self, val, label=None): - if hasattr(val, '__len__'): - if val.isdigit() and len(val) > 2: - self.century_specified = True - if label not in [None, 'Y']: # pragma: no cover - raise ValueError(label) - label = 'Y' - elif val > 100: - self.century_specified = True - if label not in [None, 'Y']: # pragma: no cover - raise ValueError(label) - label = 'Y' - - super(self.__class__, self).append(int(val)) - - if label == 'M': - if self.has_month: - raise ValueError('Month is already set') - self.mstridx = len(self) - 1 - elif label == 'D': - if self.has_day: - raise ValueError('Day is already set') - self.dstridx = len(self) - 1 - elif label == 'Y': - if self.has_year: - raise ValueError('Year is already set') - self.ystridx = len(self) - 1 - - def _resolve_from_stridxs(self, strids): - """ - Try to resolve the identities of year/month/day elements using - ystridx, mstridx, and dstridx, if enough of these are specified. - """ - if len(self) == 3 and len(strids) == 2: - # we can back out the remaining stridx value - missing = [x for x in range(3) if x not in strids.values()] - key = [x for x in ['y', 'm', 'd'] if x not in strids] - assert len(missing) == len(key) == 1 - key = key[0] - val = missing[0] - strids[key] = val - - assert len(self) == len(strids) # otherwise this should not be called - out = {key: self[strids[key]] for key in strids} - return (out.get('y'), out.get('m'), out.get('d')) - - def resolve_ymd(self, yearfirst, dayfirst): - len_ymd = len(self) - year, month, day = (None, None, None) - - strids = (('y', self.ystridx), - ('m', self.mstridx), - ('d', self.dstridx)) - - strids = {key: val for key, val in strids if val is not None} - if (len(self) == len(strids) > 0 or - (len(self) == 3 and len(strids) == 2)): - return self._resolve_from_stridxs(strids) - - mstridx = self.mstridx - - if len_ymd > 3: - raise ValueError("More than three YMD values") - elif len_ymd == 1 or (mstridx is not None and len_ymd == 2): - # One member, or two members with a month string - if mstridx is not None: - month = self[mstridx] - # since mstridx is 0 or 1, self[mstridx-1] always - # looks up the other element - other = self[mstridx - 1] - else: - other = self[0] - - if len_ymd > 1 or mstridx is None: - if other > 31: - year = other - else: - day = other - - elif len_ymd == 2: - # Two members with numbers - if self[0] > 31: - # 99-01 - year, month = self - elif self[1] > 31: - # 01-99 - month, year = self - elif dayfirst and self[1] <= 12: - # 13-01 - day, month = self - else: - # 01-13 - month, day = self - - elif len_ymd == 3: - # Three members - if mstridx == 0: - if self[1] > 31: - # Apr-2003-25 - month, year, day = self - else: - month, day, year = self - elif mstridx == 1: - if self[0] > 31 or (yearfirst and self[2] <= 31): - # 99-Jan-01 - year, month, day = self - else: - # 01-Jan-01 - # Give precedence to day-first, since - # two-digit years is usually hand-written. - day, month, year = self - - elif mstridx == 2: - # WTF!? 
- if self[1] > 31: - # 01-99-Jan - day, year, month = self - else: - # 99-01-Jan - year, day, month = self - - else: - if (self[0] > 31 or - self.ystridx == 0 or - (yearfirst and self[1] <= 12 and self[2] <= 31)): - # 99-01-01 - if dayfirst and self[2] <= 12: - year, day, month = self - else: - year, month, day = self - elif self[0] > 12 or (dayfirst and self[1] <= 12): - # 13-01-01 - day, month, year = self - else: - # 01-13-01 - month, day, year = self - - return year, month, day - - -class parser(object): - def __init__(self, info=None): - self.info = info or parserinfo() - - def parse(self, timestr, default=None, - ignoretz=False, tzinfos=None, **kwargs): - """ - Parse the date/time string into a :class:`datetime.datetime` object. - - :param timestr: - Any date/time string using the supported formats. - - :param default: - The default datetime object, if this is a datetime object and not - ``None``, elements specified in ``timestr`` replace elements in the - default object. - - :param ignoretz: - If set ``True``, time zones in parsed strings are ignored and a - naive :class:`datetime.datetime` object is returned. - - :param tzinfos: - Additional time zone names / aliases which may be present in the - string. This argument maps time zone names (and optionally offsets - from those time zones) to time zones. This parameter can be a - dictionary with timezone aliases mapping time zone names to time - zones or a function taking two parameters (``tzname`` and - ``tzoffset``) and returning a time zone. - - The timezones to which the names are mapped can be an integer - offset from UTC in seconds or a :class:`tzinfo` object. - - .. doctest:: - :options: +NORMALIZE_WHITESPACE - - >>> from dateutil.parser import parse - >>> from dateutil.tz import gettz - >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")} - >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200)) - >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, - tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago')) - - This parameter is ignored if ``ignoretz`` is set. - - :param \\*\\*kwargs: - Keyword arguments as passed to ``_parse()``. - - :return: - Returns a :class:`datetime.datetime` object or, if the - ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the - first element being a :class:`datetime.datetime` object, the second - a tuple containing the fuzzy tokens. - - :raises ParserError: - Raised for invalid or unknown string format, if the provided - :class:`tzinfo` is not in a valid format, or if an invalid date - would be created. - - :raises TypeError: - Raised for non-string or character stream input. - - :raises OverflowError: - Raised if the parsed date exceeds the largest valid C integer on - your system. 
- """ - - if default is None: - default = datetime.datetime.now().replace(hour=0, minute=0, - second=0, microsecond=0) - - res, skipped_tokens = self._parse(timestr, **kwargs) - - if res is None: - raise ParserError("Unknown string format: %s", timestr) - - if len(res) == 0: - raise ParserError("String does not contain a date: %s", timestr) - - try: - ret = self._build_naive(res, default) - except ValueError as e: - six.raise_from(ParserError(str(e) + ": %s", timestr), e) - - if not ignoretz: - ret = self._build_tzaware(ret, res, tzinfos) - - if kwargs.get('fuzzy_with_tokens', False): - return ret, skipped_tokens - else: - return ret - - class _result(_resultbase): - __slots__ = ["year", "month", "day", "weekday", - "hour", "minute", "second", "microsecond", - "tzname", "tzoffset", "ampm","any_unused_tokens"] - - def _parse(self, timestr, dayfirst=None, yearfirst=None, fuzzy=False, - fuzzy_with_tokens=False): - """ - Private method which performs the heavy lifting of parsing, called from - ``parse()``, which passes on its ``kwargs`` to this function. - - :param timestr: - The string to parse. - - :param dayfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the day (``True``) or month (``False``). If - ``yearfirst`` is set to ``True``, this distinguishes between YDM - and YMD. If set to ``None``, this value is retrieved from the - current :class:`parserinfo` object (which itself defaults to - ``False``). - - :param yearfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the year. If ``True``, the first number is taken - to be the year, otherwise the last number is taken to be the year. - If this is set to ``None``, the value is retrieved from the current - :class:`parserinfo` object (which itself defaults to ``False``). - - :param fuzzy: - Whether to allow fuzzy parsing, allowing for string like "Today is - January 1, 2047 at 8:21:00AM". - - :param fuzzy_with_tokens: - If ``True``, ``fuzzy`` is automatically set to True, and the parser - will return a tuple where the first element is the parsed - :class:`datetime.datetime` datetimestamp and the second element is - a tuple containing the portions of the string which were ignored: - - .. 
doctest:: - - >>> from dateutil.parser import parse - >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True) - (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at ')) - - """ - if fuzzy_with_tokens: - fuzzy = True - - info = self.info - - if dayfirst is None: - dayfirst = info.dayfirst - - if yearfirst is None: - yearfirst = info.yearfirst - - res = self._result() - l = _timelex.split(timestr) # Splits the timestr into tokens - - skipped_idxs = [] - - # year/month/day list - ymd = _ymd() - - len_l = len(l) - i = 0 - try: - while i < len_l: - - # Check if it's a number - value_repr = l[i] - try: - value = float(value_repr) - except ValueError: - value = None - - if value is not None: - # Numeric token - i = self._parse_numeric_token(l, i, info, ymd, res, fuzzy) - - # Check weekday - elif info.weekday(l[i]) is not None: - value = info.weekday(l[i]) - res.weekday = value - - # Check month name - elif info.month(l[i]) is not None: - value = info.month(l[i]) - ymd.append(value, 'M') - - if i + 1 < len_l: - if l[i + 1] in ('-', '/'): - # Jan-01[-99] - sep = l[i + 1] - ymd.append(l[i + 2]) - - if i + 3 < len_l and l[i + 3] == sep: - # Jan-01-99 - ymd.append(l[i + 4]) - i += 2 - - i += 2 - - elif (i + 4 < len_l and l[i + 1] == l[i + 3] == ' ' and - info.pertain(l[i + 2])): - # Jan of 01 - # In this case, 01 is clearly year - if l[i + 4].isdigit(): - # Convert it here to become unambiguous - value = int(l[i + 4]) - year = str(info.convertyear(value)) - ymd.append(year, 'Y') - else: - # Wrong guess - pass - # TODO: not hit in tests - i += 4 - - # Check am/pm - elif info.ampm(l[i]) is not None: - value = info.ampm(l[i]) - val_is_ampm = self._ampm_valid(res.hour, res.ampm, fuzzy) - - if val_is_ampm: - res.hour = self._adjust_ampm(res.hour, value) - res.ampm = value - - elif fuzzy: - skipped_idxs.append(i) - - # Check for a timezone name - elif self._could_be_tzname(res.hour, res.tzname, res.tzoffset, l[i]): - res.tzname = l[i] - res.tzoffset = info.tzoffset(res.tzname) - - # Check for something like GMT+3, or BRST+3. Notice - # that it doesn't mean "I am 3 hours after GMT", but - # "my time +3 is GMT". If found, we reverse the - # logic so that timezone parsing code will get it - # right. - if i + 1 < len_l and l[i + 1] in ('+', '-'): - l[i + 1] = ('+', '-')[l[i + 1] == '+'] - res.tzoffset = None - if info.utczone(res.tzname): - # With something like GMT+3, the timezone - # is *not* GMT. - res.tzname = None - - # Check for a numbered timezone - elif res.hour is not None and l[i] in ('+', '-'): - signal = (-1, 1)[l[i] == '+'] - len_li = len(l[i + 1]) - - # TODO: check that l[i + 1] is integer? - if len_li == 4: - # -0300 - hour_offset = int(l[i + 1][:2]) - min_offset = int(l[i + 1][2:]) - elif i + 2 < len_l and l[i + 2] == ':': - # -03:00 - hour_offset = int(l[i + 1]) - min_offset = int(l[i + 3]) # TODO: Check that l[i+3] is minute-like? 
- i += 2 - elif len_li <= 2: - # -[0]3 - hour_offset = int(l[i + 1][:2]) - min_offset = 0 - else: - raise ValueError(timestr) - - res.tzoffset = signal * (hour_offset * 3600 + min_offset * 60) - - # Look for a timezone name between parenthesis - if (i + 5 < len_l and - info.jump(l[i + 2]) and l[i + 3] == '(' and - l[i + 5] == ')' and - 3 <= len(l[i + 4]) and - self._could_be_tzname(res.hour, res.tzname, - None, l[i + 4])): - # -0300 (BRST) - res.tzname = l[i + 4] - i += 4 - - i += 1 - - # Check jumps - elif not (info.jump(l[i]) or fuzzy): - raise ValueError(timestr) - - else: - skipped_idxs.append(i) - i += 1 - - # Process year/month/day - year, month, day = ymd.resolve_ymd(yearfirst, dayfirst) - - res.century_specified = ymd.century_specified - res.year = year - res.month = month - res.day = day - - except (IndexError, ValueError): - return None, None - - if not info.validate(res): - return None, None - - if fuzzy_with_tokens: - skipped_tokens = self._recombine_skipped(l, skipped_idxs) - return res, tuple(skipped_tokens) - else: - return res, None - - def _parse_numeric_token(self, tokens, idx, info, ymd, res, fuzzy): - # Token is a number - value_repr = tokens[idx] - try: - value = self._to_decimal(value_repr) - except Exception as e: - six.raise_from(ValueError('Unknown numeric token'), e) - - len_li = len(value_repr) - - len_l = len(tokens) - - if (len(ymd) == 3 and len_li in (2, 4) and - res.hour is None and - (idx + 1 >= len_l or - (tokens[idx + 1] != ':' and - info.hms(tokens[idx + 1]) is None))): - # 19990101T23[59] - s = tokens[idx] - res.hour = int(s[:2]) - - if len_li == 4: - res.minute = int(s[2:]) - - elif len_li == 6 or (len_li > 6 and tokens[idx].find('.') == 6): - # YYMMDD or HHMMSS[.ss] - s = tokens[idx] - - if not ymd and '.' not in tokens[idx]: - ymd.append(s[:2]) - ymd.append(s[2:4]) - ymd.append(s[4:]) - else: - # 19990101T235959[.59] - - # TODO: Check if res attributes already set. - res.hour = int(s[:2]) - res.minute = int(s[2:4]) - res.second, res.microsecond = self._parsems(s[4:]) - - elif len_li in (8, 12, 14): - # YYYYMMDD - s = tokens[idx] - ymd.append(s[:4], 'Y') - ymd.append(s[4:6]) - ymd.append(s[6:8]) - - if len_li > 8: - res.hour = int(s[8:10]) - res.minute = int(s[10:12]) - - if len_li > 12: - res.second = int(s[12:]) - - elif self._find_hms_idx(idx, tokens, info, allow_jump=True) is not None: - # HH[ ]h or MM[ ]m or SS[.ss][ ]s - hms_idx = self._find_hms_idx(idx, tokens, info, allow_jump=True) - (idx, hms) = self._parse_hms(idx, tokens, info, hms_idx) - if hms is not None: - # TODO: checking that hour/minute/second are not - # already set? - self._assign_hms(res, value_repr, hms) - - elif idx + 2 < len_l and tokens[idx + 1] == ':': - # HH:MM[:SS[.ss]] - res.hour = int(value) - value = self._to_decimal(tokens[idx + 2]) # TODO: try/except for this? 
- (res.minute, res.second) = self._parse_min_sec(value) - - if idx + 4 < len_l and tokens[idx + 3] == ':': - res.second, res.microsecond = self._parsems(tokens[idx + 4]) - - idx += 2 - - idx += 2 - - elif idx + 1 < len_l and tokens[idx + 1] in ('-', '/', '.'): - sep = tokens[idx + 1] - ymd.append(value_repr) - - if idx + 2 < len_l and not info.jump(tokens[idx + 2]): - if tokens[idx + 2].isdigit(): - # 01-01[-01] - ymd.append(tokens[idx + 2]) - else: - # 01-Jan[-01] - value = info.month(tokens[idx + 2]) - - if value is not None: - ymd.append(value, 'M') - else: - raise ValueError() - - if idx + 3 < len_l and tokens[idx + 3] == sep: - # We have three members - value = info.month(tokens[idx + 4]) - - if value is not None: - ymd.append(value, 'M') - else: - ymd.append(tokens[idx + 4]) - idx += 2 - - idx += 1 - idx += 1 - - elif idx + 1 >= len_l or info.jump(tokens[idx + 1]): - if idx + 2 < len_l and info.ampm(tokens[idx + 2]) is not None: - # 12 am - hour = int(value) - res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 2])) - idx += 1 - else: - # Year, month or day - ymd.append(value) - idx += 1 - - elif info.ampm(tokens[idx + 1]) is not None and (0 <= value < 24): - # 12am - hour = int(value) - res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 1])) - idx += 1 - - elif ymd.could_be_day(value): - ymd.append(value) - - elif not fuzzy: - raise ValueError() - - return idx - - def _find_hms_idx(self, idx, tokens, info, allow_jump): - len_l = len(tokens) - - if idx+1 < len_l and info.hms(tokens[idx+1]) is not None: - # There is an "h", "m", or "s" label following this token. We take - # assign the upcoming label to the current token. - # e.g. the "12" in 12h" - hms_idx = idx + 1 - - elif (allow_jump and idx+2 < len_l and tokens[idx+1] == ' ' and - info.hms(tokens[idx+2]) is not None): - # There is a space and then an "h", "m", or "s" label. - # e.g. the "12" in "12 h" - hms_idx = idx + 2 - - elif idx > 0 and info.hms(tokens[idx-1]) is not None: - # There is a "h", "m", or "s" preceding this token. Since neither - # of the previous cases was hit, there is no label following this - # token, so we use the previous label. - # e.g. the "04" in "12h04" - hms_idx = idx-1 - - elif (1 < idx == len_l-1 and tokens[idx-1] == ' ' and - info.hms(tokens[idx-2]) is not None): - # If we are looking at the final token, we allow for a - # backward-looking check to skip over a space. - # TODO: Are we sure this is the right condition here? - hms_idx = idx - 2 - - else: - hms_idx = None - - return hms_idx - - def _assign_hms(self, res, value_repr, hms): - # See GH issue #427, fixing float rounding - value = self._to_decimal(value_repr) - - if hms == 0: - # Hour - res.hour = int(value) - if value % 1: - res.minute = int(60*(value % 1)) - - elif hms == 1: - (res.minute, res.second) = self._parse_min_sec(value) - - elif hms == 2: - (res.second, res.microsecond) = self._parsems(value_repr) - - def _could_be_tzname(self, hour, tzname, tzoffset, token): - return (hour is not None and - tzname is None and - tzoffset is None and - len(token) <= 5 and - (all(x in string.ascii_uppercase for x in token) - or token in self.info.UTCZONE)) - - def _ampm_valid(self, hour, ampm, fuzzy): - """ - For fuzzy parsing, 'a' or 'am' (both valid English words) - may erroneously trigger the AM/PM flag. Deal with that - here. - """ - val_is_ampm = True - - # If there's already an AM/PM flag, this one isn't one. 
- if fuzzy and ampm is not None: - val_is_ampm = False - - # If AM/PM is found and hour is not, raise a ValueError - if hour is None: - if fuzzy: - val_is_ampm = False - else: - raise ValueError('No hour specified with AM or PM flag.') - elif not 0 <= hour <= 12: - # If AM/PM is found, it's a 12 hour clock, so raise - # an error for invalid range - if fuzzy: - val_is_ampm = False - else: - raise ValueError('Invalid hour specified for 12-hour clock.') - - return val_is_ampm - - def _adjust_ampm(self, hour, ampm): - if hour < 12 and ampm == 1: - hour += 12 - elif hour == 12 and ampm == 0: - hour = 0 - return hour - - def _parse_min_sec(self, value): - # TODO: Every usage of this function sets res.second to the return - # value. Are there any cases where second will be returned as None and - # we *don't* want to set res.second = None? - minute = int(value) - second = None - - sec_remainder = value % 1 - if sec_remainder: - second = int(60 * sec_remainder) - return (minute, second) - - def _parse_hms(self, idx, tokens, info, hms_idx): - # TODO: Is this going to admit a lot of false-positives for when we - # just happen to have digits and "h", "m" or "s" characters in non-date - # text? I guess hex hashes won't have that problem, but there's plenty - # of random junk out there. - if hms_idx is None: - hms = None - new_idx = idx - elif hms_idx > idx: - hms = info.hms(tokens[hms_idx]) - new_idx = hms_idx - else: - # Looking backwards, increment one. - hms = info.hms(tokens[hms_idx]) + 1 - new_idx = idx - - return (new_idx, hms) - - # ------------------------------------------------------------------ - # Handling for individual tokens. These are kept as methods instead - # of functions for the sake of customizability via subclassing. - - def _parsems(self, value): - """Parse a I[.F] seconds value into (seconds, microseconds).""" - if "." not in value: - return int(value), 0 - else: - i, f = value.split(".") - return int(i), int(f.ljust(6, "0")[:6]) - - def _to_decimal(self, val): - try: - decimal_value = Decimal(val) - # See GH 662, edge case, infinite value should not be converted - # via `_to_decimal` - if not decimal_value.is_finite(): - raise ValueError("Converted decimal value is infinite or NaN") - except Exception as e: - msg = "Could not convert %s to decimal" % val - six.raise_from(ValueError(msg), e) - else: - return decimal_value - - # ------------------------------------------------------------------ - # Post-Parsing construction of datetime output. These are kept as - # methods instead of functions for the sake of customizability via - # subclassing. 
- - def _build_tzinfo(self, tzinfos, tzname, tzoffset): - if callable(tzinfos): - tzdata = tzinfos(tzname, tzoffset) - else: - tzdata = tzinfos.get(tzname) - # handle case where tzinfo is paased an options that returns None - # eg tzinfos = {'BRST' : None} - if isinstance(tzdata, datetime.tzinfo) or tzdata is None: - tzinfo = tzdata - elif isinstance(tzdata, text_type): - tzinfo = tz.tzstr(tzdata) - elif isinstance(tzdata, integer_types): - tzinfo = tz.tzoffset(tzname, tzdata) - else: - raise TypeError("Offset must be tzinfo subclass, tz string, " - "or int offset.") - return tzinfo - - def _build_tzaware(self, naive, res, tzinfos): - if (callable(tzinfos) or (tzinfos and res.tzname in tzinfos)): - tzinfo = self._build_tzinfo(tzinfos, res.tzname, res.tzoffset) - aware = naive.replace(tzinfo=tzinfo) - aware = self._assign_tzname(aware, res.tzname) - - elif res.tzname and res.tzname in time.tzname: - aware = naive.replace(tzinfo=tz.tzlocal()) - - # Handle ambiguous local datetime - aware = self._assign_tzname(aware, res.tzname) - - # This is mostly relevant for winter GMT zones parsed in the UK - if (aware.tzname() != res.tzname and - res.tzname in self.info.UTCZONE): - aware = aware.replace(tzinfo=tz.UTC) - - elif res.tzoffset == 0: - aware = naive.replace(tzinfo=tz.UTC) - - elif res.tzoffset: - aware = naive.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset)) - - elif not res.tzname and not res.tzoffset: - # i.e. no timezone information was found. - aware = naive - - elif res.tzname: - # tz-like string was parsed but we don't know what to do - # with it - warnings.warn("tzname {tzname} identified but not understood. " - "Pass `tzinfos` argument in order to correctly " - "return a timezone-aware datetime. In a future " - "version, this will raise an " - "exception.".format(tzname=res.tzname), - category=UnknownTimezoneWarning) - aware = naive - - return aware - - def _build_naive(self, res, default): - repl = {} - for attr in ("year", "month", "day", "hour", - "minute", "second", "microsecond"): - value = getattr(res, attr) - if value is not None: - repl[attr] = value - - if 'day' not in repl: - # If the default day exceeds the last day of the month, fall back - # to the end of the month. - cyear = default.year if res.year is None else res.year - cmonth = default.month if res.month is None else res.month - cday = default.day if res.day is None else res.day - - if cday > monthrange(cyear, cmonth)[1]: - repl['day'] = monthrange(cyear, cmonth)[1] - - naive = default.replace(**repl) - - if res.weekday is not None and not res.day: - naive = naive + relativedelta.relativedelta(weekday=res.weekday) - - return naive - - def _assign_tzname(self, dt, tzname): - if dt.tzname() != tzname: - new_dt = tz.enfold(dt, fold=1) - if new_dt.tzname() == tzname: - return new_dt - - return dt - - def _recombine_skipped(self, tokens, skipped_idxs): - """ - >>> tokens = ["foo", " ", "bar", " ", "19June2000", "baz"] - >>> skipped_idxs = [0, 1, 2, 5] - >>> _recombine_skipped(tokens, skipped_idxs) - ["foo bar", "baz"] - """ - skipped_tokens = [] - for i, idx in enumerate(sorted(skipped_idxs)): - if i > 0 and idx - 1 == skipped_idxs[i - 1]: - skipped_tokens[-1] = skipped_tokens[-1] + tokens[idx] - else: - skipped_tokens.append(tokens[idx]) - - return skipped_tokens - - -DEFAULTPARSER = parser() - - -def parse(timestr, parserinfo=None, **kwargs): - """ - - Parse a string in one of the supported formats, using the - ``parserinfo`` parameters. - - :param timestr: - A string containing a date/time stamp. 
- - :param parserinfo: - A :class:`parserinfo` object containing parameters for the parser. - If ``None``, the default arguments to the :class:`parserinfo` - constructor are used. - - The ``**kwargs`` parameter takes the following keyword arguments: - - :param default: - The default datetime object, if this is a datetime object and not - ``None``, elements specified in ``timestr`` replace elements in the - default object. - - :param ignoretz: - If set ``True``, time zones in parsed strings are ignored and a naive - :class:`datetime` object is returned. - - :param tzinfos: - Additional time zone names / aliases which may be present in the - string. This argument maps time zone names (and optionally offsets - from those time zones) to time zones. This parameter can be a - dictionary with timezone aliases mapping time zone names to time - zones or a function taking two parameters (``tzname`` and - ``tzoffset``) and returning a time zone. - - The timezones to which the names are mapped can be an integer - offset from UTC in seconds or a :class:`tzinfo` object. - - .. doctest:: - :options: +NORMALIZE_WHITESPACE - - >>> from dateutil.parser import parse - >>> from dateutil.tz import gettz - >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")} - >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200)) - >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, - tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago')) - - This parameter is ignored if ``ignoretz`` is set. - - :param dayfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the day (``True``) or month (``False``). If - ``yearfirst`` is set to ``True``, this distinguishes between YDM and - YMD. If set to ``None``, this value is retrieved from the current - :class:`parserinfo` object (which itself defaults to ``False``). - - :param yearfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the year. If ``True``, the first number is taken to - be the year, otherwise the last number is taken to be the year. If - this is set to ``None``, the value is retrieved from the current - :class:`parserinfo` object (which itself defaults to ``False``). - - :param fuzzy: - Whether to allow fuzzy parsing, allowing for string like "Today is - January 1, 2047 at 8:21:00AM". - - :param fuzzy_with_tokens: - If ``True``, ``fuzzy`` is automatically set to True, and the parser - will return a tuple where the first element is the parsed - :class:`datetime.datetime` datetimestamp and the second element is - a tuple containing the portions of the string which were ignored: - - .. doctest:: - - >>> from dateutil.parser import parse - >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True) - (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at ')) - - :return: - Returns a :class:`datetime.datetime` object or, if the - ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the - first element being a :class:`datetime.datetime` object, the second - a tuple containing the fuzzy tokens. - - :raises ParserError: - Raised for invalid or unknown string formats, if the provided - :class:`tzinfo` is not in a valid format, or if an invalid date would - be created. - - :raises OverflowError: - Raised if the parsed date exceeds the largest valid C integer on - your system. 
- """ - if parserinfo: - return parser(parserinfo).parse(timestr, **kwargs) - else: - return DEFAULTPARSER.parse(timestr, **kwargs) - - -class _tzparser(object): - - class _result(_resultbase): - - __slots__ = ["stdabbr", "stdoffset", "dstabbr", "dstoffset", - "start", "end"] - - class _attr(_resultbase): - __slots__ = ["month", "week", "weekday", - "yday", "jyday", "day", "time"] - - def __repr__(self): - return self._repr("") - - def __init__(self): - _resultbase.__init__(self) - self.start = self._attr() - self.end = self._attr() - - def parse(self, tzstr): - res = self._result() - l = [x for x in re.split(r'([,:.]|[a-zA-Z]+|[0-9]+)',tzstr) if x] - used_idxs = list() - try: - - len_l = len(l) - - i = 0 - while i < len_l: - # BRST+3[BRDT[+2]] - j = i - while j < len_l and not [x for x in l[j] - if x in "0123456789:,-+"]: - j += 1 - if j != i: - if not res.stdabbr: - offattr = "stdoffset" - res.stdabbr = "".join(l[i:j]) - else: - offattr = "dstoffset" - res.dstabbr = "".join(l[i:j]) - - for ii in range(j): - used_idxs.append(ii) - i = j - if (i < len_l and (l[i] in ('+', '-') or l[i][0] in - "0123456789")): - if l[i] in ('+', '-'): - # Yes, that's right. See the TZ variable - # documentation. - signal = (1, -1)[l[i] == '+'] - used_idxs.append(i) - i += 1 - else: - signal = -1 - len_li = len(l[i]) - if len_li == 4: - # -0300 - setattr(res, offattr, (int(l[i][:2]) * 3600 + - int(l[i][2:]) * 60) * signal) - elif i + 1 < len_l and l[i + 1] == ':': - # -03:00 - setattr(res, offattr, - (int(l[i]) * 3600 + - int(l[i + 2]) * 60) * signal) - used_idxs.append(i) - i += 2 - elif len_li <= 2: - # -[0]3 - setattr(res, offattr, - int(l[i][:2]) * 3600 * signal) - else: - return None - used_idxs.append(i) - i += 1 - if res.dstabbr: - break - else: - break - - - if i < len_l: - for j in range(i, len_l): - if l[j] == ';': - l[j] = ',' - - assert l[i] == ',' - - i += 1 - - if i >= len_l: - pass - elif (8 <= l.count(',') <= 9 and - not [y for x in l[i:] if x != ',' - for y in x if y not in "0123456789+-"]): - # GMT0BST,3,0,30,3600,10,0,26,7200[,3600] - for x in (res.start, res.end): - x.month = int(l[i]) - used_idxs.append(i) - i += 2 - if l[i] == '-': - value = int(l[i + 1]) * -1 - used_idxs.append(i) - i += 1 - else: - value = int(l[i]) - used_idxs.append(i) - i += 2 - if value: - x.week = value - x.weekday = (int(l[i]) - 1) % 7 - else: - x.day = int(l[i]) - used_idxs.append(i) - i += 2 - x.time = int(l[i]) - used_idxs.append(i) - i += 2 - if i < len_l: - if l[i] in ('-', '+'): - signal = (-1, 1)[l[i] == "+"] - used_idxs.append(i) - i += 1 - else: - signal = 1 - used_idxs.append(i) - res.dstoffset = (res.stdoffset + int(l[i]) * signal) - - # This was a made-up format that is not in normal use - warn(('Parsed time zone "%s"' % tzstr) + - 'is in a non-standard dateutil-specific format, which ' + - 'is now deprecated; support for parsing this format ' + - 'will be removed in future versions. 
It is recommended ' + - 'that you switch to a standard format like the GNU ' + - 'TZ variable format.', tz.DeprecatedTzFormatWarning) - elif (l.count(',') == 2 and l[i:].count('/') <= 2 and - not [y for x in l[i:] if x not in (',', '/', 'J', 'M', - '.', '-', ':') - for y in x if y not in "0123456789"]): - for x in (res.start, res.end): - if l[i] == 'J': - # non-leap year day (1 based) - used_idxs.append(i) - i += 1 - x.jyday = int(l[i]) - elif l[i] == 'M': - # month[-.]week[-.]weekday - used_idxs.append(i) - i += 1 - x.month = int(l[i]) - used_idxs.append(i) - i += 1 - assert l[i] in ('-', '.') - used_idxs.append(i) - i += 1 - x.week = int(l[i]) - if x.week == 5: - x.week = -1 - used_idxs.append(i) - i += 1 - assert l[i] in ('-', '.') - used_idxs.append(i) - i += 1 - x.weekday = (int(l[i]) - 1) % 7 - else: - # year day (zero based) - x.yday = int(l[i]) + 1 - - used_idxs.append(i) - i += 1 - - if i < len_l and l[i] == '/': - used_idxs.append(i) - i += 1 - # start time - len_li = len(l[i]) - if len_li == 4: - # -0300 - x.time = (int(l[i][:2]) * 3600 + - int(l[i][2:]) * 60) - elif i + 1 < len_l and l[i + 1] == ':': - # -03:00 - x.time = int(l[i]) * 3600 + int(l[i + 2]) * 60 - used_idxs.append(i) - i += 2 - if i + 1 < len_l and l[i + 1] == ':': - used_idxs.append(i) - i += 2 - x.time += int(l[i]) - elif len_li <= 2: - # -[0]3 - x.time = (int(l[i][:2]) * 3600) - else: - return None - used_idxs.append(i) - i += 1 - - assert i == len_l or l[i] == ',' - - i += 1 - - assert i >= len_l - - except (IndexError, ValueError, AssertionError): - return None - - unused_idxs = set(range(len_l)).difference(used_idxs) - res.any_unused_tokens = not {l[n] for n in unused_idxs}.issubset({",",":"}) - return res - - -DEFAULTTZPARSER = _tzparser() - - -def _parsetz(tzstr): - return DEFAULTTZPARSER.parse(tzstr) - - -class ParserError(ValueError): - """Exception subclass used for any failure to parse a datetime string. - - This is a subclass of :py:exc:`ValueError`, and should be raised any time - earlier versions of ``dateutil`` would have raised ``ValueError``. - - .. versionadded:: 2.8.1 - """ - def __str__(self): - try: - return self.args[0] % self.args[1:] - except (TypeError, IndexError): - return super(ParserError, self).__str__() - - def __repr__(self): - args = ", ".join("'%s'" % arg for arg in self.args) - return "%s(%s)" % (self.__class__.__name__, args) - - -class UnknownTimezoneWarning(RuntimeWarning): - """Raised when the parser finds a timezone it cannot parse into a tzinfo. - - .. 
versionadded:: 2.7.0 - """ -# vim:ts=4:sw=4:et diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/radio.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/radio.py deleted file mode 100644 index a8846a84a621c298a41922c0457dd38dba7a3b21..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/radio.py +++ /dev/null @@ -1,193 +0,0 @@ -"""gr.Radio() component.""" - -from __future__ import annotations - -from typing import Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio.components.base import FormComponent, IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events import Changeable, EventListenerMethod, Inputable, Selectable -from gradio.interpretation import NeighborInterpretable - -set_documentation_group("component") - - -@document() -class Radio( - FormComponent, - Selectable, - Changeable, - Inputable, - IOComponent, - StringSerializable, - NeighborInterpretable, -): - """ - Creates a set of radio buttons of which only one can be selected. - Preprocessing: passes the value of the selected radio button as a {str} or its index as an {int} into the function, depending on `type`. - Postprocessing: expects a {str} corresponding to the value of the radio button to be selected. - Examples-format: a {str} representing the radio option to select. - - Demos: sentence_builder, titanic_survival, blocks_essay - """ - - def __init__( - self, - choices: list[str] | None = None, - *, - value: str | Callable | None = None, - type: str = "value", - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - choices: list of options to select from. - value: the button selected by default. If None, no button is selected by default. If callable, the function will be called whenever the app loads to set the initial value of the component. - type: Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, choices in this radio group will be selectable; if False, selection will be disabled. 
If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - self.choices = choices or [] - valid_types = ["value", "index"] - if type not in valid_types: - raise ValueError( - f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}" - ) - self.type = type - self.select: EventListenerMethod - """ - Event listener for when the user selects Radio option. - Uses event data gradio.SelectData to carry `value` referring to label of selected option, and `index` to refer to index. - See EventData documentation on how to use this event data. - """ - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - NeighborInterpretable.__init__(self) - - def get_config(self): - return { - "choices": self.choices, - "value": self.value, - **IOComponent.get_config(self), - } - - def example_inputs(self) -> dict[str, Any]: - return { - "raw": self.choices[0] if self.choices else None, - "serialized": self.choices[0] if self.choices else None, - } - - @staticmethod - def update( - value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - choices: list[str] | None = None, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool | None = None, - visible: bool | None = None, - ): - return { - "choices": choices, - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "interactive": interactive, - "visible": visible, - "value": value, - "__type__": "update", - } - - def preprocess(self, x: str | None) -> str | int | None: - """ - Parameters: - x: selected choice - Returns: - selected choice as string or index within choice list - """ - if self.type == "value": - return x - elif self.type == "index": - if x is None: - return None - else: - return self.choices.index(x) - else: - raise ValueError( - f"Unknown type: {self.type}. Please choose from: 'value', 'index'." - ) - - def get_interpretation_neighbors(self, x): - choices = list(self.choices) - choices.remove(x) - return choices, {} - - def get_interpretation_scores( - self, x, neighbors, scores: list[float | None], **kwargs - ) -> list: - """ - Returns: - Each value represents the interpretation score corresponding to each choice. - """ - scores.insert(self.choices.index(x), None) - return scores - - def style( - self, - *, - item_container: bool | None = None, - container: bool | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. 
- """ - warn_style_method_deprecation() - if item_container is not None: - warn_deprecation("The `item_container` parameter is deprecated.") - if container is not None: - self.container = container - return self diff --git a/spaces/Defalt-404/Bittensor_Explore/README.md b/spaces/Defalt-404/Bittensor_Explore/README.md deleted file mode 100644 index c7f88e97dc1d69a22b70836ac3ec6abc9c610b68..0000000000000000000000000000000000000000 --- a/spaces/Defalt-404/Bittensor_Explore/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bittensor Explore -emoji: ⚡ -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Detomo/ai-comic-generation/src/app/engine/forbidden.ts b/spaces/Detomo/ai-comic-generation/src/app/engine/forbidden.ts deleted file mode 100644 index 512b65e22b18f3bd39f6aec58198576b2ffc67f5..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/app/engine/forbidden.ts +++ /dev/null @@ -1,6 +0,0 @@ - -// the NSFW has to contain bad words, but doing so might get the code flagged -// or attract unwanted attention, so we hash them -export const forbidden = [ - // TODO implement this -] \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/model.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/model.py deleted file mode 100644 index 9d5559203f4f3843fc814b090780ffa129a6fdf0..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/model.py +++ /dev/null @@ -1,674 +0,0 @@ -import math -import random - -import torch -from torch import nn -from torch.nn import functional as F - -from models.StyleCLIP.models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, 
in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * 
self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * 
channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = 
(len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - diff --git a/spaces/ECCV2022/bytetrack/tutorials/motr/README.md b/spaces/ECCV2022/bytetrack/tutorials/motr/README.md deleted file mode 100644 index 3fcc6ca471912eba104c258cc8a152f14673d813..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/motr/README.md +++ /dev/null @@ -1,100 +0,0 @@ -# MOTR - -Step1. 
- -git clone https://github.com/megvii-model/MOTR.git and install - -replace https://github.com/megvii-model/MOTR/blob/main/datasets/joint.py - -replace https://github.com/megvii-model/MOTR/blob/main/datasets/transforms.py - - -train - -``` -python3 -m torch.distributed.launch --nproc_per_node=8 \ - --use_env main.py \ - --meta_arch motr \ - --dataset_file e2e_joint \ - --epoch 50 \ - --with_box_refine \ - --lr_drop 40 \ - --lr 2e-4 \ - --lr_backbone 2e-5 \ - --pretrained coco_model_final.pth \ - --output_dir exps/e2e_motr_r50_mot17trainhalf \ - --batch_size 1 \ - --sample_mode 'random_interval' \ - --sample_interval 10 \ - --sampler_steps 10 20 30 \ - --sampler_lengths 2 3 4 5 \ - --update_query_pos \ - --merger_dropout 0 \ - --dropout 0 \ - --random_drop 0.1 \ - --fp_ratio 0.3 \ - --query_interaction_layer 'QIM' \ - --extra_track_attn \ - --mot_path . - --data_txt_path_train ./datasets/data_path/mot17.half \ - --data_txt_path_val ./datasets/data_path/mot17.val \ -``` -mot17.half and mot17.val are from https://github.com/ifzhang/FairMOT/tree/master/src/data - -You can also download the MOTR model trained by us: [google](https://drive.google.com/file/d/1pzGi53VooppQqhKf3TSxLK99LERsVyTw/view?usp=sharing), [baidu(code:t87h)](https://pan.baidu.com/s/1OrcR3L9Bf2xXIo8RQl3zyA) - - -Step2. - -replace https://github.com/megvii-model/MOTR/blob/main/util/evaluation.py - -replace https://github.com/megvii-model/MOTR/blob/main/eval.py - -replace https://github.com/megvii-model/MOTR/blob/main/models/motr.py - -add byte_tracker.py to https://github.com/megvii-model/MOTR - -add mot_online to https://github.com/megvii-model/MOTR - - -Step3. - - -val - -``` -python3 eval.py \ - --meta_arch motr \ - --dataset_file e2e_joint \ - --epoch 200 \ - --with_box_refine \ - --lr_drop 100 \ - --lr 2e-4 \ - --lr_backbone 2e-5 \ - --pretrained exps/e2e_motr_r50_mot17val/motr_final.pth \ - --output_dir exps/e2e_motr_r50_mot17val \ - --batch_size 1 \ - --sample_mode 'random_interval' \ - --sample_interval 10 \ - --sampler_steps 50 90 120 \ - --sampler_lengths 2 3 4 5 \ - --update_query_pos \ - --merger_dropout 0 \ - --dropout 0 \ - --random_drop 0.1 \ - --fp_ratio 0.3 \ - --query_interaction_layer 'QIM' \ - --extra_track_attn \ - --mot_path ./MOT17/images/train - --data_txt_path_train ./datasets/data_path/mot17.half \ - --data_txt_path_val ./datasets/data_path/mot17.val \ - --resume model_final.pth \ -``` - - - -# MOTR det - -in Step2, replace https://github.com/megvii-model/MOTR/blob/main/models/motr.py by motr_det.py - -others are the same as MOTR diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_paired_dataset.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_paired_dataset.py deleted file mode 100644 index 386c8d72496245dae8df033c2ebbd76b41ff45f1..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_paired_dataset.py +++ /dev/null @@ -1,108 +0,0 @@ -import os -from basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb -from basicsr.data.transforms import augment, paired_random_crop -from basicsr.utils import FileClient, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data -from torchvision.transforms.functional import normalize - - -@DATASET_REGISTRY.register() -class RealESRGANPairedDataset(data.Dataset): - """Paired image dataset for image restoration. - - Read LQ (Low Quality, e.g. 
LR (Low Resolution), blurry, noisy, etc) and GT image pairs. - - There are three modes: - 1. 'lmdb': Use lmdb files. - If opt['io_backend'] == lmdb. - 2. 'meta_info': Use meta information file to generate paths. - If opt['io_backend'] != lmdb and opt['meta_info'] is not None. - 3. 'folder': Scan folders to generate paths. - The rest. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - filename_tmpl (str): Template for each filename. Note that the template excludes the file extension. - Default: '{}'. - gt_size (int): Cropped patched size for gt patches. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h - and w for implementation). - - scale (bool): Scale, which will be added automatically. - phase (str): 'train' or 'val'. - """ - - def __init__(self, opt): - super(RealESRGANPairedDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - # mean and std for normalizing the input images - self.mean = opt['mean'] if 'mean' in opt else None - self.std = opt['std'] if 'std' in opt else None - - self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq'] - self.filename_tmpl = opt['filename_tmpl'] if 'filename_tmpl' in opt else '{}' - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder] - self.io_backend_opt['client_keys'] = ['lq', 'gt'] - self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt']) - elif 'meta_info' in self.opt and self.opt['meta_info'] is not None: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip() for line in fin] - self.paths = [] - for path in paths: - gt_path, lq_path = path.split(', ') - gt_path = os.path.join(self.gt_folder, gt_path) - lq_path = os.path.join(self.lq_folder, lq_path) - self.paths.append(dict([('gt_path', gt_path), ('lq_path', lq_path)])) - else: - # disk backend - # it will scan the whole folder to get meta info - # it will be time-consuming for folders with too many files. It is recommended using an extra meta txt file - self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl) - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - scale = self.opt['scale'] - - # Load gt and lq images. Dimension order: HWC; channel order: BGR; - # image range: [0, 1], float32. 
- gt_path = self.paths[index]['gt_path'] - img_bytes = self.file_client.get(gt_path, 'gt') - img_gt = imfrombytes(img_bytes, float32=True) - lq_path = self.paths[index]['lq_path'] - img_bytes = self.file_client.get(lq_path, 'lq') - img_lq = imfrombytes(img_bytes, float32=True) - - # augmentation for training - if self.opt['phase'] == 'train': - gt_size = self.opt['gt_size'] - # random crop - img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path) - # flip, rotation - img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_hflip'], self.opt['use_rot']) - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True) - # normalize - if self.mean is not None or self.std is not None: - normalize(img_lq, self.mean, self.std, inplace=True) - normalize(img_gt, self.mean, self.std, inplace=True) - - return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path} - - def __len__(self): - return len(self.paths) diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/util.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/util.py deleted file mode 100644 index 9ee16385d8b1342a2d60a5f1aa5cadcfbe934bd8..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/util.py +++ /dev/null @@ -1,130 +0,0 @@ -import torch -import torch.nn as nn - - -def count_params(model): - total_params = sum(p.numel() for p in model.parameters()) - return total_params - - -class ActNorm(nn.Module): - def __init__(self, num_features, logdet=False, affine=True, - allow_reverse_init=False): - assert affine - super().__init__() - self.logdet = logdet - self.loc = nn.Parameter(torch.zeros(1, num_features, 1, 1)) - self.scale = nn.Parameter(torch.ones(1, num_features, 1, 1)) - self.allow_reverse_init = allow_reverse_init - - self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8)) - - def initialize(self, input): - with torch.no_grad(): - flatten = input.permute(1, 0, 2, 3).contiguous().view(input.shape[1], -1) - mean = ( - flatten.mean(1) - .unsqueeze(1) - .unsqueeze(2) - .unsqueeze(3) - .permute(1, 0, 2, 3) - ) - std = ( - flatten.std(1) - .unsqueeze(1) - .unsqueeze(2) - .unsqueeze(3) - .permute(1, 0, 2, 3) - ) - - self.loc.data.copy_(-mean) - self.scale.data.copy_(1 / (std + 1e-6)) - - def forward(self, input, reverse=False): - if reverse: - return self.reverse(input) - if len(input.shape) == 2: - input = input[:,:,None,None] - squeeze = True - else: - squeeze = False - - _, _, height, width = input.shape - - if self.training and self.initialized.item() == 0: - self.initialize(input) - self.initialized.fill_(1) - - h = self.scale * (input + self.loc) - - if squeeze: - h = h.squeeze(-1).squeeze(-1) - - if self.logdet: - log_abs = torch.log(torch.abs(self.scale)) - logdet = height*width*torch.sum(log_abs) - logdet = logdet * torch.ones(input.shape[0]).to(input) - return h, logdet - - return h - - def reverse(self, output): - if self.training and self.initialized.item() == 0: - if not self.allow_reverse_init: - raise RuntimeError( - "Initializing ActNorm in reverse direction is " - "disabled by default. Use allow_reverse_init=True to enable." 
- ) - else: - self.initialize(output) - self.initialized.fill_(1) - - if len(output.shape) == 2: - output = output[:,:,None,None] - squeeze = True - else: - squeeze = False - - h = output / self.scale - self.loc - - if squeeze: - h = h.squeeze(-1).squeeze(-1) - return h - - -class AbstractEncoder(nn.Module): - def __init__(self): - super().__init__() - - def encode(self, *args, **kwargs): - raise NotImplementedError - - -class Labelator(AbstractEncoder): - """Net2Net Interface for Class-Conditional Model""" - def __init__(self, n_classes, quantize_interface=True): - super().__init__() - self.n_classes = n_classes - self.quantize_interface = quantize_interface - - def encode(self, c): - c = c[:,None] - if self.quantize_interface: - return c, None, [None, None, c.long()] - return c - - -class SOSProvider(AbstractEncoder): - # for unconditional training - def __init__(self, sos_token, quantize_interface=True): - super().__init__() - self.sos_token = sos_token - self.quantize_interface = quantize_interface - - def encode(self, x): - # get batch size from data and replicate sos_token - c = torch.ones(x.shape[0], 1)*self.sos_token - c = c.long().to(x.device) - if self.quantize_interface: - return c, None, [None, None, c] - return c diff --git a/spaces/EnigmaOfTheWorld/Power_AI_Point/README.md b/spaces/EnigmaOfTheWorld/Power_AI_Point/README.md deleted file mode 100644 index 2ec7437e43b2c4fc839835c1d7c5892ddbe5ef8d..0000000000000000000000000000000000000000 --- a/spaces/EnigmaOfTheWorld/Power_AI_Point/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GenAI Point -emoji: 😻 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EsoCode/text-generation-webui/docs/README.md b/spaces/EsoCode/text-generation-webui/docs/README.md deleted file mode 100644 index 06b73b8468ab263a230cb44ba45a6c95f00b2ada..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/docs/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# text-generation-webui documentation - -## Table of contents - -* [Audio Notification](Audio-Notification.md) -* [Chat mode](Chat-mode.md) -* [DeepSpeed](DeepSpeed.md) -* [Docker](Docker.md) -* [ExLlama](ExLlama.md) -* [Extensions](Extensions.md) -* [FlexGen](FlexGen.md) -* [Generation parameters](Generation-parameters.md) -* [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md) -* [llama.cpp models](llama.cpp-models.md) -* [LLaMA model](LLaMA-model.md) -* [LoRA](LoRA.md) -* [Low VRAM guide](Low-VRAM-guide.md) -* [RWKV model](RWKV-model.md) -* [Spell book](Spell-book.md) -* [System requirements](System-requirements.md) -* [Training LoRAs](Training-LoRAs.md) -* [Windows installation guide](Windows-installation-guide.md) -* [WSL installation guide](WSL-installation-guide.md) diff --git a/spaces/EsoCode/text-generation-webui/extensions/multimodal/multimodal_embedder.py b/spaces/EsoCode/text-generation-webui/extensions/multimodal/multimodal_embedder.py deleted file mode 100644 index 626077cb80987d66af90f390e31aa2f2def76fec..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/extensions/multimodal/multimodal_embedder.py +++ /dev/null @@ -1,178 +0,0 @@ -import base64 -import re -from dataclasses import dataclass -from io import BytesIO -from typing import Any, List, Optional - -import torch -from PIL import Image - -from extensions.multimodal.pipeline_loader import load_pipeline 
-from modules import shared -from modules.logging_colors import logger -from modules.text_generation import encode, get_max_prompt_length - - -@dataclass -class PromptPart: - text: str - image: Optional[Image.Image] = None - is_image: bool = False - input_ids: Optional[torch.Tensor] = None - embedding: Optional[torch.Tensor] = None - - -class MultimodalEmbedder: - def __init__(self, params: dict): - pipeline, source = load_pipeline(params) - self.pipeline = pipeline - logger.info(f'Multimodal: loaded pipeline {self.pipeline.name()} from pipelines/{source} ({self.pipeline.__class__.__name__})') - - def _split_prompt(self, prompt: str, load_images: bool = False) -> List[PromptPart]: - """Splits a prompt into a list of `PromptParts` to separate image data from text. - It will also append `image_start` and `image_end` before and after the image, and optionally parse and load the images, - if `load_images` is `True`. - """ - parts: List[PromptPart] = [] - curr = 0 - while True: - match = re.search(r'', prompt[curr:]) - if match is None: - # no more image tokens, append the rest of the prompt - if curr > 0: - # add image end token after last image - parts.append(PromptPart(text=self.pipeline.image_end() + prompt[curr:])) - else: - parts.append(PromptPart(text=prompt)) - break - # found an image, append image start token to the text - if match.start() > 0: - parts.append(PromptPart(text=prompt[curr:curr + match.start()] + self.pipeline.image_start())) - else: - parts.append(PromptPart(text=self.pipeline.image_start())) - # append the image - parts.append(PromptPart( - text=match.group(0), - image=Image.open(BytesIO(base64.b64decode(match.group(1)))) if load_images else None, - is_image=True - )) - curr += match.end() - return parts - - def _len_in_tokens_prompt_parts(self, parts: List[PromptPart]) -> int: - """Total length in tokens of all `parts`""" - tokens = 0 - for part in parts: - if part.is_image: - tokens += self.pipeline.num_image_embeds() - elif part.input_ids is not None: - tokens += len(part.input_ids) - else: - tokens += len(encode(part.text)[0]) - return tokens - - def len_in_tokens(self, prompt: str) -> int: - """Total length in tokens for a given text `prompt`""" - parts = self._split_prompt(prompt, False) - return self._len_in_tokens_prompt_parts(parts) - - def _encode_single_text(self, part: PromptPart, add_bos_token: bool) -> PromptPart: - """Encode a single prompt `part` to `input_ids`. Returns a `PromptPart`""" - if part.is_image: - placeholders = torch.ones((self.pipeline.num_image_embeds())) * self.pipeline.placeholder_token_id() - part.input_ids = placeholders.to(shared.model.device, dtype=torch.int64) - else: - part.input_ids = encode(part.text, add_bos_token=add_bos_token)[0].to(shared.model.device, dtype=torch.int64) - return part - - @staticmethod - def _num_images(parts: List[PromptPart]) -> int: - count = 0 - for part in parts: - if part.is_image: - count += 1 - return count - - def _encode_text(self, state, parts: List[PromptPart]) -> List[PromptPart]: - """Encode text to token_ids, also truncate the prompt, if necessary. - - The chat/instruct mode should make prompts that fit in get_max_prompt_length, but if max_new_tokens are set - such that the context + min_rows don't fit, we can get a prompt which is too long. 
- We can't truncate image embeddings, as it leads to broken generation, so remove the images instead and warn the user - """ - encoded: List[PromptPart] = [] - for i, part in enumerate(parts): - encoded.append(self._encode_single_text(part, i == 0 and state['add_bos_token'])) - - # truncation: - max_len = get_max_prompt_length(state) - removed_images = 0 - - # 1. remove entire text/image blocks - while self._len_in_tokens_prompt_parts(encoded[1:]) > max_len: - if encoded[0].is_image: - removed_images += 1 - encoded = encoded[1:] - - # 2. check if the last prompt part doesn't need to get truncated - if self._len_in_tokens_prompt_parts(encoded) > max_len: - if encoded[0].is_image: - # don't truncate image embeddings, just remove the image, otherwise generation will be broken - removed_images += 1 - encoded = encoded[1:] - elif len(encoded) > 1 and encoded[0].text.endswith(self.pipeline.image_start()): - # see if we can keep image_start token - len_image_start = len(encode(self.pipeline.image_start(), add_bos_token=state['add_bos_token'])[0]) - if self._len_in_tokens_prompt_parts(encoded[1:]) + len_image_start > max_len: - # we can't -> remove this text, and the image - encoded = encoded[2:] - removed_images += 1 - else: - # we can -> just truncate the text - trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len - encoded[0].input_ids = encoded[0].input_ids[trunc_len:] - elif len(encoded) > 0: - # only one text left, truncate it normally - trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len - encoded[0].input_ids = encoded[0].input_ids[trunc_len:] - - # notify user if we truncated an image - if removed_images > 0: - logger.warning(f"Multimodal: removed {removed_images} image(s) from prompt. Try decreasing max_new_tokens if generation is broken") - - return encoded - - def _embed(self, parts: List[PromptPart]) -> List[PromptPart]: - # batch images - image_indicies = [i for i, part in enumerate(parts) if part.is_image] - embedded = self.pipeline.embed_images([parts[i].image for i in image_indicies]) - for i, embeds in zip(image_indicies, embedded): - parts[i].embedding = embeds - # embed text - for (i, part) in enumerate(parts): - if not part.is_image: - parts[i].embedding = self.pipeline.embed_tokens(part.input_ids) - return parts - - def _remove_old_images(self, parts: List[PromptPart], params: dict) -> List[PromptPart]: - if params['add_all_images_to_prompt']: - return parts - already_added = False - for i, part in reversed(list(enumerate(parts))): - if part.is_image: - if already_added: - parts[i].embedding = self.pipeline.placeholder_embeddings() - else: - already_added = True - return parts - - def forward(self, prompt: str, state: Any, params: dict): - prompt_parts = self._split_prompt(prompt, True) - prompt_parts = self._encode_text(state, prompt_parts) - prompt_parts = self._embed(prompt_parts) - prompt_parts = self._remove_old_images(prompt_parts, params) - embeds = tuple(part.embedding for part in prompt_parts) - ids = tuple(part.input_ids for part in prompt_parts) - input_embeds = torch.cat(embeds, dim=0) - input_ids = torch.cat(ids, dim=0) - return prompt, input_ids, input_embeds, self._num_images(prompt_parts) diff --git a/spaces/Gabesantos1007/Dall-e/app.py b/spaces/Gabesantos1007/Dall-e/app.py deleted file mode 100644 index d64025b1a44ce9d83ca9cb3c9840aaa9e7c0eebb..0000000000000000000000000000000000000000 --- a/spaces/Gabesantos1007/Dall-e/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import base64 -import streamlit as st -import openai -import os - -# 
openai.api_key = "" -openai.api_key = os.environ.get("OPENAI_API_KEY") - -st.set_page_config( - page_title="DALL·E Gerador de Imagens 🖼️", - page_icon="🎨", - layout="wide", -) -# Custom CSS styles -st.markdown( - """ - - """, - unsafe_allow_html=True -) - -st.title("DALL·E Gerador de Imagens 🖼️") - -# Prompt input -prompt = st.text_area("Entre o prompt:👇", height=5) - -# Size selection -size_options = ["256x256", "512x512", "1024x1024"] -selected_size = st.selectbox("Selecione o tamanho da imagem:", size_options) -# href = f'Download' -# st.markdown(href, unsafe_allow_html=True) - - -if st.button("Veja a mágica 🪄"): - # Generate image - try: - response = openai.Image.create( - prompt=prompt, - n=1, - size=selected_size, - response_format="b64_json", - ) - - # Display image - - if response["data"]: - image_data = base64.b64decode(response["data"][0]["b64_json"]) - st.image(image_data) - - # Download button - b64_image = base64.b64encode(image_data).decode() - href = f'Download' - st.markdown(href, unsafe_allow_html=True) - else: - st.warning("No image generated.") - except Exception as e: - st.error(e) - print(e) \ No newline at end of file diff --git a/spaces/Gauri54damle/sdxl-lora-multi-object/app.py b/spaces/Gauri54damle/sdxl-lora-multi-object/app.py deleted file mode 100644 index 6331091fc36baca7acbd56f36aaee7469deb138c..0000000000000000000000000000000000000000 --- a/spaces/Gauri54damle/sdxl-lora-multi-object/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from email import generator -from diffusers import DiffusionPipeline - -import gradio as gr -import torch -from PIL import Image, ImageDraw, ImageFont -## VAE - Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. -from diffusers import AutoencoderKL - - - -model = "stabilityai/stable-diffusion-xl-base-1.0" -finetuningLayer = "Gauri54damle/sdxl-lora-McDBigMac-meal-model" - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -torch_dtype = torch.float16 if device.type == 'cuda' else torch.float32 - - - -import os -HF_API_TOKEN = os.getenv("HF_API_TOKEN") - -from huggingface_hub import login -login(token=HF_API_TOKEN) - - -vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch_dtype) -pipe = DiffusionPipeline.from_pretrained( - model, - vae=vae, - torch_dtype=torch_dtype, - use_safetensors=True -) -pipe.load_lora_weights(finetuningLayer) - -pipe = pipe.to(device) - - - - -def create_error_image(message): - # Create a blank image with white background - width, height = 512, 512 - image = Image.new('RGB', (width, height), 'white') - draw = ImageDraw.Draw(image) - - # Load a truetype or opentype font file - font = ImageFont.load_default() - - # Position and message - - draw.text((127,251), message, font=font, fill="black") - - return image - -def inference(model,finetuningLayer, prompt, guidance, steps, seed): - - - - if not prompt: - return create_error_image("Sorry, add your text prompt and try again!!") - else: - generator = torch.Generator(device).manual_seed(seed) - image = pipe( - prompt, - num_inference_steps=int(steps), - guidance_scale=guidance, - generator=generator).images[0] - - return image - - -css = """ - -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - """ -
    - Finetuned Diffusion
    - """ - ) - with gr.Row(): - - with gr.Column(): - - model = gr.Dropdown(label="Base Model", choices=["stabilityai/stable-diffusion-xl-base-1.0"], default="stabilityai/stable-diffusion-xl-base-1.0") - finetuningLayer= gr.Dropdown(label="Finetuning Layer", choices=["Gauri54damle/sdxl-lora-multi-object"], default="Gauri54damle/sdxl-lora-multi-object") - - prompt = gr.Textbox(label="Prompt", placeholder="photo of burger called McDBigMac placed on serving tray with fries called McDFries- it is unique identifier need to be used to identify burger") - - with gr.Accordion("Advanced options", open=True): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=50, maximum=100, minimum=2) - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - run = gr.Button(value="Run") - gr.Markdown(f"Running on: {device}") - with gr.Column(): - image_out = gr.Image() - - ## Add prompt and press enter to run - ##prompt.submit(inference, inputs=[model, finetuningLayer,prompt, guidance, steps, seed], outputs=image_out) - - ## Click run button to run - run.click(inference, inputs=[model, finetuningLayer, prompt, guidance, steps, seed], outputs=image_out) - - - -demo.queue() -demo.launch() \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/misc/run_figure2_blender.sh b/spaces/Gen-Sim/Gen-Sim/misc/run_figure2_blender.sh deleted file mode 100644 index d42e7d34947c37ea36b0f49869fda7645920be97..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/misc/run_figure2_blender.sh +++ /dev/null @@ -1,10 +0,0 @@ -python cliport/demos.py n=3 task=build-bridge mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=block_on_cylinder_on_pallet mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=build-two-circles mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=Four_corner_pyramid_challenge mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=align_cylinders_in_zones mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=build_car mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=construct_corner_blocks mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=color_ordered_insertion mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=align_pair_colored_blocks_along_line mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; -python cliport/demos.py n=3 task=palletizing_boxes mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ; diff --git a/spaces/Gradio-Blocks/StyleGAN-Human/model.py b/spaces/Gradio-Blocks/StyleGAN-Human/model.py deleted file mode 100644 index 
ae84a84f1827a190309d8cd5d57a84c408fb69ad..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/StyleGAN-Human/model.py +++ /dev/null @@ -1,80 +0,0 @@ -from __future__ import annotations - -import pathlib -import pickle -import sys - -import numpy as np -import torch -import torch.nn as nn -from huggingface_hub import hf_hub_download - -app_dir = pathlib.Path(__file__).parent -submodule_dir = app_dir / 'StyleGAN-Human' -sys.path.insert(0, submodule_dir.as_posix()) - - -class Model: - def __init__(self): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.model = self.load_model('stylegan_human_v2_1024.pkl') - - def load_model(self, file_name: str) -> nn.Module: - path = hf_hub_download('public-data/StyleGAN-Human', - f'models/{file_name}') - with open(path, 'rb') as f: - model = pickle.load(f)['G_ema'] - model.eval() - model.to(self.device) - with torch.inference_mode(): - z = torch.zeros((1, model.z_dim)).to(self.device) - label = torch.zeros([1, model.c_dim], device=self.device) - model(z, label, force_fp32=True) - return model - - def generate_z(self, z_dim: int, seed: int) -> torch.Tensor: - return torch.from_numpy(np.random.RandomState(seed).randn( - 1, z_dim)).to(self.device).float() - - @torch.inference_mode() - def generate_single_image(self, seed: int, - truncation_psi: float) -> np.ndarray: - seed = int(np.clip(seed, 0, np.iinfo(np.uint32).max)) - - z = self.generate_z(self.model.z_dim, seed) - label = torch.zeros([1, self.model.c_dim], device=self.device) - - out = self.model(z, - label, - truncation_psi=truncation_psi, - force_fp32=True) - out = (out.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to( - torch.uint8) - return out[0].cpu().numpy() - - @torch.inference_mode() - def generate_interpolated_images( - self, seed0: int, psi0: float, seed1: int, psi1: float, - num_intermediate: int) -> list[np.ndarray]: - seed0 = int(np.clip(seed0, 0, np.iinfo(np.uint32).max)) - seed1 = int(np.clip(seed1, 0, np.iinfo(np.uint32).max)) - - z0 = self.generate_z(self.model.z_dim, seed0) - z1 = self.generate_z(self.model.z_dim, seed1) - vec = z1 - z0 - dvec = vec / (num_intermediate + 1) - zs = [z0 + dvec * i for i in range(num_intermediate + 2)] - dpsi = (psi1 - psi0) / (num_intermediate + 1) - psis = [psi0 + dpsi * i for i in range(num_intermediate + 2)] - - label = torch.zeros([1, self.model.c_dim], device=self.device) - - res = [] - for z, psi in zip(zs, psis): - out = self.model(z, label, truncation_psi=psi, force_fp32=True) - out = (out.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to( - torch.uint8) - out = out[0].cpu().numpy() - res.append(out) - return res diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py deleted file mode 100644 index 0308a567c147413688c9da679d06f93b0e154d88..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict( - norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/random_sampler.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/random_sampler.py deleted file mode 100644 index 
f34b006e8bb0b55c74aa1c3b792f3664ada93162..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/random_sampler.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch - -from ..builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class RandomSampler(BaseSampler): - """Random sampler. - - Args: - num (int): Number of samples - pos_fraction (float): Fraction of positive samples - neg_pos_up (int, optional): Upper bound number of negative and - positive samples. Defaults to -1. - add_gt_as_proposals (bool, optional): Whether to add ground truth - boxes as proposals. Defaults to True. - """ - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - from mmdet.core.bbox import demodata - super(RandomSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.rng = demodata.ensure_rng(kwargs.get('rng', None)) - - def random_choice(self, gallery, num): - """Random select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - perm = torch.randperm(gallery.numel(), device=gallery.device)[:num] - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Randomly sample some negative samples.""" - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - return self.random_choice(neg_inds, num_expected) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/fovea.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/fovea.py deleted file mode 100644 index 22a578efffbd108db644d907bae95c7c8df31f2e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/fovea.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FOVEA(SingleStageDetector): - """Implementation of `FoveaBox `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FOVEA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pascal_context.py deleted file mode 100644 index 
541a63c66a13fb16fd52921e755715ad8d078fdd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pascal_context.py +++ /dev/null @@ -1,103 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalContextDataset(CustomDataset): - """PascalContext dataset. - - In segmentation map annotation for PascalContext, 0 stands for background, - which is included in 60 categories. ``reduce_zero_label`` is fixed to - False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is - fixed to '.png'. - - Args: - split (str): Split txt file for PascalContext. - """ - - CLASSES = ('background', 'aeroplane', 'bag', 'bed', 'bedclothes', 'bench', - 'bicycle', 'bird', 'boat', 'book', 'bottle', 'building', 'bus', - 'cabinet', 'car', 'cat', 'ceiling', 'chair', 'cloth', - 'computer', 'cow', 'cup', 'curtain', 'dog', 'door', 'fence', - 'floor', 'flower', 'food', 'grass', 'ground', 'horse', - 'keyboard', 'light', 'motorbike', 'mountain', 'mouse', 'person', - 'plate', 'platform', 'pottedplant', 'road', 'rock', 'sheep', - 'shelves', 'sidewalk', 'sign', 'sky', 'snow', 'sofa', 'table', - 'track', 'train', 'tree', 'truck', 'tvmonitor', 'wall', 'water', - 'window', 'wood') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255]] - - def __init__(self, split, **kwargs): - super(PascalContextDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - split=split, - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) and self.split is not None - - -@DATASETS.register_module() -class PascalContextDataset59(CustomDataset): - """PascalContext dataset. - - In segmentation map annotation for PascalContext, 0 stands for background, - which is included in 60 categories. ``reduce_zero_label`` is fixed to - False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is - fixed to '.png'. - - Args: - split (str): Split txt file for PascalContext. 
- """ - - CLASSES = ('aeroplane', 'bag', 'bed', 'bedclothes', 'bench', 'bicycle', - 'bird', 'boat', 'book', 'bottle', 'building', 'bus', 'cabinet', - 'car', 'cat', 'ceiling', 'chair', 'cloth', 'computer', 'cow', - 'cup', 'curtain', 'dog', 'door', 'fence', 'floor', 'flower', - 'food', 'grass', 'ground', 'horse', 'keyboard', 'light', - 'motorbike', 'mountain', 'mouse', 'person', 'plate', 'platform', - 'pottedplant', 'road', 'rock', 'sheep', 'shelves', 'sidewalk', - 'sign', 'sky', 'snow', 'sofa', 'table', 'track', 'train', - 'tree', 'truck', 'tvmonitor', 'wall', 'water', 'window', 'wood') - - PALETTE = [[180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], - [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], - [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], - [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], - [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], - [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], - [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], - [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], - [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], - [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], - [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], - [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], - [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], - [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], - [0, 235, 255], [0, 173, 255], [31, 0, 255]] - - def __init__(self, split, **kwargs): - super(PascalContextDataset59, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - split=split, - reduce_zero_label=True, - **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio.py deleted file mode 100644 index 2048df6f175d7303bcf5c7b931922fd297908ead..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Audio IO methods are defined in this module (info, read, write), -We rely on av library for faster read when possible, otherwise on torchaudio. 
-""" - -from dataclasses import dataclass -from pathlib import Path -import logging -import typing as tp - -import numpy as np -import soundfile -import torch -from torch.nn import functional as F -import torchaudio as ta - -import av - -from .audio_utils import f32_pcm, i16_pcm, normalize_audio - - -_av_initialized = False - - -def _init_av(): - global _av_initialized - if _av_initialized: - return - logger = logging.getLogger('libav.mp3') - logger.setLevel(logging.ERROR) - _av_initialized = True - - -@dataclass(frozen=True) -class AudioFileInfo: - sample_rate: int - duration: float - channels: int - - -def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sample_rate = stream.codec_context.sample_rate - duration = float(stream.duration * stream.time_base) - channels = stream.channels - return AudioFileInfo(sample_rate, duration, channels) - - -def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - info = soundfile.info(filepath) - return AudioFileInfo(info.samplerate, info.duration, info.channels) - - -def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - # torchaudio no longer returns useful duration informations for some formats like mp3s. - filepath = Path(filepath) - if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info - # ffmpeg has some weird issue with flac. - return _soundfile_info(filepath) - else: - return _av_info(filepath) - - -def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]: - """FFMPEG-based audio file reading using PyAV bindings. - Soundfile cannot read mp3 and av_read is more efficient than torchaudio. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate - """ - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sr = stream.codec_context.sample_rate - num_frames = int(sr * duration) if duration >= 0 else -1 - frame_offset = int(sr * seek_time) - # we need a small negative offset otherwise we get some edge artifact - # from the mp3 decoder. - af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream) - frames = [] - length = 0 - for frame in af.decode(streams=stream.index): - current_offset = int(frame.rate * frame.pts * frame.time_base) - strip = max(0, frame_offset - current_offset) - buf = torch.from_numpy(frame.to_ndarray()) - if buf.shape[0] != stream.channels: - buf = buf.view(-1, stream.channels).t() - buf = buf[:, strip:] - frames.append(buf) - length += buf.shape[1] - if num_frames > 0 and length >= num_frames: - break - assert frames - # If the above assert fails, it is likely because we seeked past the end of file point, - # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp. - # This will need proper debugging, in due time. - wav = torch.cat(frames, dim=1) - assert wav.shape[0] == stream.channels - if num_frames > 0: - wav = wav[:, :num_frames] - return f32_pcm(wav), sr - - -def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0., - duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]: - """Read audio by picking the most appropriate backend tool based on the audio format. 
- - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - pad (bool): Pad output audio if not reaching expected duration. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate. - """ - fp = Path(filepath) - if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg - # There is some bug with ffmpeg and reading flac - info = _soundfile_info(filepath) - frames = -1 if duration <= 0 else int(duration * info.sample_rate) - frame_offset = int(seek_time * info.sample_rate) - wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32) - assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}" - wav = torch.from_numpy(wav).t().contiguous() - if len(wav.shape) == 1: - wav = torch.unsqueeze(wav, 0) - elif ( - fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats() - and duration <= 0 and seek_time == 0 - ): - # Torchaudio is faster if we load an entire file at once. - wav, sr = ta.load(fp) - else: - wav, sr = _av_read(filepath, seek_time, duration) - if pad and duration > 0: - expected_frames = int(duration * sr) - wav = F.pad(wav, (0, expected_frames - wav.shape[-1])) - return wav, sr - - -def audio_write(stem_name: tp.Union[str, Path], - wav: torch.Tensor, sample_rate: int, - format: str = 'wav', mp3_rate: int = 320, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, - log_clipping: bool = True, make_parent_dir: bool = True, - add_suffix: bool = True) -> Path: - """Convenience function for saving audio to disk. Returns the filename the audio was written to. - - Args: - stem_name (str or Path): Filename without extension which will be added automatically. - format (str): Either "wav" or "mp3". - mp3_rate (int): kbps when using mp3s. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'. - when strategy is 'loudness'log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - make_parent_dir (bool): Make parent directory if it doesn't exist. - Returns: - Path: Path of the saved audio. 
- """ - assert wav.dtype.is_floating_point, "wav is not floating point" - if wav.dim() == 1: - wav = wav[None] - elif wav.dim() > 2: - raise ValueError("Input wav should be at most 2 dimension.") - assert wav.isfinite().all() - wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db, - rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping, - sample_rate=sample_rate, stem_name=str(stem_name)) - kwargs: dict = {} - if format == 'mp3': - suffix = '.mp3' - kwargs.update({"compression": mp3_rate}) - elif format == 'wav': - wav = i16_pcm(wav) - suffix = '.wav' - kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16}) - else: - raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.") - if not add_suffix: - suffix = '' - path = Path(str(stem_name) + suffix) - if make_parent_dir: - path.parent.mkdir(exist_ok=True, parents=True) - try: - ta.save(path, wav, sample_rate, **kwargs) - except Exception: - if path.exists(): - # we do not want to leave half written files around. - path.unlink() - raise - return path diff --git a/spaces/HLasse/textdescriptives/options.py b/spaces/HLasse/textdescriptives/options.py deleted file mode 100644 index 56e9a9fba21988f853f67e2f0e1553afd55b371a..0000000000000000000000000000000000000000 --- a/spaces/HLasse/textdescriptives/options.py +++ /dev/null @@ -1,113 +0,0 @@ -from typing import Dict, List, Set - -from spacy.cli.download import get_compatibility - - -def metrics_options() -> List[str]: - return [ - "descriptive_stats", - "readability", - "dependency_distance", - "pos_proportions", - "coherence", - "quality", - "information_theory", - ] - - -def language_options() -> Dict[str, str]: - return { - "Catalan": "ca", - "Chinese": "zh", - "Croatian": "hr", - "Danish": "da", - "Dutch": "nl", - "English": "en", - "Finnish": "fi", - "French": "fr", - "German": "de", - "Greek": "el", - "Italian": "it", - "Japanese": "ja", - "Korean": "ko", - "Lithuanian": "lt", - "Macedonian": "mk", - "Multi-language": "xx", - "Norwegian Bokmål": "nb", - "Polish": "pl", - "Portuguese": "pt", - "Romanian": "ro", - "Russian": "ru", - "Spanish": "es", - "Swedish": "sv", - "Ukrainian": "uk", - } - - -################# -# Model options # -################# - - -def all_model_size_options_pretty_to_short() -> Dict[str, str]: - return { - "Small": "sm", - "Medium": "md", - "Large": "lg", - # "Transformer": "trf" # Disabled for now - } - - -def all_model_size_options_short_to_pretty() -> Dict[str, str]: - return { - short: pretty - for pretty, short in all_model_size_options_pretty_to_short().items() - } - - -def available_model_size_options(lang) -> List[str]: - short_to_pretty = all_model_size_options_short_to_pretty() - if lang == "all": - return sorted(list(short_to_pretty.values())) - return sorted( - [ - short_to_pretty[short] - for short in ModelAvailabilityChecker.available_model_sizes_for_language( - lang - ) - ] - ) - - -class ModelAvailabilityChecker: - @staticmethod - def available_models() -> List[str]: - return list(get_compatibility().keys()) - - @staticmethod - def extract_language_and_size() -> List[List[str]]: - # [["ca", "sm"], ["en", "lg"], ...] 
- return list( - [ - list(map(m.split("_").__getitem__, [0, -1])) - for m in ModelAvailabilityChecker.available_models() - ] - ) - - @staticmethod - def model_is_available(lang: str, size: str) -> bool: - lang_and_size = set( - [ - "_".join(lang_size) - for lang_size in ModelAvailabilityChecker.extract_language_and_size() - ] - ) - return f"{lang}_{size}" in lang_and_size - - @staticmethod - def available_model_sizes_for_language(lang: str) -> Set[str]: - return set([ - size - for (lang_, size) in ModelAvailabilityChecker.extract_language_and_size() - if lang_ == lang and size in all_model_size_options_pretty_to_short().values() - ]) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/paraphraser/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/paraphraser/README.md deleted file mode 100644 index 3810311f30f99f0a07fd8e5d3723bffeba9948c3..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/paraphraser/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# Paraphrasing with round-trip translation and mixture of experts - -Machine translation models can be used to paraphrase text by translating it to -an intermediate language and back (round-trip translation). - -This example shows how to paraphrase text by first passing it to an -English-French translation model, followed by a French-English [mixture of -experts translation model](/examples/translation_moe). - -##### 0. Setup - -Clone fairseq from source and install necessary dependencies: -```bash -git clone https://github.com/pytorch/fairseq.git -cd fairseq -pip install --editable . -pip install sacremoses sentencepiece -``` - -##### 1. Download models -```bash -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.en-fr.tar.gz -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.fr-en.hMoEup.tar.gz -tar -xzvf paraphraser.en-fr.tar.gz -tar -xzvf paraphraser.fr-en.hMoEup.tar.gz -``` - -##### 2. Paraphrase -```bash -python examples/paraphraser/paraphrase.py \ - --en2fr paraphraser.en-fr \ - --fr2en paraphraser.fr-en.hMoEup -# Example input: -# The new date for the Games, postponed for a year in response to the coronavirus pandemic, gives athletes time to recalibrate their training schedules. -# Example outputs: -# Delayed one year in response to the coronavirus pandemic, the new date of the Games gives athletes time to rebalance their training schedule. -# The new date of the Games, which was rescheduled one year in response to the coronavirus (CV) pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, provides athletes with time to rebalance their training schedule. -# The Games' new date, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new Games date, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, which was postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to re-balance their training schedule. 
-# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their schedule of training. -# The new date of the Games, postponed one year in response to the pandemic of coronavirus, gives the athletes time to rebalance their training schedule. -``` diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py deleted file mode 100644 index c6512d7322def67b27aba46e9e36da171db6963b..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import numpy as np -import sys - - -def get_parser(): - parser = argparse.ArgumentParser( - description="converts words to phones adding optional silences around in between words" - ) - parser.add_argument( - "--sil-prob", - "-s", - type=float, - default=0, - help="probability of inserting silence between each word", - ) - parser.add_argument( - "--surround", - action="store_true", - help="if set, surrounds each example with silence", - ) - parser.add_argument( - "--lexicon", - help="lexicon to convert to phones", - required=True, - ) - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - sil_prob = args.sil_prob - surround = args.surround - sil = "" - - wrd_to_phn = {} - - with open(args.lexicon, "r") as lf: - for line in lf: - items = line.rstrip().split() - assert len(items) > 1, line - assert items[0] not in wrd_to_phn, items - wrd_to_phn[items[0]] = items[1:] - - for line in sys.stdin: - words = line.strip().split() - - if not all(w in wrd_to_phn for w in words): - continue - - phones = [] - if surround: - phones.append(sil) - - sample_sil_probs = None - if sil_prob > 0 and len(words) > 1: - sample_sil_probs = np.random.random(len(words) - 1) - - for i, w in enumerate(words): - phones.extend(wrd_to_phn[w]) - if ( - sample_sil_probs is not None - and i < len(sample_sil_probs) - and sample_sil_probs[i] < sil_prob - ): - phones.append(sil) - - if surround: - phones.append(sil) - print(" ".join(phones)) - - -if __name__ == "__main__": - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/noising.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/noising.py deleted file mode 100644 index 2b1cc347203bfbdc9f1cba29e2e36427b7b5be57..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/noising.py +++ /dev/null @@ -1,335 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import torch -from fairseq.data import data_utils - - -class WordNoising(object): - """Generate a noisy version of a sentence, without changing words themselves.""" - - def __init__(self, dictionary, bpe_cont_marker="@@", bpe_end_marker=None): - self.dictionary = dictionary - self.bpe_end = None - if bpe_cont_marker: - self.bpe_end = np.array( - [ - not self.dictionary[i].endswith(bpe_cont_marker) - for i in range(len(self.dictionary)) - ] - ) - elif bpe_end_marker: - self.bpe_end = np.array( - [ - self.dictionary[i].endswith(bpe_end_marker) - for i in range(len(self.dictionary)) - ] - ) - - self.get_word_idx = ( - self._get_bpe_word_idx if self.bpe_end is not None else self._get_token_idx - ) - - def noising(self, x, lengths, noising_prob=0.0): - raise NotImplementedError() - - def _get_bpe_word_idx(self, x): - """ - Given a list of BPE tokens, for every index in the tokens list, - return the index of the word grouping that it belongs to. - For example, for input x corresponding to ["how", "are", "y@@", "ou"], - return [[0], [1], [2], [2]]. - """ - # x: (T x B) - bpe_end = self.bpe_end[x] - - if x.size(0) == 1 and x.size(1) == 1: - # Special case when we only have one word in x. If x = [[N]], - # bpe_end is a scalar (bool) instead of a 2-dim array of bools, - # which makes the sum operation below fail. - return np.array([[0]]) - - # do a reduce front sum to generate word ids - word_idx = bpe_end[::-1].cumsum(0)[::-1] - word_idx = word_idx.max(0)[None, :] - word_idx - return word_idx - - def _get_token_idx(self, x): - """ - This is to extend noising functions to be able to apply to non-bpe - tokens, e.g. word or characters. - """ - x = torch.t(x) - word_idx = np.array([range(len(x_i)) for x_i in x]) - return np.transpose(word_idx) - - -class WordDropout(WordNoising): - """Randomly drop input words. If not passing blank_idx (default is None), - then dropped words will be removed. Otherwise, it will be replaced by the - blank_idx.""" - - def __init__( - self, - dictionary, - default_dropout_prob=0.1, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary, bpe_cont_marker, bpe_end_marker) - self.default_dropout_prob = default_dropout_prob - - def noising(self, x, lengths, dropout_prob=None, blank_idx=None): - if dropout_prob is None: - dropout_prob = self.default_dropout_prob - # x: (T x B), lengths: B - if dropout_prob == 0: - return x, lengths - - assert 0 < dropout_prob < 1 - - # be sure to drop entire words - word_idx = self.get_word_idx(x) - sentences = [] - modified_lengths = [] - for i in range(lengths.size(0)): - # Since dropout probabilities need to apply over non-pad tokens, - # it is not trivial to generate the keep mask without consider - # input lengths; otherwise, this could be done outside the loop - - # We want to drop whole words based on word_idx grouping - num_words = max(word_idx[:, i]) + 1 - - # ith example: [x0, x1, ..., eos, pad, ..., pad] - # We should only generate keep probs for non-EOS tokens. Thus if the - # input sentence ends in EOS, the last word idx is not included in - # the dropout mask generation and we append True to always keep EOS. - # Otherwise, just generate the dropout mask for all word idx - # positions. - has_eos = x[lengths[i] - 1, i] == self.dictionary.eos() - if has_eos: # has eos? 
- keep = np.random.rand(num_words - 1) >= dropout_prob - keep = np.append(keep, [True]) # keep EOS symbol - else: - keep = np.random.rand(num_words) >= dropout_prob - - words = x[: lengths[i], i].tolist() - - # TODO: speed up the following loop - # drop words from the input according to keep - new_s = [ - w if keep[word_idx[j, i]] else blank_idx for j, w in enumerate(words) - ] - new_s = [w for w in new_s if w is not None] - # we need to have at least one word in the sentence (more than the - # start / end sentence symbols) - if len(new_s) <= 1: - # insert at beginning in case the only token left is EOS - # EOS should be at end of list. - new_s.insert(0, words[np.random.randint(0, len(words))]) - assert len(new_s) >= 1 and ( - not has_eos # Either don't have EOS at end or last token is EOS - or (len(new_s) >= 2 and new_s[-1] == self.dictionary.eos()) - ), "New sentence is invalid." - sentences.append(new_s) - modified_lengths.append(len(new_s)) - # re-construct input - modified_lengths = torch.LongTensor(modified_lengths) - modified_x = torch.LongTensor( - modified_lengths.max(), modified_lengths.size(0) - ).fill_(self.dictionary.pad()) - for i in range(modified_lengths.size(0)): - modified_x[: modified_lengths[i], i].copy_(torch.LongTensor(sentences[i])) - - return modified_x, modified_lengths - - -class WordShuffle(WordNoising): - """Shuffle words by no more than k positions.""" - - def __init__( - self, - dictionary, - default_max_shuffle_distance=3, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary, bpe_cont_marker, bpe_end_marker) - self.default_max_shuffle_distance = 3 - - def noising(self, x, lengths, max_shuffle_distance=None): - if max_shuffle_distance is None: - max_shuffle_distance = self.default_max_shuffle_distance - # x: (T x B), lengths: B - if max_shuffle_distance == 0: - return x, lengths - - # max_shuffle_distance < 1 will return the same sequence - assert max_shuffle_distance > 1 - - # define noise word scores - noise = np.random.uniform( - 0, - max_shuffle_distance, - size=(x.size(0), x.size(1)), - ) - noise[0] = -1 # do not move start sentence symbol - # be sure to shuffle entire words - word_idx = self.get_word_idx(x) - x2 = x.clone() - for i in range(lengths.size(0)): - length_no_eos = lengths[i] - if x[lengths[i] - 1, i] == self.dictionary.eos(): - length_no_eos = lengths[i] - 1 - # generate a random permutation - scores = word_idx[:length_no_eos, i] + noise[word_idx[:length_no_eos, i], i] - # ensure no reordering inside a word - scores += 1e-6 * np.arange(length_no_eos.item()) - permutation = scores.argsort() - # shuffle words - x2[:length_no_eos, i].copy_( - x2[:length_no_eos, i][torch.from_numpy(permutation)] - ) - return x2, lengths - - -class UnsupervisedMTNoising(WordNoising): - """ - Implements the default configuration for noising in UnsupervisedMT - (github.com/facebookresearch/UnsupervisedMT) - """ - - def __init__( - self, - dictionary, - max_word_shuffle_distance, - word_dropout_prob, - word_blanking_prob, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary) - self.max_word_shuffle_distance = max_word_shuffle_distance - self.word_dropout_prob = word_dropout_prob - self.word_blanking_prob = word_blanking_prob - - self.word_dropout = WordDropout( - dictionary=dictionary, - bpe_cont_marker=bpe_cont_marker, - bpe_end_marker=bpe_end_marker, - ) - self.word_shuffle = WordShuffle( - dictionary=dictionary, - bpe_cont_marker=bpe_cont_marker, - bpe_end_marker=bpe_end_marker, - ) - - def noising(self, x, 
lengths): - # 1. Word Shuffle - noisy_src_tokens, noisy_src_lengths = self.word_shuffle.noising( - x=x, - lengths=lengths, - max_shuffle_distance=self.max_word_shuffle_distance, - ) - # 2. Word Dropout - noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising( - x=noisy_src_tokens, - lengths=noisy_src_lengths, - dropout_prob=self.word_dropout_prob, - ) - # 3. Word Blanking - noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising( - x=noisy_src_tokens, - lengths=noisy_src_lengths, - dropout_prob=self.word_blanking_prob, - blank_idx=self.dictionary.unk(), - ) - - return noisy_src_tokens - - -class NoisingDataset(torch.utils.data.Dataset): - def __init__( - self, - src_dataset, - src_dict, - seed, - noiser=None, - noising_class=UnsupervisedMTNoising, - **kwargs - ): - """ - Wrap a :class:`~torch.utils.data.Dataset` and apply noise to the - samples based on the supplied noising configuration. - - Args: - src_dataset (~torch.utils.data.Dataset): dataset to wrap. - to build self.src_dataset -- - a LanguagePairDataset with src dataset as the source dataset and - None as the target dataset. Should NOT have padding so that - src_lengths are accurately calculated by language_pair_dataset - collate function. - We use language_pair_dataset here to encapsulate the tgt_dataset - so we can re-use the LanguagePairDataset collater to format the - batches in the structure that SequenceGenerator expects. - src_dict (~fairseq.data.Dictionary): source dictionary - seed (int): seed to use when generating random noise - noiser (WordNoising): a pre-initialized :class:`WordNoising` - instance. If this is None, a new instance will be created using - *noising_class* and *kwargs*. - noising_class (class, optional): class to use to initialize a - default :class:`WordNoising` instance. - kwargs (dict, optional): arguments to initialize the default - :class:`WordNoising` instance given by *noiser*. - """ - self.src_dataset = src_dataset - self.src_dict = src_dict - self.seed = seed - self.noiser = ( - noiser - if noiser is not None - else noising_class( - dictionary=src_dict, - **kwargs, - ) - ) - self.sizes = src_dataset.sizes - - - def __getitem__(self, index): - """ - Returns a single noisy sample. Multiple samples are fed to the collater - create a noising dataset batch. - """ - src_tokens = self.src_dataset[index] - src_lengths = torch.LongTensor([len(src_tokens)]) - src_tokens = src_tokens.unsqueeze(0) - - # Transpose src tokens to fit expected shape of x in noising function - # (batch size, sequence length) -> (sequence length, batch size) - src_tokens_t = torch.t(src_tokens) - - with data_utils.numpy_seed(self.seed + index): - noisy_src_tokens = self.noiser.noising(src_tokens_t, src_lengths) - - # Transpose back to expected src_tokens format - # (sequence length, 1) -> (1, sequence length) - noisy_src_tokens = torch.t(noisy_src_tokens) - return noisy_src_tokens[0] - - def __len__(self): - """ - The length of the noising dataset is the length of src. 
- """ - return len(self.src_dataset) - - @property - def supports_prefetch(self): - return self.src_dataset.supports_prefetch - - def prefetch(self, indices): - if self.src_dataset.supports_prefetch: - self.src_dataset.prefetch(indices) diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/inference/__init__.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/install.sh b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/install.sh deleted file mode 100644 index 51e038d5a0098f21d4efd8051a15b7f0cdeb4b73..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/install.sh +++ /dev/null @@ -1,6 +0,0 @@ -cd src/glow_tts/monotonic_align/ -pip install . -cd ../../../ - -# torch -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html diff --git a/spaces/Hoodady/3DFuse/my/utils/plot.py b/spaces/Hoodady/3DFuse/my/utils/plot.py deleted file mode 100644 index e4172311da88fbabcd107dd3f57b98db7638243a..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/my/utils/plot.py +++ /dev/null @@ -1,9 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt - - -def mpl_fig_to_buffer(fig): - fig.canvas.draw() - plot = np.array(fig.canvas.renderer.buffer_rgba()) - plt.close(fig) - return plot diff --git a/spaces/ICML2022/OFA/fairseq/examples/language_model/README.md b/spaces/ICML2022/OFA/fairseq/examples/language_model/README.md deleted file mode 100644 index e78ea48e08dc99b69751923762107a8f8a9a5e3e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/language_model/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# Neural Language Modeling - -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`transformer_lm.gbw.adaptive_huge` | Adaptive Inputs
    ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) <br> 1026M params | [Google Billion Words](https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2)
-`transformer_lm.wiki103.adaptive` | Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) <br> 247M params | [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2)
-`transformer_lm.wmt19.en` | English LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz)
-`transformer_lm.wmt19.de` | German LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz)
-`transformer_lm.wmt19.ru` | Russian LM
    ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz) - -## Example usage - -We require a few additional Python dependencies for preprocessing: -```bash -pip install fastBPE sacremoses -``` - -To sample from a language model using PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'transformer_lm.wmt19.en', ...] - -# Load an English LM trained on WMT'19 News Crawl data -en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe') -en_lm.eval() # disable dropout - -# Move model to GPU -en_lm.cuda() - -# Sample from the language model -en_lm.sample('Barack Obama', beam=1, sampling=True, sampling_topk=10, temperature=0.8) -# "Barack Obama is coming to Sydney and New Zealand (...)" - -# Compute perplexity for a sequence -en_lm.score('Barack Obama is coming to Sydney and New Zealand')['positional_scores'].mean().neg().exp() -# tensor(15.1474) - -# The same interface can be used with custom models as well -from fairseq.models.transformer_lm import TransformerLanguageModel -custom_lm = TransformerLanguageModel.from_pretrained('/path/to/model/dir', 'checkpoint100.pt', tokenizer='moses', bpe='fastbpe') -custom_lm.sample('Barack Obama', beam=5) -# "Barack Obama (...)" -``` - -## Training a transformer language model with the CLI tools - -### 1) Preprocess the data - -First download and prepare the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/): -```bash -cd examples/language_model/ -bash prepare-wikitext-103.sh -cd ../.. -``` - -Next preprocess/binarize the data: -```bash -TEXT=examples/language_model/wikitext-103 -fairseq-preprocess \ - --only-source \ - --trainpref $TEXT/wiki.train.tokens \ - --validpref $TEXT/wiki.valid.tokens \ - --testpref $TEXT/wiki.test.tokens \ - --destdir data-bin/wikitext-103 \ - --workers 20 -``` - -### 2) Train a language model - -Next we'll train a basic transformer language model on wikitext-103. For more -advanced usage, see the [adaptive inputs README](README.adaptive_inputs.md). - -To train a basic LM (assumes 2 GPUs): -``` -$ fairseq-train --task language_modeling \ - data-bin/wikitext-103 \ - --save-dir checkpoints/transformer_wikitext-103 \ - --arch transformer_lm --share-decoder-input-output-embed \ - --dropout 0.1 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --weight-decay 0.01 --clip-norm 0.0 \ - --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \ - --tokens-per-sample 512 --sample-break-mode none \ - --max-tokens 2048 --update-freq 16 \ - --fp16 \ - --max-update 50000 -``` - -If you run out of memory, try reducing `--max-tokens` (max number of tokens per -batch) or `--tokens-per-sample` (max sequence length). You can also adjust -`--update-freq` to accumulate gradients and simulate training on a different -number of GPUs. - -### 3) Evaluate - -```bash -fairseq-eval-lm data-bin/wikitext-103 \ - --path checkpoints/transformer_wiki103/checkpoint_best.pt \ - --batch-size 2 \ - --tokens-per-sample 512 \ - --context-window 400 -# | Evaluated 245569 tokens in 56.1s (4379.02 tokens/s) -# | Loss: 3.4164, Perplexity: 30.46 -``` - -*Note:* The `--context-window` option controls how much context is provided to -each token when computing perplexity. 
When the window size is 0, the dataset is -chunked into segments of length 512 and perplexity is computed over each segment -normally. However, this results in worse (higher) perplexity since tokens that -appear earlier in each segment have less conditioning. When the maximum window -size is used (511 in this case), then we compute perplexity for each token -fully conditioned on 511 tokens of context. This slows down evaluation -significantly, since we must run a separate forward pass for every token in the -dataset, but results in better (lower) perplexity. - - -## Convolutional language models - -Please see the [convolutional LM README](README.conv.md) for instructions on -training convolutional language models. diff --git a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py b/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py deleted file mode 100644 index a44fad07f7c718f99cccd445f33c62b0e3c562f4..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -# Use: echo {text} | python tokenize_indic.py {language} - -import sys - -from indicnlp.normalize.indic_normalize import IndicNormalizerFactory -from indicnlp.tokenize.indic_tokenize import trivial_tokenize - - -factory = IndicNormalizerFactory() -normalizer = factory.get_normalizer( - sys.argv[1], remove_nuktas=False, nasals_mode="do_nothing" -) - -for line in sys.stdin: - normalized_line = normalizer.normalize(line.strip()) - tokenized_line = " ".join(trivial_tokenize(normalized_line, sys.argv[1])) - print(tokenized_line) diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py b/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py deleted file mode 100644 index 9db779396f492e3f71b08d7b895beb81d8e46bc9..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import itertools -import logging -import re -import time - -from g2p_en import G2p - -logger = logging.getLogger(__name__) - -FAIL_SENT = "FAILED_SENTENCE" - - -def parse(): - parser = argparse.ArgumentParser() - parser.add_argument("--data-path", type=str, required=True) - parser.add_argument("--out-path", type=str, required=True) - parser.add_argument("--lower-case", action="store_true") - parser.add_argument("--do-filter", action="store_true") - parser.add_argument("--use-word-start", action="store_true") - parser.add_argument("--dup-vowel", default=1, type=int) - parser.add_argument("--dup-consonant", default=1, type=int) - parser.add_argument("--no-punc", action="store_true") - parser.add_argument("--reserve-word", type=str, default="") - parser.add_argument( - "--reserve-first-column", - action="store_true", - help="first column is sentence id", - ) - ### - parser.add_argument("--parallel-process-num", default=1, type=int) - parser.add_argument("--logdir", default="") - args = parser.parse_args() - return args - - -def process_sent(sent, g2p, res_wrds, args): - sents = pre_process_sent(sent, args.do_filter, args.lower_case, res_wrds) - pho_seqs = [do_g2p(g2p, s, res_wrds, i == 0) for i, s in enumerate(sents)] - pho_seq = ( - [FAIL_SENT] - if [FAIL_SENT] in pho_seqs - else list(itertools.chain.from_iterable(pho_seqs)) - ) - if args.no_punc: - pho_seq = remove_punc(pho_seq) - if args.dup_vowel > 1 or args.dup_consonant > 1: - pho_seq = dup_pho(pho_seq, args.dup_vowel, args.dup_consonant) - if args.use_word_start: - pho_seq = add_word_start(pho_seq) - return " ".join(pho_seq) - - -def remove_punc(sent): - ns = [] - regex = re.compile("[^a-zA-Z0-9 ]") - for p in sent: - if (not regex.search(p)) or p == FAIL_SENT: - if p == " " and (len(ns) == 0 or ns[-1] == " "): - continue - ns.append(p) - return ns - - -def do_g2p(g2p, sent, res_wrds, is_first_sent): - if sent in res_wrds: - pho_seq = [res_wrds[sent]] - else: - pho_seq = g2p(sent) - if not is_first_sent: - pho_seq = [" "] + pho_seq # add space to separate - return pho_seq - - -def pre_process_sent(sent, do_filter, lower_case, res_wrds): - if do_filter: - sent = re.sub("-", " ", sent) - sent = re.sub("—", " ", sent) - if len(res_wrds) > 0: - wrds = sent.split() - wrds = ["SPLIT_ME " + w + " SPLIT_ME" if w in res_wrds else w for w in wrds] - sents = [x.strip() for x in " ".join(wrds).split("SPLIT_ME") if x.strip() != ""] - else: - sents = [sent] - if lower_case: - sents = [s.lower() if s not in res_wrds else s for s in sents] - return sents - - -def dup_pho(sent, dup_v_num, dup_c_num): - """ - duplicate phoneme defined as cmudict - http://www.speech.cs.cmu.edu/cgi-bin/cmudict - """ - if dup_v_num == 1 and dup_c_num == 1: - return sent - ns = [] - for p in sent: - ns.append(p) - if re.search(r"\d$", p): - for i in range(1, dup_v_num): - ns.append(f"{p}-{i}P") - elif re.search(r"\w", p): - for i in range(1, dup_c_num): - ns.append(f"{p}-{i}P") - return ns - - -def add_word_start(sent): - ns = [] - do_add = True - ws = "▁" - for p in sent: - if do_add: - p = ws + p - do_add = False - if p == " ": - do_add = True - else: - ns.append(p) - return ns - - -def load_reserve_word(reserve_word): - if reserve_word == "": - return [] - with open(reserve_word, "r") as fp: - res_wrds = [x.strip().split() for x in fp.readlines() if x.strip() != ""] - assert sum([0 if len(x) == 2 else 1 for x in res_wrds]) == 0 - res_wrds = dict(res_wrds) - return res_wrds - - -def process_sents(sents, args): - g2p = G2p() - out_sents = [] - res_wrds 
= load_reserve_word(args.reserve_word) - for sent in sents: - col1 = "" - if args.reserve_first_column: - col1, sent = sent.split(None, 1) - sent = process_sent(sent, g2p, res_wrds, args) - if args.reserve_first_column and col1 != "": - sent = f"{col1} {sent}" - out_sents.append(sent) - return out_sents - - -def main(): - args = parse() - out_sents = [] - with open(args.data_path, "r") as fp: - sent_list = [x.strip() for x in fp.readlines()] - if args.parallel_process_num > 1: - try: - import submitit - except ImportError: - logger.warn( - "submitit is not found and only one job is used to process the data" - ) - submitit = None - - if args.parallel_process_num == 1 or submitit is None: - out_sents = process_sents(sent_list, args) - else: - # process sentences with parallel computation - lsize = len(sent_list) // args.parallel_process_num + 1 - executor = submitit.AutoExecutor(folder=args.logdir) - executor.update_parameters(timeout_min=1000, cpus_per_task=4) - jobs = [] - for i in range(args.parallel_process_num): - job = executor.submit( - process_sents, sent_list[lsize * i : lsize * (i + 1)], args - ) - jobs.append(job) - is_running = True - while is_running: - time.sleep(5) - is_running = sum([job.done() for job in jobs]) < len(jobs) - out_sents = list(itertools.chain.from_iterable([job.result() for job in jobs])) - with open(args.out_path, "w") as fp: - fp.write("\n".join(out_sents) + "\n") - - -if __name__ == "__main__": - main() diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/mustc_example.md b/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/mustc_example.md deleted file mode 100644 index c95ef3e15660107c3384f87c1680f005044e7f3b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/mustc_example.md +++ /dev/null @@ -1,155 +0,0 @@ -[[Back]](..) - -# S2T Example: Speech Translation (ST) on MuST-C - -[MuST-C](https://www.aclweb.org/anthology/N19-1202) is multilingual speech-to-text translation corpus with -8-language translations on English TED talks. We match the state-of-the-art performance in -[ESPNet-ST](https://arxiv.org/pdf/2004.10234.pdf) with a simpler model training pipeline. - -## Data Preparation -[Download](https://ict.fbk.eu/must-c) and unpack MuST-C data to a path -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}`, then preprocess it with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio soundfile sentencepiece - -# Generate TSV manifests, features, vocabulary -# and configuration for each language -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task asr \ - --vocab-type unigram --vocab-size 5000 -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task st \ - --vocab-type unigram --vocab-size 8000 - -# Add vocabulary and configuration for joint data -# (based on the manifests and features generated above) -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task asr --joint \ - --vocab-type unigram --vocab-size 10000 -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task st --joint \ - --vocab-type unigram --vocab-size 10000 -``` -The generated files (manifest, features, vocabulary and data configuration) will be added to -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}` (per-language data) and `MUSTC_ROOT` (joint data). 
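After preprocessing, a quick way to sanity-check the output is to open one of the generated TSV manifests. The snippet below is only an illustrative check added for convenience (it is not part of the original recipe); it assumes the usual fairseq S2T manifest columns (`id`, `audio`, `n_frames`, `tgt_text`, `speaker`) and that `MUSTC_ROOT` points at your data root.

```python
# Illustrative sanity check (assumption: fairseq S2T manifest layout).
import csv
import os

import pandas as pd

mustc_root = os.environ.get("MUSTC_ROOT", ".")
manifest_path = os.path.join(mustc_root, "en-de", "train_st.tsv")

# The generated manifests are tab-separated and unquoted.
manifest = pd.read_csv(manifest_path, sep="\t", quoting=csv.QUOTE_NONE)

print(manifest.columns.tolist())   # e.g. ['id', 'audio', 'n_frames', 'tgt_text', 'speaker']
print(f"{len(manifest)} training utterances")
print(manifest.head(3))
```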
- -Download our vocabulary files if you want to use our pre-trained models: -- ASR: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_vocab_unigram5000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_vocab_unigram5000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_vocab_unigram5000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_vocab_unigram5000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_vocab_unigram5000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_vocab_unigram5000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_vocab_unigram5000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_vocab_unigram5000.zip), [Joint](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_vocab_unigram10000.zip) -- ST: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_vocab_unigram8000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_vocab_unigram8000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_vocab_unigram8000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_vocab_unigram8000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_vocab_unigram8000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_vocab_unigram8000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_vocab_unigram8000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_vocab_unigram8000.zip), [Multilingual](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_vocab_unigram10000.zip) - -## ASR -#### Training -En-De as example: -```bash -fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -For joint model (using ASR data from all 8 directions): -```bash -fairseq-train ${MUSTC_ROOT} \ - --config-yaml config_asr.yaml \ - --train-subset train_de_asr,train_nl_asr,train_es_asr,train_fr_asr,train_it_asr,train_pt_asr,train_ro_asr,train_ru_asr \ - --valid-subset dev_de_asr,dev_nl_asr,dev_es_asr,dev_fr_asr,dev_it_asr,dev_pt_asr,dev_ro_asr,dev_ru_asr \ - --save-dir ${JOINT_ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `ASR_SAVE_DIR` (`JOINT_ASR_SAVE_DIR`) is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs -with 1 GPU. You may want to update it accordingly when using more than 1 GPU. 
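The `--update-freq` adjustment follows from keeping the number of tokens per optimizer step roughly constant: `max_tokens × num_gpus × update_freq`. The helper below is only a sketch of that arithmetic (it is not a fairseq utility); the 40,000 × 8 target comes from the command above.

```python
# Sketch of the --update-freq rule of thumb used above (not a fairseq utility).
def suggested_update_freq(num_gpus: int,
                          max_tokens: int = 40_000,
                          target_tokens_per_step: int = 40_000 * 8) -> int:
    """Keep max_tokens * num_gpus * update_freq roughly constant."""
    return max(1, round(target_tokens_per_step / (max_tokens * num_gpus)))

for gpus in (1, 2, 4, 8):
    print(f"{gpus} GPU(s) -> --update-freq {suggested_update_freq(gpus)}")
# 1 GPU(s) -> --update-freq 8
# 2 GPU(s) -> --update-freq 4
# 4 GPU(s) -> --update-freq 2
# 8 GPU(s) -> --update-freq 1
```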
- -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${MUSTC_ROOT}/en-de \ - --config-yaml config_asr.yaml --gen-subset tst-COMMON_asr --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct - -# For models trained on joint data -python scripts/average_checkpoints.py \ - --inputs ${JOINT_ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -for LANG in de nl es fr it pt ro ru; do - fairseq-generate ${MUSTC_ROOT} \ - --config-yaml config_asr.yaml --gen-subset tst-COMMON_${LANG}_asr --task speech_to_text \ - --path ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct -done -``` -#### Results -| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model | -|---|---|---|---|---|---|---|---|---|---|---|---| -| Single | s2t_transformer_s | 31M | [18.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_transformer_s.pt) | [17.6](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_transformer_s.pt) | [17.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_transformer_s.pt) | [17.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_transformer_s.pt) | [19.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_transformer_s.pt) | [18.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_transformer_s.pt) | (<-Download) | -| Joint | s2t_transformer_m | 76M | 16.8 | 16.7 | 16.9 | 16.9 | 17.0 | 17.4 | 17.0 | 16.9 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_transformer_m.pt) | - -## ST -#### Training -En-De as example: -```bash -fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -For multilingual model (all 8 directions): -```bash -fairseq-train ${MUSTC_ROOT} \ - --config-yaml config_st.yaml \ - --train-subset train_de_st,train_nl_st,train_es_st,train_fr_st,train_it_st,train_pt_st,train_ro_st,train_ru_st \ - --valid-subset dev_de_st,dev_nl_st,dev_es_st,dev_fr_st,dev_it_st,dev_pt_st,dev_ro_st,dev_ru_st \ - --save-dir ${MULTILINGUAL_ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --ignore-prefix-size 1 --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -where `ST_SAVE_DIR` (`MULTILINGUAL_ST_SAVE_DIR`) is the 
checkpoint root path. The ST encoder is pre-trained by ASR -for faster training and better performance: `--load-pretrained-encoder-from <(JOINT_)ASR checkpoint path>`. We set -`--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly when using more than 1 GPU. -For multilingual models, we prepend target language ID token as target BOS, which should be excluded from -the training loss via `--ignore-prefix-size 1`. - -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on the `tst-COMMON` split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --gen-subset tst-COMMON_st --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu - -# For multilingual models -python scripts/average_checkpoints.py \ - --inputs ${MULTILINGUAL_ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -for LANG in de nl es fr it pt ro ru; do - fairseq-generate ${MUSTC_ROOT} \ - --config-yaml config_st.yaml --gen-subset tst-COMMON_${LANG}_st --task speech_to_text \ - --prefix-size 1 --path ${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu -done -``` -For multilingual models, we force decoding from the target language ID token (as BOS) via `--prefix-size 1`. - -#### Results -| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model | -|---|---|---|---|---|---|---|---|---|---|---|---| -| Bilingual | s2t_transformer_s | 31M | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_transformer_s.pt) | [27.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_transformer_s.pt) | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_transformer_s.pt) | [32.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_transformer_s.pt) | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_transformer_s.pt) | [28.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_transformer_s.pt) | [21.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_transformer_s.pt) | [15.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_transformer_s.pt) | (<-Download) | -| Multilingual | s2t_transformer_m | 76M | 24.5 | 28.6 | 28.2 | 34.9 | 24.6 | 31.1 | 23.8 | 16.0 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_transformer_m.pt) | - -[[Back]](..) 
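As a footnote to the multilingual commands above: `--ignore-prefix-size 1` and `--prefix-size 1` both refer to the prepended target-language ID token. The toy snippet below (not fairseq code; the token spelling and sentence are placeholders) illustrates how that prefix is treated during training and decoding.

```python
# Toy illustration only; the token spelling and sentence are made up.
lang_id = "<lang:de>"
target = [lang_id, "Hallo", "Welt", "</s>"]

# --ignore-prefix-size 1: the language ID token is not scored by the training loss.
loss_targets = target[1:]

# --prefix-size 1: at inference the decoder is forced to emit the language ID
# first, so generation is conditioned on the desired target language.
decode_prefix = target[:1]

print("scored during training:", loss_targets)
print("forced decoding prefix:", decode_prefix)
```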
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/__init__.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/__init__.py deleted file mode 100644 index b0318f88d6a63a6ba37fd2bf7ec4869084a45966..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/__init__.py +++ /dev/null @@ -1,508 +0,0 @@ -import glob -import json -import logging -import os -import sys -from pathlib import Path - -logger = logging.getLogger(__name__) - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -try: - import comet_ml - - # Project Configuration - config = comet_ml.config.get_config() - COMET_PROJECT_NAME = config.get_string(os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5") -except (ModuleNotFoundError, ImportError): - comet_ml = None - COMET_PROJECT_NAME = None - -import PIL -import torch -import torchvision.transforms as T -import yaml - -from utils.dataloaders import img2label_paths -from utils.general import check_dataset, scale_boxes, xywh2xyxy -from utils.metrics import box_iou - -COMET_PREFIX = "comet://" - -COMET_MODE = os.getenv("COMET_MODE", "online") - -# Model Saving Settings -COMET_MODEL_NAME = os.getenv("COMET_MODEL_NAME", "yolov5") - -# Dataset Artifact Settings -COMET_UPLOAD_DATASET = os.getenv("COMET_UPLOAD_DATASET", "false").lower() == "true" - -# Evaluation Settings -COMET_LOG_CONFUSION_MATRIX = os.getenv("COMET_LOG_CONFUSION_MATRIX", "true").lower() == "true" -COMET_LOG_PREDICTIONS = os.getenv("COMET_LOG_PREDICTIONS", "true").lower() == "true" -COMET_MAX_IMAGE_UPLOADS = int(os.getenv("COMET_MAX_IMAGE_UPLOADS", 100)) - -# Confusion Matrix Settings -CONF_THRES = float(os.getenv("CONF_THRES", 0.001)) -IOU_THRES = float(os.getenv("IOU_THRES", 0.6)) - -# Batch Logging Settings -COMET_LOG_BATCH_METRICS = os.getenv("COMET_LOG_BATCH_METRICS", "false").lower() == "true" -COMET_BATCH_LOGGING_INTERVAL = os.getenv("COMET_BATCH_LOGGING_INTERVAL", 1) -COMET_PREDICTION_LOGGING_INTERVAL = os.getenv("COMET_PREDICTION_LOGGING_INTERVAL", 1) -COMET_LOG_PER_CLASS_METRICS = os.getenv("COMET_LOG_PER_CLASS_METRICS", "false").lower() == "true" - -RANK = int(os.getenv("RANK", -1)) - -to_pil = T.ToPILImage() - - -class CometLogger: - """Log metrics, parameters, source code, models and much more - with Comet - """ - - def __init__(self, opt, hyp, run_id=None, job_type="Training", **experiment_kwargs) -> None: - self.job_type = job_type - self.opt = opt - self.hyp = hyp - - # Comet Flags - self.comet_mode = COMET_MODE - - self.save_model = opt.save_period > -1 - self.model_name = COMET_MODEL_NAME - - # Batch Logging Settings - self.log_batch_metrics = COMET_LOG_BATCH_METRICS - self.comet_log_batch_interval = COMET_BATCH_LOGGING_INTERVAL - - # Dataset Artifact Settings - self.upload_dataset = self.opt.upload_dataset if self.opt.upload_dataset else COMET_UPLOAD_DATASET - self.resume = self.opt.resume - - # Default parameters to pass to Experiment objects - self.default_experiment_kwargs = { - "log_code": False, - "log_env_gpu": True, - "log_env_cpu": True, - "project_name": COMET_PROJECT_NAME,} - self.default_experiment_kwargs.update(experiment_kwargs) - self.experiment = self._get_experiment(self.comet_mode, run_id) - - self.data_dict = self.check_dataset(self.opt.data) - self.class_names = self.data_dict["names"] - self.num_classes = self.data_dict["nc"] - - 
self.logged_images_count = 0 - self.max_images = COMET_MAX_IMAGE_UPLOADS - - if run_id is None: - self.experiment.log_other("Created from", "YOLOv5") - if not isinstance(self.experiment, comet_ml.OfflineExperiment): - workspace, project_name, experiment_id = self.experiment.url.split("/")[-3:] - self.experiment.log_other( - "Run Path", - f"{workspace}/{project_name}/{experiment_id}", - ) - self.log_parameters(vars(opt)) - self.log_parameters(self.opt.hyp) - self.log_asset_data( - self.opt.hyp, - name="hyperparameters.json", - metadata={"type": "hyp-config-file"}, - ) - self.log_asset( - f"{self.opt.save_dir}/opt.yaml", - metadata={"type": "opt-config-file"}, - ) - - self.comet_log_confusion_matrix = COMET_LOG_CONFUSION_MATRIX - - if hasattr(self.opt, "conf_thres"): - self.conf_thres = self.opt.conf_thres - else: - self.conf_thres = CONF_THRES - if hasattr(self.opt, "iou_thres"): - self.iou_thres = self.opt.iou_thres - else: - self.iou_thres = IOU_THRES - - self.log_parameters({"val_iou_threshold": self.iou_thres, "val_conf_threshold": self.conf_thres}) - - self.comet_log_predictions = COMET_LOG_PREDICTIONS - if self.opt.bbox_interval == -1: - self.comet_log_prediction_interval = 1 if self.opt.epochs < 10 else self.opt.epochs // 10 - else: - self.comet_log_prediction_interval = self.opt.bbox_interval - - if self.comet_log_predictions: - self.metadata_dict = {} - self.logged_image_names = [] - - self.comet_log_per_class_metrics = COMET_LOG_PER_CLASS_METRICS - - self.experiment.log_others({ - "comet_mode": COMET_MODE, - "comet_max_image_uploads": COMET_MAX_IMAGE_UPLOADS, - "comet_log_per_class_metrics": COMET_LOG_PER_CLASS_METRICS, - "comet_log_batch_metrics": COMET_LOG_BATCH_METRICS, - "comet_log_confusion_matrix": COMET_LOG_CONFUSION_MATRIX, - "comet_model_name": COMET_MODEL_NAME,}) - - # Check if running the Experiment with the Comet Optimizer - if hasattr(self.opt, "comet_optimizer_id"): - self.experiment.log_other("optimizer_id", self.opt.comet_optimizer_id) - self.experiment.log_other("optimizer_objective", self.opt.comet_optimizer_objective) - self.experiment.log_other("optimizer_metric", self.opt.comet_optimizer_metric) - self.experiment.log_other("optimizer_parameters", json.dumps(self.hyp)) - - def _get_experiment(self, mode, experiment_id=None): - if mode == "offline": - if experiment_id is not None: - return comet_ml.ExistingOfflineExperiment( - previous_experiment=experiment_id, - **self.default_experiment_kwargs, - ) - - return comet_ml.OfflineExperiment(**self.default_experiment_kwargs,) - - else: - try: - if experiment_id is not None: - return comet_ml.ExistingExperiment( - previous_experiment=experiment_id, - **self.default_experiment_kwargs, - ) - - return comet_ml.Experiment(**self.default_experiment_kwargs) - - except ValueError: - logger.warning("COMET WARNING: " - "Comet credentials have not been set. " - "Comet will default to offline logging. 
" - "Please set your credentials to enable online logging.") - return self._get_experiment("offline", experiment_id) - - return - - def log_metrics(self, log_dict, **kwargs): - self.experiment.log_metrics(log_dict, **kwargs) - - def log_parameters(self, log_dict, **kwargs): - self.experiment.log_parameters(log_dict, **kwargs) - - def log_asset(self, asset_path, **kwargs): - self.experiment.log_asset(asset_path, **kwargs) - - def log_asset_data(self, asset, **kwargs): - self.experiment.log_asset_data(asset, **kwargs) - - def log_image(self, img, **kwargs): - self.experiment.log_image(img, **kwargs) - - def log_model(self, path, opt, epoch, fitness_score, best_model=False): - if not self.save_model: - return - - model_metadata = { - "fitness_score": fitness_score[-1], - "epochs_trained": epoch + 1, - "save_period": opt.save_period, - "total_epochs": opt.epochs,} - - model_files = glob.glob(f"{path}/*.pt") - for model_path in model_files: - name = Path(model_path).name - - self.experiment.log_model( - self.model_name, - file_or_folder=model_path, - file_name=name, - metadata=model_metadata, - overwrite=True, - ) - - def check_dataset(self, data_file): - with open(data_file) as f: - data_config = yaml.safe_load(f) - - if data_config['path'].startswith(COMET_PREFIX): - path = data_config['path'].replace(COMET_PREFIX, "") - data_dict = self.download_dataset_artifact(path) - - return data_dict - - self.log_asset(self.opt.data, metadata={"type": "data-config-file"}) - - return check_dataset(data_file) - - def log_predictions(self, image, labelsn, path, shape, predn): - if self.logged_images_count >= self.max_images: - return - detections = predn[predn[:, 4] > self.conf_thres] - iou = box_iou(labelsn[:, 1:], detections[:, :4]) - mask, _ = torch.where(iou > self.iou_thres) - if len(mask) == 0: - return - - filtered_detections = detections[mask] - filtered_labels = labelsn[mask] - - image_id = path.split("/")[-1].split(".")[0] - image_name = f"{image_id}_curr_epoch_{self.experiment.curr_epoch}" - if image_name not in self.logged_image_names: - native_scale_image = PIL.Image.open(path) - self.log_image(native_scale_image, name=image_name) - self.logged_image_names.append(image_name) - - metadata = [] - for cls, *xyxy in filtered_labels.tolist(): - metadata.append({ - "label": f"{self.class_names[int(cls)]}-gt", - "score": 100, - "box": { - "x": xyxy[0], - "y": xyxy[1], - "x2": xyxy[2], - "y2": xyxy[3]},}) - for *xyxy, conf, cls in filtered_detections.tolist(): - metadata.append({ - "label": f"{self.class_names[int(cls)]}", - "score": conf * 100, - "box": { - "x": xyxy[0], - "y": xyxy[1], - "x2": xyxy[2], - "y2": xyxy[3]},}) - - self.metadata_dict[image_name] = metadata - self.logged_images_count += 1 - - return - - def preprocess_prediction(self, image, labels, shape, pred): - nl, _ = labels.shape[0], pred.shape[0] - - # Predictions - if self.opt.single_cls: - pred[:, 5] = 0 - - predn = pred.clone() - scale_boxes(image.shape[1:], predn[:, :4], shape[0], shape[1]) - - labelsn = None - if nl: - tbox = xywh2xyxy(labels[:, 1:5]) # target boxes - scale_boxes(image.shape[1:], tbox, shape[0], shape[1]) # native-space labels - labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels - scale_boxes(image.shape[1:], predn[:, :4], shape[0], shape[1]) # native-space pred - - return predn, labelsn - - def add_assets_to_artifact(self, artifact, path, asset_path, split): - img_paths = sorted(glob.glob(f"{asset_path}/*")) - label_paths = img2label_paths(img_paths) - - for image_file, label_file in 
zip(img_paths, label_paths): - image_logical_path, label_logical_path = map(lambda x: os.path.relpath(x, path), [image_file, label_file]) - - try: - artifact.add(image_file, logical_path=image_logical_path, metadata={"split": split}) - artifact.add(label_file, logical_path=label_logical_path, metadata={"split": split}) - except ValueError as e: - logger.error('COMET ERROR: Error adding file to Artifact. Skipping file.') - logger.error(f"COMET ERROR: {e}") - continue - - return artifact - - def upload_dataset_artifact(self): - dataset_name = self.data_dict.get("dataset_name", "yolov5-dataset") - path = str((ROOT / Path(self.data_dict["path"])).resolve()) - - metadata = self.data_dict.copy() - for key in ["train", "val", "test"]: - split_path = metadata.get(key) - if split_path is not None: - metadata[key] = split_path.replace(path, "") - - artifact = comet_ml.Artifact(name=dataset_name, artifact_type="dataset", metadata=metadata) - for key in metadata.keys(): - if key in ["train", "val", "test"]: - if isinstance(self.upload_dataset, str) and (key != self.upload_dataset): - continue - - asset_path = self.data_dict.get(key) - if asset_path is not None: - artifact = self.add_assets_to_artifact(artifact, path, asset_path, key) - - self.experiment.log_artifact(artifact) - - return - - def download_dataset_artifact(self, artifact_path): - logged_artifact = self.experiment.get_artifact(artifact_path) - artifact_save_dir = str(Path(self.opt.save_dir) / logged_artifact.name) - logged_artifact.download(artifact_save_dir) - - metadata = logged_artifact.metadata - data_dict = metadata.copy() - data_dict["path"] = artifact_save_dir - - metadata_names = metadata.get("names") - if type(metadata_names) == dict: - data_dict["names"] = {int(k): v for k, v in metadata.get("names").items()} - elif type(metadata_names) == list: - data_dict["names"] = {int(k): v for k, v in zip(range(len(metadata_names)), metadata_names)} - else: - raise "Invalid 'names' field in dataset yaml file. 
Please use a list or dictionary" - - data_dict = self.update_data_paths(data_dict) - return data_dict - - def update_data_paths(self, data_dict): - path = data_dict.get("path", "") - - for split in ["train", "val", "test"]: - if data_dict.get(split): - split_path = data_dict.get(split) - data_dict[split] = (f"{path}/{split_path}" if isinstance(split, str) else [ - f"{path}/{x}" for x in split_path]) - - return data_dict - - def on_pretrain_routine_end(self, paths): - if self.opt.resume: - return - - for path in paths: - self.log_asset(str(path)) - - if self.upload_dataset: - if not self.resume: - self.upload_dataset_artifact() - - return - - def on_train_start(self): - self.log_parameters(self.hyp) - - def on_train_epoch_start(self): - return - - def on_train_epoch_end(self, epoch): - self.experiment.curr_epoch = epoch - - return - - def on_train_batch_start(self): - return - - def on_train_batch_end(self, log_dict, step): - self.experiment.curr_step = step - if self.log_batch_metrics and (step % self.comet_log_batch_interval == 0): - self.log_metrics(log_dict, step=step) - - return - - def on_train_end(self, files, save_dir, last, best, epoch, results): - if self.comet_log_predictions: - curr_epoch = self.experiment.curr_epoch - self.experiment.log_asset_data(self.metadata_dict, "image-metadata.json", epoch=curr_epoch) - - for f in files: - self.log_asset(f, metadata={"epoch": epoch}) - self.log_asset(f"{save_dir}/results.csv", metadata={"epoch": epoch}) - - if not self.opt.evolve: - model_path = str(best if best.exists() else last) - name = Path(model_path).name - if self.save_model: - self.experiment.log_model( - self.model_name, - file_or_folder=model_path, - file_name=name, - overwrite=True, - ) - - # Check if running Experiment with Comet Optimizer - if hasattr(self.opt, 'comet_optimizer_id'): - metric = results.get(self.opt.comet_optimizer_metric) - self.experiment.log_other('optimizer_metric_value', metric) - - self.finish_run() - - def on_val_start(self): - return - - def on_val_batch_start(self): - return - - def on_val_batch_end(self, batch_i, images, targets, paths, shapes, outputs): - if not (self.comet_log_predictions and ((batch_i + 1) % self.comet_log_prediction_interval == 0)): - return - - for si, pred in enumerate(outputs): - if len(pred) == 0: - continue - - image = images[si] - labels = targets[targets[:, 0] == si, 1:] - shape = shapes[si] - path = paths[si] - predn, labelsn = self.preprocess_prediction(image, labels, shape, pred) - if labelsn is not None: - self.log_predictions(image, labelsn, path, shape, predn) - - return - - def on_val_end(self, nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix): - if self.comet_log_per_class_metrics: - if self.num_classes > 1: - for i, c in enumerate(ap_class): - class_name = self.class_names[c] - self.experiment.log_metrics( - { - 'mAP@.5': ap50[i], - 'mAP@.5:.95': ap[i], - 'precision': p[i], - 'recall': r[i], - 'f1': f1[i], - 'true_positives': tp[i], - 'false_positives': fp[i], - 'support': nt[c]}, - prefix=class_name) - - if self.comet_log_confusion_matrix: - epoch = self.experiment.curr_epoch - class_names = list(self.class_names.values()) - class_names.append("background") - num_classes = len(class_names) - - self.experiment.log_confusion_matrix( - matrix=confusion_matrix.matrix, - max_categories=num_classes, - labels=class_names, - epoch=epoch, - column_label='Actual Category', - row_label='Predicted Category', - file_name=f"confusion-matrix-epoch-{epoch}.json", - ) - - def on_fit_epoch_end(self, result, epoch): - 
self.log_metrics(result, epoch=epoch) - - def on_model_save(self, last, epoch, final_epoch, best_fitness, fi): - if ((epoch + 1) % self.opt.save_period == 0 and not final_epoch) and self.opt.save_period != -1: - self.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi) - - def on_params_update(self, params): - self.log_parameters(params) - - def finish_run(self): - self.experiment.end() diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/custom_types.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/custom_types.py deleted file mode 100644 index 9e29951ed9cf690a34bb99e92b8a0ebe59f457a2..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/custom_types.py +++ /dev/null @@ -1,53 +0,0 @@ -# import open3d -import enum -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as nnf -# from .constants import DEBUG -from typing import Tuple, List, Union, Callable, Type, Iterator, Dict, Set, Optional, Any, Sized, Iterable -from types import DynamicClassAttribute -from enum import Enum, unique -import torch.optim.optimizer -import torch.utils.data - -# if DEBUG: -# seed = 99 -# torch.manual_seed(seed) -# np.random.seed(seed) - -N = type(None) -V = np.array -ARRAY = np.ndarray -ARRAYS = Union[Tuple[ARRAY, ...], List[ARRAY]] -VS = Union[Tuple[V, ...], List[V]] -VN = Union[V, N] -VNS = Union[VS, N] -T = torch.Tensor -TS = Union[Tuple[T, ...], List[T]] -TN = Optional[T] -TNS = Union[Tuple[TN, ...], List[TN]] -TSN = Optional[TS] -TA = Union[T, ARRAY] - -V_Mesh = Tuple[ARRAY, ARRAY] -T_Mesh = Tuple[T, Optional[T]] -T_Mesh_T = Union[T_Mesh, T] -COLORS = Union[T, ARRAY, Tuple[int, int, int]] - -D = torch.device -CPU = torch.device('cpu') - - -def get_device(device_id: int) -> D: - if not torch.cuda.is_available(): - return CPU - device_id = min(torch.cuda.device_count() - 1, device_id) - return torch.device(f'cuda:{device_id}') - - -CUDA = get_device -Optimizer = torch.optim.Adam -Dataset = torch.utils.data.Dataset -DataLoader = torch.utils.data.DataLoader -Subset = torch.utils.data.Subset diff --git a/spaces/KPCGD/bingo/src/lib/hooks/use-bing.ts b/spaces/KPCGD/bingo/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = 
draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - 
const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/Kaludi/Food-Category-Classification_App/app.py b/spaces/Kaludi/Food-Category-Classification_App/app.py deleted file mode 100644 index 1fef378a26fa7bdfb44114f8b72d4e97cecd7991..0000000000000000000000000000000000000000 --- a/spaces/Kaludi/Food-Category-Classification_App/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from transformers import pipeline - -examples = ["examples/example_0.jpg", - "examples/example_1.jpg", - "examples/example_2.jpg", - "examples/example_3.jpg", - "examples/example_4.jpg", - "examples/example_5.jpg", - "examples/example_6.jpg", - "examples/example_7.jpg"] - -pipe = pipeline(task="image-classification", - model="Kaludi/food-category-classification-v2.0") -gr.Interface.from_pipeline(pipe, - title="Food Category Classification App", - description = "This is a Food Category Image Classifier model that has been trained by Kaludi to recognize 12 different categories of foods, which includes Bread, Dairy, Dessert, Egg, Fried Food, Fruit, Meat, Noodles, Rice, Seafood, Soup, and Vegetable. It can accurately classify an image of food into one of these categories by analyzing its visual features. This model can be used by food bloggers, restaurants, and recipe websites to quickly categorize and sort their food images, making it easier to manage their content and provide a better user experience.", - article = "
    Github | HuggingFace
    ", - examples=examples, - ).launch() \ No newline at end of file diff --git a/spaces/Kelas/translation/README.md b/spaces/Kelas/translation/README.md deleted file mode 100644 index 3a390763d3983bea3ba3242524cda083b0bd0cbb..0000000000000000000000000000000000000000 --- a/spaces/Kelas/translation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Translation -emoji: 🚀 -colorFrom: gray -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: cc-by-sa-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kevin676/AutoGPT/run_continuous.bat b/spaces/Kevin676/AutoGPT/run_continuous.bat deleted file mode 100644 index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/run_continuous.bat +++ /dev/null @@ -1,3 +0,0 @@ -@echo off -set argument=--continuous -call run.bat %argument% diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/trident_resnet.py b/spaces/KyanChen/RSPrompter/mmdet/models/backbones/trident_resnet.py deleted file mode 100644 index 22c76354522ff8533b094df6858ec361ba400c1e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/trident_resnet.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmengine.model import BaseModule -from torch.nn.modules.utils import _pair - -from mmdet.models.backbones.resnet import Bottleneck, ResNet -from mmdet.registry import MODELS - - -class TridentConv(BaseModule): - """Trident Convolution Module. - - Args: - in_channels (int): Number of channels in input. - out_channels (int): Number of channels in output. - kernel_size (int): Size of convolution kernel. - stride (int, optional): Convolution stride. Default: 1. - trident_dilations (tuple[int, int, int], optional): Dilations of - different trident branch. Default: (1, 2, 3). - test_branch_idx (int, optional): In inference, all 3 branches will - be used if `test_branch_idx==-1`, otherwise only branch with - index `test_branch_idx` will be used. Default: 1. - bias (bool, optional): Whether to use bias in convolution or not. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - trident_dilations=(1, 2, 3), - test_branch_idx=1, - bias=False, - init_cfg=None): - super(TridentConv, self).__init__(init_cfg) - self.num_branch = len(trident_dilations) - self.with_bias = bias - self.test_branch_idx = test_branch_idx - self.stride = _pair(stride) - self.kernel_size = _pair(kernel_size) - self.paddings = _pair(trident_dilations) - self.dilations = trident_dilations - self.in_channels = in_channels - self.out_channels = out_channels - self.bias = bias - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.bias = None - - def extra_repr(self): - tmpstr = f'in_channels={self.in_channels}' - tmpstr += f', out_channels={self.out_channels}' - tmpstr += f', kernel_size={self.kernel_size}' - tmpstr += f', num_branch={self.num_branch}' - tmpstr += f', test_branch_idx={self.test_branch_idx}' - tmpstr += f', stride={self.stride}' - tmpstr += f', paddings={self.paddings}' - tmpstr += f', dilations={self.dilations}' - tmpstr += f', bias={self.bias}' - return tmpstr - - def forward(self, inputs): - if self.training or self.test_branch_idx == -1: - outputs = [ - F.conv2d(input, self.weight, self.bias, self.stride, padding, - dilation) for input, dilation, padding in zip( - inputs, self.dilations, self.paddings) - ] - else: - assert len(inputs) == 1 - outputs = [ - F.conv2d(inputs[0], self.weight, self.bias, self.stride, - self.paddings[self.test_branch_idx], - self.dilations[self.test_branch_idx]) - ] - - return outputs - - -# Since TridentNet is defined over ResNet50 and ResNet101, here we -# only support TridentBottleneckBlock. -class TridentBottleneck(Bottleneck): - """BottleBlock for TridentResNet. - - Args: - trident_dilations (tuple[int, int, int]): Dilations of different - trident branch. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - concat_output (bool): Whether to concat the output list to a Tensor. - `True` only in the last Block. 
- """ - - def __init__(self, trident_dilations, test_branch_idx, concat_output, - **kwargs): - - super(TridentBottleneck, self).__init__(**kwargs) - self.trident_dilations = trident_dilations - self.num_branch = len(trident_dilations) - self.concat_output = concat_output - self.test_branch_idx = test_branch_idx - self.conv2 = TridentConv( - self.planes, - self.planes, - kernel_size=3, - stride=self.conv2_stride, - bias=False, - trident_dilations=self.trident_dilations, - test_branch_idx=test_branch_idx, - init_cfg=dict( - type='Kaiming', - distribution='uniform', - mode='fan_in', - override=dict(name='conv2'))) - - def forward(self, x): - - def _inner_forward(x): - num_branch = ( - self.num_branch - if self.training or self.test_branch_idx == -1 else 1) - identity = x - if not isinstance(x, list): - x = (x, ) * num_branch - identity = x - if self.downsample is not None: - identity = [self.downsample(b) for b in x] - - out = [self.conv1(b) for b in x] - out = [self.norm1(b) for b in out] - out = [self.relu(b) for b in out] - - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv1_plugin_names) - - out = self.conv2(out) - out = [self.norm2(b) for b in out] - out = [self.relu(b) for b in out] - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv2_plugin_names) - - out = [self.conv3(b) for b in out] - out = [self.norm3(b) for b in out] - - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv3_plugin_names) - - out = [ - out_b + identity_b for out_b, identity_b in zip(out, identity) - ] - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = [self.relu(b) for b in out] - if self.concat_output: - out = torch.cat(out, dim=0) - return out - - -def make_trident_res_layer(block, - inplanes, - planes, - num_blocks, - stride=1, - trident_dilations=(1, 2, 3), - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - test_branch_idx=-1): - """Build Trident Res Layers.""" - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - for i in range(num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride if i == 0 else 1, - trident_dilations=trident_dilations, - downsample=downsample if i == 0 else None, - style=style, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=plugins, - test_branch_idx=test_branch_idx, - concat_output=True if i == num_blocks - 1 else False)) - inplanes = planes * block.expansion - return nn.Sequential(*layers) - - -@MODELS.register_module() -class TridentResNet(ResNet): - """The stem layer, stage 1 and stage 2 in Trident ResNet are identical to - ResNet, while in stage 3, Trident BottleBlock is utilized to replace the - normal BottleBlock to yield trident output. Different branch shares the - convolution weight but uses different dilations to achieve multi-scale - output. 
- - / stage3(b0) \ - x - stem - stage1 - stage2 - stage3(b1) - output - \ stage3(b2) / - - Args: - depth (int): Depth of resnet, from {50, 101, 152}. - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - trident_dilations (tuple[int]): Dilations of different trident branch. - len(trident_dilations) should be equal to num_branch. - """ # noqa - - def __init__(self, depth, num_branch, test_branch_idx, trident_dilations, - **kwargs): - - assert num_branch == len(trident_dilations) - assert depth in (50, 101, 152) - super(TridentResNet, self).__init__(depth, **kwargs) - assert self.num_stages == 3 - self.test_branch_idx = test_branch_idx - self.num_branch = num_branch - - last_stage_idx = self.num_stages - 1 - stride = self.strides[last_stage_idx] - dilation = trident_dilations - dcn = self.dcn if self.stage_with_dcn[last_stage_idx] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, - last_stage_idx) - else: - stage_plugins = None - planes = self.base_channels * 2**last_stage_idx - res_layer = make_trident_res_layer( - TridentBottleneck, - inplanes=(self.block.expansion * self.base_channels * - 2**(last_stage_idx - 1)), - planes=planes, - num_blocks=self.stage_blocks[last_stage_idx], - stride=stride, - trident_dilations=dilation, - style=self.style, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - plugins=stage_plugins, - test_branch_idx=self.test_branch_idx) - - layer_name = f'layer{last_stage_idx + 1}' - - self.__setattr__(layer_name, res_layer) - self.res_layers.pop(last_stage_idx) - self.res_layers.insert(last_stage_idx, layer_name) - - self._freeze_stages() diff --git a/spaces/L0SG/BigVGAN/app.py b/spaces/L0SG/BigVGAN/app.py deleted file mode 100644 index fb2352dc06ab67c9c387db6f8d496a55bbdbbae3..0000000000000000000000000000000000000000 --- a/spaces/L0SG/BigVGAN/app.py +++ /dev/null @@ -1,319 +0,0 @@ -import gradio as gr -from huggingface_hub import hf_hub_download -import json -import torch -import os -from env import AttrDict -from meldataset import mel_spectrogram, MAX_WAV_VALUE -from models import BigVGAN as Generator -import librosa -import numpy as np -from utils import plot_spectrogram, plot_spectrogram_clipped -import PIL - -torch.backends.cudnn.benchmark = False - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def inference_gradio(input, model_choice): # input is audio waveform in [T, channel] - sr, audio = input # unpack input to sampling rate and audio itself - audio = np.transpose(audio) # transpose to [channel, T] for librosa - audio = audio / MAX_WAV_VALUE # convert int16 to float range used by BigVGAN - - h = list_config[model_choice] - model = list_model[model_choice] - - if sr != h.sampling_rate: # convert audio to model's sampling rate - audio = librosa.resample(audio, sr, h.sampling_rate) - if len(audio.shape) == 2: # stereo - audio = librosa.to_mono(audio) # convert to mono if stereo - audio = librosa.util.normalize(audio) * 0.95 - output, spec_gen = inference_model(audio, h, model) # output is generated audio in ndarray - - spec_plot_gen = plot_spectrogram(spec_gen.numpy()) - - output_video = gr.make_waveform((h.sampling_rate, output)) - 
output_image_gen = PIL.Image.frombytes('RGB', - spec_plot_gen.canvas.get_width_height(), - spec_plot_gen.canvas.tostring_rgb()) - - return output_video, output_image_gen - - -def inference_model(audio_input, h, model): - def get_mel(x): - return mel_spectrogram(x, h.n_fft, h.num_mels, h.sampling_rate, h.hop_size, h.win_size, h.fmin, h.fmax) - - with torch.no_grad(): - wav = torch.FloatTensor(audio_input).to(device) - # compute mel spectrogram from the ground truth audio - spec_gt = get_mel(wav.unsqueeze(0)) - - y_g_hat = model(spec_gt) - - audio_gen = y_g_hat.squeeze() - spec_gen = get_mel(audio_gen.unsqueeze(0)) - audio_gen = audio_gen * MAX_WAV_VALUE - audio_gen = audio_gen.cpu().numpy().astype('int16') - - return audio_gen, spec_gen[0].cpu() - - -css = """ - a { - color: inherit; - text-decoration: underline; - } - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: #000000; - background: #000000; - } - input[type='range'] { - accent-color: #000000; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - #container-advanced-btns{ - display: flex; - flex-wrap: wrap; - justify-content: space-between; - align-items: center; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - 
#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #generated_id{ - min-height: 700px - } - #setting_id{ - margin-bottom: 12px; - text-align: center; - font-weight: 900; - } -""" - -######################## script for loading the models ######################## - -model_path = "L0SG/BigVGAN" -list_model_name = ["bigvgan_24khz_100band", - "bigvgan_base_24khz_100band", - "bigvgan_22khz_80band", - "bigvgan_base_22khz_80band"] - -list_model = [] -list_config = [] - -for model_name in list_model_name: - model_file = hf_hub_download(model_path, "{}/g_05000000".format(model_name), - use_auth_token="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") - config_file = hf_hub_download(model_path, "{}/config.json".format(model_name), - use_auth_token="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") - with open(config_file) as f: - data = f.read() - - json_config = json.loads(data) - h = AttrDict(json_config) - - torch.manual_seed(h.seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed(h.seed) - device = torch.device('cuda') - else: - device = torch.device('cpu') - - generator = Generator(h).to(device) - state_dict_g = load_checkpoint(model_file, device) - generator.load_state_dict(state_dict_g['generator']) - generator.eval() - generator.remove_weight_norm() - - list_model.append(generator) - list_config.append(h) - -######################## script for gradio UI ######################## - -iface = gr.Blocks(css=css) - -with iface: - gr.HTML( - """ -
    BigVGAN: A Universal Neural Vocoder with Large-Scale Training
    [Paper] [Code] [Demo] [Project page]
    - - """ - ) - - gr.HTML( - """ - -
    Select the model and submit the audio waveform. BigVGAN generates audio waveform using the mel spectrogram of the input.
    • bigvgan_24khz_100band: 112M / 24kHz / 100-band mel spectrogram / fmax=12000
    • bigvgan_base_24khz_100band: 14M / 24kHz / 100-band mel spectrogram / fmax=12000
    • bigvgan_22khz_80band: 112M / 22.05kHz / 80-band mel spectrogram / fmax=8000
    • bigvgan_base_22khz_80band: 14M / 22.05kHz / 80-band mel spectrogram / fmax=8000
    NOTE: All models are trained using speech audio datasets ONLY! (24kHz models: LibriTTS, 22kHz models: LibriTTS + VCTK + LJSpeech).
    - - """) - - with gr.Group(): - with gr.Box(): - model_choice = gr.Radio(label="Select the model. Default: bigvgan_24khz_100band", - value="bigvgan_24khz_100band", - choices=[m for m in list_model_name], - type="index", - interactive=True) - audio_input = gr.Audio(label="Input Audio", - elem_id="input-audio", - interactive=True) - button = gr.Button("Submit").style(full_width=True) - output_video = gr.Video(label="Output Audio", - elem_id="output-video") - output_image_gen = gr.Image(label="Output Mel Spectrogram", - elem_id="output-image-gen") - button.click(inference_gradio, - inputs=[audio_input, model_choice], - outputs=[output_video, output_image_gen]) - - gr.Examples( - [ - [os.path.join(os.path.dirname(__file__), "examples/jensen.wav"), "bigvgan_24khz_100band"], - [os.path.join(os.path.dirname(__file__), "examples/libritts.wav"), "bigvgan_24khz_100band"], - [os.path.join(os.path.dirname(__file__), "examples/queen.wav"), "bigvgan_24khz_100band"], - [os.path.join(os.path.dirname(__file__), "examples/dance.wav"), "bigvgan_24khz_100band"], - [os.path.join(os.path.dirname(__file__), "examples/megalovania.wav"), "bigvgan_24khz_100band"], - ], - fn=inference_gradio, - inputs=[audio_input, model_choice], - outputs=[output_video, output_image_gen], - cache_examples=True - ) - -iface.queue(concurrency_count=3) -iface.launch() diff --git a/spaces/LUCKky/QQsign/Dockerfile b/spaces/LUCKky/QQsign/Dockerfile deleted file mode 100644 index 535624113f3b520e4829240a48bd3652430de828..0000000000000000000000000000000000000000 --- a/spaces/LUCKky/QQsign/Dockerfile +++ /dev/null @@ -1,23 +0,0 @@ -FROM openjdk:17-slim - -# 设置时区 -ENV TZ Asia/Shanghai - -# 设置工作目录 -WORKDIR /app - -# 复制文件到工作目录 -COPY bin /app/bin -COPY lib /app/lib -COPY txlib /app/txlib - -# 设置命令 -RUN chmod -R 777 /tmp -RUN chmod -R 777 /app -RUN sed 's/"key": ".*"/"key": "'"$KEY_VALUE"'"/' txlib/$TXLIB_VERSION/config.json > /app/txlib/$TXLIB_VERSION/config.json - -# 运行 -CMD bash bin/unidbg-fetch-qsign --basePath=txlib/$TXLIB_VERSION - -# 暴露端口 -EXPOSE 7860 \ No newline at end of file diff --git a/spaces/LaynzKunz/REMAKE-AI-COVER/README.md b/spaces/LaynzKunz/REMAKE-AI-COVER/README.md deleted file mode 100644 index 7f2b5687c9b2337af9abc6c98f27e1b63e4487b8..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/REMAKE-AI-COVER/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: REMAKE AI COVER -emoji: 🚀 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: true -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/directionalmove.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/directionalmove.py deleted file mode 100644 index 21b1f0a6c8910d05680f4a9189cdb46a699057ac..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/directionalmove.py +++ /dev/null @@ -1,383 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. 
-# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from . import Indicator, And, If, MovAv, ATR - - -class UpMove(Indicator): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"* as part of the Directional Move System to - calculate Directional Indicators. - - Positive if the given data has moved higher than the previous day - - Formula: - - upmove = data - data(-1) - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - lines = ('upmove',) - - def __init__(self): - self.lines.upmove = self.data - self.data(-1) - super(UpMove, self).__init__() - - -class DownMove(Indicator): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"* as part of the Directional Move System to - calculate Directional Indicators. - - Positive if the given data has moved lower than the previous day - - Formula: - - downmove = data(-1) - data - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - lines = ('downmove',) - - def __init__(self): - self.lines.downmove = self.data(-1) - self.data - super(DownMove, self).__init__() - - -class _DirectionalIndicator(Indicator): - ''' - This class serves as the root base class for all "Directional Movement - System" related indicators, given that the calculations are first common - and then derived from the common calculations. - - It can calculate the +DI and -DI values (using kwargs as the hint as to - what to calculate) but doesn't assign them to lines. This is left for - sublcases of this class. - ''' - params = (('period', 14), ('movav', MovAv.Smoothed)) - - plotlines = dict(plusDI=dict(_name='+DI'), minusDI=dict(_name='-DI')) - - def _plotlabel(self): - plabels = [self.p.period] - plabels += [self.p.movav] * self.p.notdefault('movav') - return plabels - - def __init__(self, _plus=True, _minus=True): - atr = ATR(self.data, period=self.p.period, movav=self.p.movav) - - upmove = self.data.high - self.data.high(-1) - downmove = self.data.low(-1) - self.data.low - - if _plus: - plus = And(upmove > downmove, upmove > 0.0) - plusDM = If(plus, upmove, 0.0) - plusDMav = self.p.movav(plusDM, period=self.p.period) - - self.DIplus = 100.0 * plusDMav / atr - - if _minus: - minus = And(downmove > upmove, downmove > 0.0) - minusDM = If(minus, downmove, 0.0) - minusDMav = self.p.movav(minusDM, period=self.p.period) - - self.DIminus = 100.0 * minusDMav / atr - - super(_DirectionalIndicator, self).__init__() - - -class DirectionalIndicator(_DirectionalIndicator): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"*. 
- - Intended to measure trend strength - - This indicator shows +DI, -DI: - - Use PlusDirectionalIndicator (PlusDI) to get +DI - - Use MinusDirectionalIndicator (MinusDI) to get -DI - - Use AverageDirectionalIndex (ADX) to get ADX - - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR - - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI - - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI - - Formula: - - upmove = high - high(-1) - - downmove = low(-1) - low - - +dm = upmove if upmove > downmove and upmove > 0 else 0 - - -dm = downmove if downmove > upmove and downmove > 0 else 0 - - +di = 100 * MovingAverage(+dm, period) / atr(period) - - -di = 100 * MovingAverage(-dm, period) / atr(period) - - The moving average used is the one originally defined by Wilder, - the SmoothedMovingAverage - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - alias = ('DI',) - lines = ('plusDI', 'minusDI',) - - def __init__(self): - super(DirectionalIndicator, self).__init__() - - self.lines.plusDI = self.DIplus - self.lines.minusDI = self.DIminus - - -class PlusDirectionalIndicator(_DirectionalIndicator): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"*. - - Intended to measure trend strength - - This indicator shows +DI: - - Use MinusDirectionalIndicator (MinusDI) to get -DI - - Use Directional Indicator (DI) to get +DI, -DI - - Use AverageDirectionalIndex (ADX) to get ADX - - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR - - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI - - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI - - Formula: - - upmove = high - high(-1) - - downmove = low(-1) - low - - +dm = upmove if upmove > downmove and upmove > 0 else 0 - - +di = 100 * MovingAverage(+dm, period) / atr(period) - - The moving average used is the one originally defined by Wilder, - the SmoothedMovingAverage - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - alias = (('PlusDI', '+DI'),) - lines = ('plusDI',) - - plotinfo = dict(plotname='+DirectionalIndicator') - - def __init__(self): - super(PlusDirectionalIndicator, self).__init__(_minus=False) - - self.lines.plusDI = self.DIplus - - -class MinusDirectionalIndicator(_DirectionalIndicator): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"*. - - Intended to measure trend strength - - This indicator shows -DI: - - Use PlusDirectionalIndicator (PlusDI) to get +DI - - Use Directional Indicator (DI) to get +DI, -DI - - Use AverageDirectionalIndex (ADX) to get ADX - - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR - - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI - - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI - - Formula: - - upmove = high - high(-1) - - downmove = low(-1) - low - - -dm = downmove if downmove > upmove and downmove > 0 else 0 - - -di = 100 * MovingAverage(-dm, period) / atr(period) - - The moving average used is the one originally defined by Wilder, - the SmoothedMovingAverage - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - alias = (('MinusDI', '-DI'),) - lines = ('minusDI',) - - plotinfo = dict(plotname='-DirectionalIndicator') - - def __init__(self): - super(MinusDirectionalIndicator, self).__init__(_plus=False) - - self.lines.minusDI = self.DIminus - - -class AverageDirectionalMovementIndex(_DirectionalIndicator): - ''' - Defined by J. 
Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"*. - - Intended to measure trend strength - - This indicator only shows ADX: - - Use PlusDirectionalIndicator (PlusDI) to get +DI - - Use MinusDirectionalIndicator (MinusDI) to get -DI - - Use Directional Indicator (DI) to get +DI, -DI - - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR - - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI - - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI - - Formula: - - upmove = high - high(-1) - - downmove = low(-1) - low - - +dm = upmove if upmove > downmove and upmove > 0 else 0 - - -dm = downmove if downmove > upmove and downmove > 0 else 0 - - +di = 100 * MovingAverage(+dm, period) / atr(period) - - -di = 100 * MovingAverage(-dm, period) / atr(period) - - dx = 100 * abs(+di - -di) / (+di + -di) - - adx = MovingAverage(dx, period) - - The moving average used is the one originally defined by Wilder, - the SmoothedMovingAverage - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - alias = ('ADX',) - - lines = ('adx',) - - plotlines = dict(adx=dict(_name='ADX')) - - def __init__(self): - super(AverageDirectionalMovementIndex, self).__init__() - - dx = abs(self.DIplus - self.DIminus) / (self.DIplus + self.DIminus) - self.lines.adx = 100.0 * self.p.movav(dx, period=self.p.period) - - -class AverageDirectionalMovementIndexRating(AverageDirectionalMovementIndex): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"*. - - Intended to measure trend strength. - - ADXR is the average of ADX with a value period bars ago - - This indicator shows the ADX and ADXR: - - Use PlusDirectionalIndicator (PlusDI) to get +DI - - Use MinusDirectionalIndicator (MinusDI) to get -DI - - Use Directional Indicator (DI) to get +DI, -DI - - Use AverageDirectionalIndex (ADX) to get ADX - - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI - - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI - - Formula: - - upmove = high - high(-1) - - downmove = low(-1) - low - - +dm = upmove if upmove > downmove and upmove > 0 else 0 - - -dm = downmove if downmove > upmove and downmove > 0 else 0 - - +di = 100 * MovingAverage(+dm, period) / atr(period) - - -di = 100 * MovingAverage(-dm, period) / atr(period) - - dx = 100 * abs(+di - -di) / (+di + -di) - - adx = MovingAverage(dx, period) - - adxr = (adx + adx(-period)) / 2 - - The moving average used is the one originally defined by Wilder, - the SmoothedMovingAverage - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - alias = ('ADXR',) - - lines = ('adxr',) - plotlines = dict(adxr=dict(_name='ADXR')) - - def __init__(self): - super(AverageDirectionalMovementIndexRating, self).__init__() - - self.lines.adxr = (self.l.adx + self.l.adx(-self.p.period)) / 2.0 - - -class DirectionalMovementIndex(AverageDirectionalMovementIndex, - DirectionalIndicator): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"*. 
- - Intended to measure trend strength - - This indicator shows the ADX, +DI, -DI: - - Use PlusDirectionalIndicator (PlusDI) to get +DI - - Use MinusDirectionalIndicator (MinusDI) to get -DI - - Use Directional Indicator (DI) to get +DI, -DI - - Use AverageDirectionalIndex (ADX) to get ADX - - Use AverageDirectionalIndexRating (ADXRating) to get ADX, ADXR - - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI - - Formula: - - upmove = high - high(-1) - - downmove = low(-1) - low - - +dm = upmove if upmove > downmove and upmove > 0 else 0 - - -dm = downmove if downmove > upmove and downmove > 0 else 0 - - +di = 100 * MovingAverage(+dm, period) / atr(period) - - -di = 100 * MovingAverage(-dm, period) / atr(period) - - dx = 100 * abs(+di - -di) / (+di + -di) - - adx = MovingAverage(dx, period) - - The moving average used is the one originally defined by Wilder, - the SmoothedMovingAverage - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - alias = ('DMI',) - - -class DirectionalMovement(AverageDirectionalMovementIndexRating, - DirectionalIndicator): - ''' - Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in - Technical Trading Systems"*. - - Intended to measure trend strength - - This indicator shows ADX, ADXR, +DI, -DI. - - - Use PlusDirectionalIndicator (PlusDI) to get +DI - - Use MinusDirectionalIndicator (MinusDI) to get -DI - - Use Directional Indicator (DI) to get +DI, -DI - - Use AverageDirectionalIndex (ADX) to get ADX - - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR - - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI - - Formula: - - upmove = high - high(-1) - - downmove = low(-1) - low - - +dm = upmove if upmove > downmove and upmove > 0 else 0 - - -dm = downmove if downmove > upmove and downmove > 0 else 0 - - +di = 100 * MovingAverage(+dm, period) / atr(period) - - -di = 100 * MovingAverage(-dm, period) / atr(period) - - dx = 100 * abs(+di - -di) / (+di + -di) - - adx = MovingAverage(dx, period) - - The moving average used is the one originally defined by Wilder, - the SmoothedMovingAverage - - See: - - https://en.wikipedia.org/wiki/Average_directional_movement_index - ''' - alias = ('DM',) diff --git a/spaces/Lilflerkin/WellNexus/app.py b/spaces/Lilflerkin/WellNexus/app.py deleted file mode 100644 index a9a23043e252575a632d6a4f11738b4a3853d8a7..0000000000000000000000000000000000000000 --- a/spaces/Lilflerkin/WellNexus/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import gradio as gr -import pandas as pd -import numpy as np -from joblib import load - - -def predict_disease_from_symptom(symptom_list): - symptoms = {'itching': 0, 'skin_rash': 0, 'nodal_skin_eruptions': 0, 'continuous_sneezing': 0, - 'shivering': 0, 'chills': 0, 'joint_pain': 0, 'stomach_pain': 0, 'acidity': 0, 'ulcers_on_tongue': 0, - 'muscle_wasting': 0, 'vomiting': 0, 'burning_micturition': 0, 'spotting_ urination': 0, 'fatigue': 0, - 'weight_gain': 0, 'anxiety': 0, 'cold_hands_and_feets': 0, 'mood_swings': 0, 'weight_loss': 0, - 'restlessness': 0, 'lethargy': 0, 'patches_in_throat': 0, 'irregular_sugar_level': 0, 'cough': 0, - 'high_fever': 0, 'sunken_eyes': 0, 'breathlessness': 0, 'sweating': 0, 'dehydration': 0, - 'indigestion': 0, 'headache': 0, 'yellowish_skin': 0, 'dark_urine': 0, 'nausea': 0, 'loss_of_appetite': 0, - 'pain_behind_the_eyes': 0, 'back_pain': 0, 'constipation': 0, 'abdominal_pain': 0, 'diarrhoea': 0, 'mild_fever': 0, - 'yellow_urine': 0, 'yellowing_of_eyes': 0, 'acute_liver_failure': 0, 'fluid_overload': 0, 
'swelling_of_stomach': 0, - 'swelled_lymph_nodes': 0, 'malaise': 0, 'blurred_and_distorted_vision': 0, 'phlegm': 0, 'throat_irritation': 0, - 'redness_of_eyes': 0, 'sinus_pressure': 0, 'runny_nose': 0, 'congestion': 0, 'chest_pain': 0, 'weakness_in_limbs': 0, - 'fast_heart_rate': 0, 'pain_during_bowel_movements': 0, 'pain_in_anal_region': 0, 'bloody_stool': 0, - 'irritation_in_anus': 0, 'neck_pain': 0, 'dizziness': 0, 'cramps': 0, 'bruising': 0, 'obesity': 0, 'swollen_legs': 0, - 'swollen_blood_vessels': 0, 'puffy_face_and_eyes': 0, 'enlarged_thyroid': 0, 'brittle_nails': 0, 'swollen_extremeties': 0, - 'excessive_hunger': 0, 'extra_marital_contacts': 0, 'drying_and_tingling_lips': 0, 'slurred_speech': 0, - 'knee_pain': 0, 'hip_joint_pain': 0, 'muscle_weakness': 0, 'stiff_neck': 0, 'swelling_joints': 0, 'movement_stiffness': 0, - 'spinning_movements': 0, 'loss_of_balance': 0, 'unsteadiness': 0, 'weakness_of_one_body_side': 0, 'loss_of_smell': 0, - 'bladder_discomfort': 0, 'foul_smell_of urine': 0, 'continuous_feel_of_urine': 0, 'passage_of_gases': 0, 'internal_itching': 0, - 'toxic_look_(typhos)': 0, 'depression': 0, 'irritability': 0, 'muscle_pain': 0, 'altered_sensorium': 0, - 'red_spots_over_body': 0, 'belly_pain': 0, 'abnormal_menstruation': 0, 'dischromic _patches': 0, 'watering_from_eyes': 0, - 'increased_appetite': 0, 'polyuria': 0, 'family_history': 0, 'mucoid_sputum': 0, 'rusty_sputum': 0, 'lack_of_concentration': 0, - 'visual_disturbances': 0, 'receiving_blood_transfusion': 0, 'receiving_unsterile_injections': 0, 'coma': 0, - 'stomach_bleeding': 0, 'distention_of_abdomen': 0, 'history_of_alcohol_consumption': 0, 'fluid_overload.1': 0, - 'blood_in_sputum': 0, 'prominent_veins_on_calf': 0, 'palpitations': 0, 'painful_walking': 0, 'pus_filled_pimples': 0, - 'blackheads': 0, 'scurring': 0, 'skin_peeling': 0, 'silver_like_dusting': 0, 'small_dents_in_nails': 0, 'inflammatory_nails': 0, - 'blister': 0, 'red_sore_around_nose': 0, 'yellow_crust_ooze': 0} - - for s in symptom_list: - symptoms[s] = 1 - - - df_test = pd.DataFrame(columns=list(symptoms.keys())) - df_test.loc[0] = np.array(list(symptoms.values())) - - - clf = load(str("./saved_model/random_forest.joblib")) - result = clf.predict(df_test) - - - del df_test - - return f"{result[0]}" - - -iface = gr.Interface( - predict_disease_from_symptom, - [ - gr.inputs.CheckboxGroup(['itching', 'skin_rash', 'nodal_skin_eruptions', 'continuous_sneezing', 'shivering', 'chills', 'joint_pain', 'stomach_pain', 'acidity', 'ulcers_on_tongue', - 'muscle_wasting', 'vomiting', 'burning_micturition', 'spotting_ urination', 'fatigue', 'weight_gain', 'anxiety', 'cold_hands_and_feets', 'mood_swings', 'weight_loss', - 'restlessness', 'lethargy', 'patches_in_throat', 'irregular_sugar_level', 'cough', 'high_fever', 'sunken_eyes', 'breathlessness', 'sweating', 'dehydration', - 'indigestion', 'headache', 'yellowish_skin', 'dark_urine', 'nausea', 'loss_of_appetite', 'pain_behind_the_eyes', 'back_pain', 'constipation', 'abdominal_pain', 'diarrhoea', 'mild_fever', - 'yellow_urine', 'yellowing_of_eyes', 'acute_liver_failure', 'fluid_overload', 'swelling_of_stomach', 'swelled_lymph_nodes', 'malaise', 'blurred_and_distorted_vision', 'phlegm', 'throat_irritation', - 'redness_of_eyes', 'sinus_pressure', 'runny_nose', 'congestion', 'chest_pain', 'weakness_in_limbs', 'fast_heart_rate', 'pain_during_bowel_movements', 'pain_in_anal_region', 'bloody_stool', - 'irritation_in_anus', 'neck_pain', 'dizziness', 'cramps', 'bruising', 'obesity', 'swollen_legs', 
'swollen_blood_vessels', 'puffy_face_and_eyes', 'enlarged_thyroid', 'brittle_nails', 'swollen_extremeties', - 'excessive_hunger', 'extra_marital_contacts', 'drying_and_tingling_lips', 'slurred_speech', 'knee_pain', 'hip_joint_pain', 'muscle_weakness', 'stiff_neck', 'swelling_joints', 'movement_stiffness', - 'spinning_movements', 'loss_of_balance', 'unsteadiness', 'weakness_of_one_body_side', 'loss_of_smell', 'bladder_discomfort', 'foul_smell_of urine', 'continuous_feel_of_urine', 'passage_of_gases', 'internal_itching', - 'toxic_look_(typhos)', 'depression', 'irritability', 'muscle_pain', 'altered_sensorium', 'red_spots_over_body', 'belly_pain', 'abnormal_menstruation', 'dischromic _patches', 'watering_from_eyes', - 'increased_appetite', 'polyuria', 'family_history', 'mucoid_sputum', 'rusty_sputum', 'lack_of_concentration', 'visual_disturbances', 'receiving_blood_transfusion', 'receiving_unsterile_injections', 'coma', - 'stomach_bleeding', 'distention_of_abdomen', 'history_of_alcohol_consumption', 'fluid_overload.1', 'blood_in_sputum', 'prominent_veins_on_calf', 'palpitations', 'painful_walking', 'pus_filled_pimples', - 'blackheads', 'scurring', 'skin_peeling', 'silver_like_dusting', 'small_dents_in_nails', 'inflammatory_nails', 'blister', 'red_sore_around_nose', 'yellow_crust_ooze']), - ], - "text", - description="Select a symptom from the list and click submit to get predicted Disease as the Output." -) - -iface.launch(inline=False) \ No newline at end of file diff --git a/spaces/ML701G7/taim-gan/src/test_project/example.py b/spaces/ML701G7/taim-gan/src/test_project/example.py deleted file mode 100644 index 073fbb6ff1f6d54a671927d7e61d93f6e0ba7417..0000000000000000000000000000000000000000 --- a/spaces/ML701G7/taim-gan/src/test_project/example.py +++ /dev/null @@ -1,18 +0,0 @@ -"""doing some stuff here""" - - -class Foo: - """sample text""" - - def __init__(self, first_var: int, second_var: int) -> None: - """init the bar""" - self.first = first_var - self.second = second_var - - def get_bar(self) -> int: - """return bar""" - return self.first - - def get_foo(self) -> int: - """return bar""" - return self.second diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/eval.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/eval.py deleted file mode 100644 index 2f1310a62b91392ba4aa205b21e916be894d3bdc..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/eval.py +++ /dev/null @@ -1,368 +0,0 @@ -import argparse -import datetime -import logging -import inspect -import math -import os -from typing import Dict, Optional, Tuple -from omegaconf import OmegaConf - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import numpy as np -from PIL import Image - -import diffusers -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, DDIMScheduler, PNDMScheduler, ControlNetModel, PriorTransformer, UnCLIPScheduler -from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version -from diffusers.utils.import_utils import is_xformers_available -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection, 
CLIPTextModelWithProjection - -from makeaprotagonist.models.unet import UNet3DConditionModel -from makeaprotagonist.dataset.dataset import MakeAProtagonistDataset -from makeaprotagonist.pipelines.pipeline_stable_unclip_controlavideo import MakeAProtagonistStableUnCLIPPipeline, MultiControlNetModel -from makeaprotagonist.util import save_videos_grid, ddim_inversion_unclip, ddim_inversion_prior -from einops import rearrange -from makeaprotagonist.args_util import DictAction, config_merge_dict -import ipdb -import random -from glob import glob -import sys - - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.15.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def main( - pretrained_model_path: str, - controlnet_pretrained_model_path: str, - output_dir: str, - train_data: Dict, - validation_data: Dict, - validation_steps: int = 100, - trainable_modules: Tuple[str] = ( - "attn1.to_q", - "attn2.to_q", - "attn_temp", - ), - trainable_params: Tuple[str] = (), - train_batch_size: int = 1, - max_train_steps: int = 500, - learning_rate: float = 3e-5, - scale_lr: bool = False, - lr_scheduler: str = "constant", - lr_warmup_steps: int = 0, - adam_beta1: float = 0.9, - adam_beta2: float = 0.999, - adam_weight_decay: float = 1e-2, - adam_epsilon: float = 1e-08, - max_grad_norm: float = 1.0, - gradient_accumulation_steps: int = 1, - gradient_checkpointing: bool = True, - checkpointing_steps: int = 500, - resume_from_checkpoint: Optional[str] = None, - mixed_precision: Optional[str] = "fp16", - use_8bit_adam: bool = False, - enable_xformers_memory_efficient_attention: bool = True, - seed: Optional[int] = None, - adapter_config=None, # the config for adapter - use_temporal_conv=False, ## use temporal conv in resblocks -): - *_, config = inspect.getargvalues(inspect.currentframe()) - - accelerator = Accelerator( - gradient_accumulation_steps=gradient_accumulation_steps, - mixed_precision=mixed_precision, - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. 
- if seed is not None: - set_seed(seed) - - # Handle the output folder creation - if accelerator.is_main_process: - # now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - # output_dir = os.path.join(output_dir, now) - os.makedirs(output_dir, exist_ok=True) - os.makedirs(f"{output_dir}/samples", exist_ok=True) - os.makedirs(f"{output_dir}/inv_latents", exist_ok=True) - OmegaConf.save(config, os.path.join(output_dir, 'config.yaml')) - - prior_model_id = "kakaobrain/karlo-v1-alpha" - data_type = torch.float16 - prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) - - prior_text_model_id = "openai/clip-vit-large-patch14" - prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) - prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) - prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") - prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) - - - # image encoding components - feature_extractor = CLIPImageProcessor.from_pretrained(pretrained_model_path, subfolder="feature_extractor") - image_encoder = CLIPVisionModelWithProjection.from_pretrained(pretrained_model_path, subfolder="image_encoder") - # image noising components - image_normalizer = StableUnCLIPImageNormalizer.from_pretrained(pretrained_model_path, subfolder="image_normalizer") - image_noising_scheduler = DDPMScheduler.from_pretrained(pretrained_model_path, subfolder="image_noising_scheduler") - # regular denoising components - tokenizer = CLIPTokenizer.from_pretrained(pretrained_model_path, subfolder="tokenizer") - text_encoder = CLIPTextModel.from_pretrained(pretrained_model_path, subfolder="text_encoder") - unet = UNet3DConditionModel.from_pretrained_2d(pretrained_model_path, subfolder="unet", use_temporal_conv=use_temporal_conv) - - - # vae - vae = AutoencoderKL.from_pretrained(pretrained_model_path, subfolder="vae") - ## controlnet - assert not isinstance(controlnet_pretrained_model_path, str) - controlnet = MultiControlNetModel( [ControlNetModel.from_pretrained(_control_model_path) for _control_model_path in controlnet_pretrained_model_path] ) - - # Freeze vae and text_encoder and adapter - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - - ## freeze image embed - image_encoder.requires_grad_(False) - - unet.requires_grad_(False) - ## freeze controlnet - controlnet.requires_grad_(False) - - ## freeze prior - prior.requires_grad_(False) - prior_text_model.requires_grad_(False) - - - if enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - controlnet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. 
Make sure it is installed correctly") - - if gradient_checkpointing: - unet.enable_gradient_checkpointing() - - if scale_lr: - learning_rate = ( - learning_rate * gradient_accumulation_steps * train_batch_size * accelerator.num_processes - ) - - # Get the training dataset - train_dataset = MakeAProtagonistDataset(**train_data) - - # Preprocessing the dataset - train_dataset.prompt_ids = tokenizer( - train_dataset.prompt, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids[0] - - train_dataset.preprocess_img_embedding(feature_extractor, image_encoder) - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=train_batch_size, num_workers=0, - ) - - prior_val_scheduler = DDIMScheduler.from_config(prior_scheduler.config) if validation_data.get("prior_val_scheduler", "") == "DDIM" else prior_scheduler - # ipdb.set_trace() - validation_pipeline = MakeAProtagonistStableUnCLIPPipeline( - prior_tokenizer=prior_tokenizer, - prior_text_encoder=prior_text_model, - prior=prior, - prior_scheduler=prior_val_scheduler, - feature_extractor=feature_extractor, - image_encoder=image_encoder, - image_normalizer=image_normalizer, - image_noising_scheduler=image_noising_scheduler, - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - controlnet=controlnet, - scheduler=DDIMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler") - ) - - - validation_pipeline.enable_vae_slicing() - ddim_inv_scheduler = DDIMScheduler.from_pretrained(pretrained_model_path, subfolder='scheduler') - ddim_inv_scheduler.set_timesteps(validation_data.num_inv_steps) - - ddim_inv_prior_scheduler = None - if validation_data.get("use_prior_inv_latent", False): - ddim_inv_prior_scheduler = DDIMScheduler.from_config(prior_scheduler.config) - ddim_inv_prior_scheduler.set_timesteps(validation_data.prior_num_inv_steps) - - unet, train_dataloader = accelerator.prepare( - unet, train_dataloader - ) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu and cast to weight_dtype - text_encoder.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - image_encoder.to(accelerator.device, dtype=weight_dtype) - ## note controlnet use the unet dtype - controlnet.to(accelerator.device, dtype=weight_dtype) - ## prior - prior.to(accelerator.device, dtype=weight_dtype) - prior_text_model.to(accelerator.device, dtype=weight_dtype) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. 
- if accelerator.is_main_process: - accelerator.init_trackers("text2video-fine-tune") - - global_step = 0 - # Potentially load in the weights and states from a previous save - if resume_from_checkpoint: - ## resume_from_checkpoint is the path to the checkpoint-300 dir - accelerator.load_state(resume_from_checkpoint) - path = os.path.basename(resume_from_checkpoint) - global_step = int(path.split("-")[1]) - - - if not "noise_level" in validation_data: - validation_data.noise_level = train_data.noise_level - if not "noise_level_inv" in validation_data: - validation_data.noise_level_inv = validation_data.noise_level - # Checks if the accelerator has performed an optimization step behind the scenes - - if accelerator.is_main_process: - - batch = next(iter(train_dataloader)) - - # ipdb.set_trace() - pixel_values = batch["pixel_values"].to(weight_dtype) - video_length = pixel_values.shape[1] - pixel_values = rearrange(pixel_values, "b f c h w -> (b f) c h w") - latents = vae.encode(pixel_values).latent_dist.sample() - latents = rearrange(latents, "(b f) c h w -> b c f h w", f=video_length) - latents = latents * vae.config.scaling_factor - - - # ControlNet - # ipdb.set_trace() - conditions = [_condition.to(weight_dtype) for _, _condition in batch["conditions"].items()] # b f c h w - masks = batch["masks"].to(weight_dtype) # b,f,1,h,w - # ipdb.set_trace() - if not validation_data.get("use_masks", False): - masks = torch.ones_like(masks) - # conditions = rearrange(conditions, "b f c h w -> (b f) c h w") ## here is rgb - ## NOTE in this pretrained model, the config is also rgb - ## https://huggingface.co/thibaud/controlnet-sd21-openpose-diffusers/blob/main/config.json - - # ipdb.set_trace() - ddim_inv_latent = None - if validation_data.use_inv_latent: # - emb_dim = train_dataset.img_embeddings[0].size(0) - key_frame_embed = torch.zeros((1, emb_dim)).to(device=latents.device, dtype=latents.dtype) ## this is dim 0 - ddim_inv_latent = ddim_inversion_unclip( - validation_pipeline, ddim_inv_scheduler, video_latent=latents, - num_inv_steps=validation_data.num_inv_steps, prompt="", image_embed=key_frame_embed, noise_level=validation_data.noise_level, seed=seed)[-1].to(weight_dtype) - - set_noise = validation_data.pop("noise_level") - v_noise = set_noise - - if not validation_data.get("interpolate_embed_weight", False): - validation_data.interpolate_embed_weight = 0 - - - samples = [] - - generator = torch.Generator(device=accelerator.device) - generator.manual_seed(seed) - - for idx, prompt in enumerate(validation_data.prompts): - - _ref_image = Image.open(validation_data.ref_images[idx]) - image_embed = None - ## prior latents - prior_embeds = None - prior_denoised_embeds = None - if validation_data.get("source_background", False): - ## using source background and changing the protagonist - prior_denoised_embeds = train_dataset.img_embeddings[0][None].to(device=latents.device, dtype=latents.dtype) # 1, 768 for UnCLIP-small - - if validation_data.get("source_protagonist", False): - # using source protagonist and changing the background - sample_indices = batch["sample_indices"][0] - image_embed = [train_dataset.img_embeddings[idx] for idx in sample_indices] - image_embed = torch.stack(image_embed, dim=0).to(device=latents.device, dtype=latents.dtype) # F, 768 for UnCLIP-small # F,C - _ref_image = None - - sample = validation_pipeline(image=_ref_image, prompt=prompt, control_image=conditions, generator=generator, latents=ddim_inv_latent, image_embeds=image_embed, noise_level=v_noise, masks=masks, 
prior_latents=prior_embeds, prior_denoised_embeds=prior_denoised_embeds, **validation_data).videos - - save_videos_grid(sample, f"{output_dir}/samples/sample-{global_step}-seed{seed}/{idx}-{prompt}.gif") - samples.append(sample) - - # - samples = [sample.float() for sample in samples] - samples = torch.concat(samples) - save_path = f"{output_dir}/samples/sample-{global_step}-s{validation_data.start_step}-e{validation_data.end_step}-seed{seed}.gif" # noise level and noise level for inv - save_videos_grid(samples, save_path, n_rows=len(samples)) - logger.info(f"Saved samples to {save_path}") - - - - accelerator.end_training() - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--config", type=str, default="./configs/tuneavideo.yaml") - parser.add_argument( - '--options', - nargs='+', - action=DictAction, ##NOTE cannot support multi-level config change - help="--options is deprecated in favor of --cfg_options' and it will " - 'not be supported in version v0.22.0. Override some settings in the ' - 'used config, the key-value pair in xxx=yyy format will be merged ' - 'into config file. If the value to be overwritten is a list, it ' - 'should be like key="[a,b]" or key=a,b It also allows nested ' - 'list/tuple values, e.g. key="[(a,b),(c,d)]" Note that the quotation ' - 'marks are necessary and that no white space is allowed.') - - args = parser.parse_args() - - ## read from cmd line - # ipdb.set_trace() - # Load the YAML configuration file - config = OmegaConf.load(args.config) - # Merge the command-line arguments with the configuration file - if args.options is not None: - # config = OmegaConf.merge(config, args.options) - config_merge_dict(args.options, config) - - main(**config) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/__init__.py deleted file mode 100644 index 04b8b8618cd33efabdaec69328de2f5a8a58d2f9..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/__init__.py +++ /dev/null @@ -1,95 +0,0 @@ -from .base import BasePredictor -from .brs import InputBRSPredictor, FeatureBRSPredictor, HRNetFeatureBRSPredictor -from .brs_functors import InputOptimizer, ScaleBiasOptimizer -from ..transforms import ZoomIn -from ...model.is_hrnet_model import DistMapsHRNetModel - - -def get_predictor(net, brs_mode, device, - prob_thresh=0.49, - with_flip=True, - zoom_in_params=dict(), - predictor_params=None, - brs_opt_func_params=None, - lbfgs_params=None): - lbfgs_params_ = { - 'm': 20, - 'factr': 0, - 'pgtol': 1e-8, - 'maxfun': 20, - } - - predictor_params_ = { - 'optimize_after_n_clicks': 1 - } - - if zoom_in_params is not None: - zoom_in = ZoomIn(**zoom_in_params) - else: - zoom_in = None - - if lbfgs_params is not None: - lbfgs_params_.update(lbfgs_params) - lbfgs_params_['maxiter'] = 2 * lbfgs_params_['maxfun'] - - if brs_opt_func_params is None: - brs_opt_func_params = dict() - - if brs_mode == 'NoBRS': - if predictor_params is not None: - predictor_params_.update(predictor_params) - predictor = BasePredictor(net, device, zoom_in=zoom_in, with_flip=with_flip, **predictor_params_) - elif brs_mode.startswith('f-BRS'): - predictor_params_.update({ - 'net_clicks_limit': 8, - }) - if predictor_params 
is not None: - predictor_params_.update(predictor_params) - - insertion_mode = { - 'f-BRS-A': 'after_c4', - 'f-BRS-B': 'after_aspp', - 'f-BRS-C': 'after_deeplab' - }[brs_mode] - - opt_functor = ScaleBiasOptimizer(prob_thresh=prob_thresh, - with_flip=with_flip, - optimizer_params=lbfgs_params_, - **brs_opt_func_params) - - if isinstance(net, DistMapsHRNetModel): - FeaturePredictor = HRNetFeatureBRSPredictor - insertion_mode = {'after_c4': 'A', 'after_aspp': 'A', 'after_deeplab': 'C'}[insertion_mode] - else: - FeaturePredictor = FeatureBRSPredictor - - predictor = FeaturePredictor(net, device, - opt_functor=opt_functor, - with_flip=with_flip, - insertion_mode=insertion_mode, - zoom_in=zoom_in, - **predictor_params_) - elif brs_mode == 'RGB-BRS' or brs_mode == 'DistMap-BRS': - use_dmaps = brs_mode == 'DistMap-BRS' - - predictor_params_.update({ - 'net_clicks_limit': 5, - }) - if predictor_params is not None: - predictor_params_.update(predictor_params) - - opt_functor = InputOptimizer(prob_thresh=prob_thresh, - with_flip=with_flip, - optimizer_params=lbfgs_params_, - **brs_opt_func_params) - - predictor = InputBRSPredictor(net, device, - optimize_target='dmaps' if use_dmaps else 'rgb', - opt_functor=opt_functor, - with_flip=with_flip, - zoom_in=zoom_in, - **predictor_params_) - else: - raise NotImplementedError - - return predictor diff --git a/spaces/Makiing/coolb-in-gtest/src/pages/api/kblob.ts b/spaces/Makiing/coolb-in-gtest/src/pages/api/kblob.ts deleted file mode 100644 index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/pages/api/kblob.ts +++ /dev/null @@ -1,56 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import FormData from 'form-data' -import { fetch } from '@/lib/isomorphic' -import { KBlobRequest } from '@/lib/bots/bing/types' - -const API_DOMAIN = 'https://bing.vcanbb.top' - -export const config = { - api: { - bodyParser: { - sizeLimit: '10mb' // Set desired value here - } - } -} - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest - - const formData = new FormData() - formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - if (imageBase64) { - formData.append('imageBase64', imageBase64) - } - - const response = await fetch(`${API_DOMAIN}/images/kblob`, - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": `${API_DOMAIN}/web/index.html`, - "Referrer-Policy": "origin-when-cross-origin", - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - ...formData.getHeaders() - } - } - ).then(res => res.text()) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } })) - } catch (e) { - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/MarcusSu1216/XingTong/inference/infer_tool.py b/spaces/MarcusSu1216/XingTong/inference/infer_tool.py deleted file mode 100644 index def9246201c607f06a3e240feef7f46af9d9fef1..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/inference/infer_tool.py +++ /dev/null @@ -1,355 +0,0 @@ -import hashlib -import io 
-import json -import logging -import os -import time -from pathlib import Path -from inference import slicer - -import librosa -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import hashlib -import io -import json -import logging -import os -import time -from pathlib import Path -from inference import slicer - -import librosa -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import torchaudio - -import cluster -from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.replace("\\", "/").split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - -def pad_array(arr, target_length): - current_length = arr.shape[0] - if current_length >= target_length: - return arr - else: - pad_width = target_length - current_length - pad_left = pad_width // 2 - pad_right = pad_width - pad_left - padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0)) - return padded_arr - -def split_list_by_n(list_collection, n, pre=0): - for i in range(0, len(list_collection), n): - yield list_collection[i-pre if i-pre>=0 else i: i + n] - - -class F0FilterException(Exception): - pass - -class Svc(object): - def __init__(self, net_g_path, config_path, - device=None, - cluster_model_path="logs/44k/kmeans_10000.pt", - nsf_hifigan_enhance = False - ): - self.net_g_path = net_g_path - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.spk2id = 
self.hps_ms.spk - self.nsf_hifigan_enhance = nsf_hifigan_enhance - # 加载hubert - self.hubert_model = utils.get_hubert_model().to(self.dev) - self.load_model() - if os.path.exists(cluster_model_path): - self.cluster_model = cluster.get_cluster_model(cluster_model_path) - if self.nsf_hifigan_enhance: - from modules.enhancer import Enhancer - self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model',device=self.dev) - - def load_model(self): - # 获取模型配置 - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - - - def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling): - - wav, sr = librosa.load(in_path, sr=self.target_sample) - - if F0_mean_pooling == True: - f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev) - if f0_filter and sum(f0) == 0: - raise F0FilterException("未检测到人声") - f0 = torch.FloatTensor(list(f0)) - uv = torch.FloatTensor(list(uv)) - if F0_mean_pooling == False: - f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size) - if f0_filter and sum(f0) == 0: - raise F0FilterException("未检测到人声") - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - - f0 = f0 * 2 ** (tran / 12) - f0 = f0.unsqueeze(0).to(self.dev) - uv = uv.unsqueeze(0).to(self.dev) - - wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(self.dev) - c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - - if cluster_infer_ratio !=0: - cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T - cluster_c = torch.FloatTensor(cluster_c).to(self.dev) - c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c - - c = c.unsqueeze(0) - return c, f0, uv - - def infer(self, speaker, tran, raw_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False, - F0_mean_pooling=False, - enhancer_adaptive_key = 0 - ): - - speaker_id = self.spk2id.__dict__.get(speaker) - if not speaker_id and type(speaker) is int: - if len(self.spk2id.__dict__) >= speaker: - speaker_id = speaker - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling) - if "half" in self.net_g_path and torch.cuda.is_available(): - c = c.half() - with torch.no_grad(): - start = time.time() - audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float() - if self.nsf_hifigan_enhance: - audio, _ = self.enhancer.enhance( - audio[None,:], - self.target_sample, - f0[:,:,None], - self.hps_ms.data.hop_length, - adaptive_key = enhancer_adaptive_key) - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - def clear_empty(self): - # 清理显存 - torch.cuda.empty_cache() - - def slice_inference(self, - raw_audio_path, - spk, - tran, - slice_db, - cluster_infer_ratio, - auto_predict_f0, - 
noice_scale, - pad_seconds=0.5, - clip_seconds=0, - lg_num=0, - lgr_num =0.75, - F0_mean_pooling = False, - enhancer_adaptive_key = 0 - ): - wav_path = raw_audio_path - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip_seconds*audio_sr) - lg_size = int(lg_num*audio_sr) - lg_size_r = int(lg_size*lgr_num) - lg_size_c_l = (lg_size-lg_size_r)//2 - lg_size_c_r = lg_size-lg_size_r-lg_size_c_l - lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0 - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - audio.extend(list(pad_array(_audio, length))) - continue - if per_size != 0: - datas = split_list_by_n(data, per_size,lg_size) - else: - datas = [data] - for k,dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length - if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = self.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - F0_mean_pooling = F0_mean_pooling, - enhancer_adaptive_key = enhancer_adaptive_key - ) - _audio = out_audio.cpu().numpy() - pad_len = int(self.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = pad_array(_audio, per_length) - if lg_size!=0 and k!=0: - lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size] - lg_pre = lg1*(1-lg)+lg2*lg - audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:] - audio.extend(list(_audio)) - return np.array(audio) - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # 区块长度 - self.pre_len = 3840 # 交叉淡化长度,640的倍数 - - """输入输出都是1维numpy 音频波形数组""" - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False): - - import maad - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - ret = 
maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py deleted file mode 100644 index 93258242a90695cc94a7c6bd41562d6a75988771..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py +++ /dev/null @@ -1,25 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='MobileNetV3', - arch='large', - out_indices=(1, 3, 16), - norm_cfg=norm_cfg), - decode_head=dict( - type='LRASPPHead', - in_channels=(16, 24, 960), - in_index=(0, 1, 2), - channels=128, - input_transform='multiple_select', - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/model/HGPIFuNet.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/model/HGPIFuNet.py deleted file mode 100644 index 4771715345afcf326b3b0e64717517801fe75a1c..0000000000000000000000000000000000000000 --- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/model/HGPIFuNet.py +++ /dev/null @@ -1,142 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from .BasePIFuNet import BasePIFuNet -from .SurfaceClassifier import SurfaceClassifier -from .DepthNormalizer import DepthNormalizer -from .HGFilters import * -from ..net_util import init_net - - -class HGPIFuNet(BasePIFuNet): - ''' - HG PIFu network uses Hourglass stacks as the image filter. - It does the following: - 1. Compute image feature stacks and store it in self.im_feat_list - self.im_feat_list[-1] is the last stack (output stack) - 2. Calculate calibration - 3. If training, it index on every intermediate stacks, - If testing, it index on the last stack. - 4. Classification. - 5. During training, error is calculated on all stacks. - ''' - - def __init__(self, - opt, - projection_mode='orthogonal', - error_term=nn.MSELoss(), - ): - super(HGPIFuNet, self).__init__( - projection_mode=projection_mode, - error_term=error_term) - - self.name = 'hgpifu' - - self.opt = opt - self.num_views = self.opt.num_views - - self.image_filter = HGFilter(opt) - - self.surface_classifier = SurfaceClassifier( - filter_channels=self.opt.mlp_dim, - num_views=self.opt.num_views, - no_residual=self.opt.no_residual, - last_op=nn.Sigmoid()) - - self.normalizer = DepthNormalizer(opt) - - # This is a list of [B x Feat_i x H x W] features - self.im_feat_list = [] - self.tmpx = None - self.normx = None - - self.intermediate_preds_list = [] - - init_net(self) - - def filter(self, images): - ''' - Filter the input images - store all intermediate features. 
- :param images: [B, C, H, W] input images - ''' - self.im_feat_list, self.tmpx, self.normx = self.image_filter(images) - # If it is not in training, only produce the last im_feat - if not self.training: - self.im_feat_list = [self.im_feat_list[-1]] - - def query(self, points, calibs, transforms=None, labels=None): - ''' - Given 3D points, query the network predictions for each point. - Image features should be pre-computed before this call. - store all intermediate features. - query() function may behave differently during training/testing. - :param points: [B, 3, N] world space coordinates of points - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :param labels: Optional [B, Res, N] gt labeling - :return: [B, Res, N] predictions for each point - ''' - if labels is not None: - self.labels = labels - - xyz = self.projection(points, calibs, transforms) - xy = xyz[:, :2, :] - z = xyz[:, 2:3, :] - - in_img = (xy[:, 0] >= -1.0) & (xy[:, 0] <= 1.0) & (xy[:, 1] >= -1.0) & (xy[:, 1] <= 1.0) - - z_feat = self.normalizer(z, calibs=calibs) - - if self.opt.skip_hourglass: - tmpx_local_feature = self.index(self.tmpx, xy) - - self.intermediate_preds_list = [] - - for im_feat in self.im_feat_list: - # [B, Feat_i + z, N] - point_local_feat_list = [self.index(im_feat, xy), z_feat] - - if self.opt.skip_hourglass: - point_local_feat_list.append(tmpx_local_feature) - - point_local_feat = torch.cat(point_local_feat_list, 1) - - # out of image plane is always set to 0 - pred = in_img[:,None].float() * self.surface_classifier(point_local_feat) - self.intermediate_preds_list.append(pred) - - self.preds = self.intermediate_preds_list[-1] - - def get_im_feat(self): - ''' - Get the image filter - :return: [B, C_feat, H, W] image feature after filtering - ''' - return self.im_feat_list[-1] - - def get_error(self): - ''' - Hourglass has its own intermediate supervision scheme - ''' - error = 0 - for preds in self.intermediate_preds_list: - error += self.error_term(preds, self.labels) - error /= len(self.intermediate_preds_list) - - return error - - def forward(self, images, points, calibs, transforms=None, labels=None): - # Get image feature - self.filter(images) - - # Phase 2: point query - self.query(points=points, calibs=calibs, transforms=transforms, labels=labels) - - # get the prediction - res = self.get_preds() - - # get the error - error = self.get_error() - - return res, error \ No newline at end of file diff --git a/spaces/MingGatsby/multi-query-sentiment/app.py b/spaces/MingGatsby/multi-query-sentiment/app.py deleted file mode 100644 index a327fe6f15c6a5b70e73be0cca82b75924cec475..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/multi-query-sentiment/app.py +++ /dev/null @@ -1,61 +0,0 @@ -from pathlib import Path - -from htmltools import HTMLDependency, tags -from shiny import App, reactive, ui - -from query import query_output_server, query_output_ui - -button_style = {"style": "margin: 15px"} - -www_dir = Path(__file__).parent / "www" -app_ui = ui.page_fluid( - HTMLDependency( - "bootstrap", - version="9.99", - source={"subdir": str(www_dir)}, - script={"src": "bootstrap.bundle.min.js"}, - stylesheet={"href": "theme.css"}, - ), - ui.row( - ui.column( - 2, - ui.row( - button_style, - ui.input_action_button("add_query", "Add Query"), - ), - ui.row( - button_style, - ui.input_action_button("remove_query", "Remove Query"), - ), - ), - ui.column( - 10, - ui.tags.div(query_output_ui("initial_query"), 
id="module_container"), - ), - ), -) - - -def server(input, output, session): - mod_counter = reactive.Value(0) - - query_output_server("initial_query") - - @reactive.Effect - @reactive.event(input.add_query) - def _(): - counter = mod_counter.get() + 1 - mod_counter.set(counter) - id = "query_" + str(counter) - ui.insert_ui( - selector="#module_container", where="afterBegin", ui=query_output_ui(id) - ) - query_output_server(id) - - @reactive.Effect - @reactive.event(input.remove_query) - def _(): - ui.remove_ui(selector=f"#module_container .row:first-child") - - -app = App(app_ui, server) diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/__init__.py deleted file mode 100644 index 914d0f6903cefec1236107346e59901ac9d64fd4..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .sdmgr import SDMGR - -__all__ = ['SDMGR'] diff --git a/spaces/MrSalman/Image_captioning/app.py b/spaces/MrSalman/Image_captioning/app.py deleted file mode 100644 index 39c09845f447b3e8027561ff5d050583d97e6b5c..0000000000000000000000000000000000000000 --- a/spaces/MrSalman/Image_captioning/app.py +++ /dev/null @@ -1,47 +0,0 @@ -# impoprt packages -import torch -import requests -from PIL import Image -from transformers import BlipProcessor, BlipForConditionalGeneration, AutoTokenizer, pipeline -import sentencepiece -import gradio as gr - -# Image captioning model -processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") -model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") - -# Translate en to ar -model_translater = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ar") - -# conditional image captioning (with prefix-) -def image_captioning(image, prefix="a "): - """ Return text (As str) to describe an image """ - # Process the image - inputs = processor(image, prefix, return_tensors="pt") - - # Generate text to describe the image - output = model.generate(**inputs) - - # Decode the output - output = processor.decode(output[0], skip_special_tokens=True, max_length=80) - return output - -def translate_text(text, to="ar"): - """ Return translated text """ - translated_text = model_translater(str(text)) - return translated_text[0]['translation_text'] - -def image_captioning_ar(image, prefix = "a "): - if image: - text = image_captioning(image, prefix=prefix) - return text, translate_text(text) - return null - -input_image = gr.inputs.Image(type="pil", label = 'Upload your image') -imageCaptioning_interface = gr.Interface( - fn = image_captioning_ar, - inputs=input_image, - outputs=[gr.outputs.Textbox(label="Caption (en)"), gr.outputs.Textbox(label="Caption (ar)")], - title = 'Image captioning', -) -imageCaptioning_interface.launch() \ No newline at end of file diff --git a/spaces/Mrchuw/text-to-image_6_by_6/css.css b/spaces/Mrchuw/text-to-image_6_by_6/css.css deleted file mode 100644 index 45350b7c27b8177a67a10d66e3c5090df2cbdab5..0000000000000000000000000000000000000000 --- a/spaces/Mrchuw/text-to-image_6_by_6/css.css +++ /dev/null @@ -1,113 +0,0 @@ -.app.svelte-p7tiy3.svelte-p7tiy3{ - background:None; -} -.unpadded_box.large.svelte-1vhybi6{ - background:#6fbcffa8; - min-height:100%; -} -span.svelte-1l2rj76{ - color:white;!important; -} -div.svelte-1fwqiwq .block{ - background:#4d8df1; -} 
-.lg.svelte-1h4gtph{ - background:#4d8df1; - color:white; - height:100px; -} -#restart{ - position: relative; - font-family: "Poppins",sans-serif; - text-align: center; - border-radius: 8px; - background: #0063f787; - border-style: solid; - border-width: 1px; - border-color: #ffffff; - width: 100%; - height: 50%; - max-height: 200px; - padding: 0px 10px; - transform: translate(-50%,0%); - left: 50%; -} -#head{ - color:white; - margin-top:15px; - margin-bottom:5px; -} -#cont{ - color: white; - margin-top: 5px; - margin-bottom: 15px; - font-size: 1.1rem; -} - -.lds-ellipsis { - display: inline-block; - position: relative; - width: 80px; - height: 80px; - -} -.lds-ellipsis div { - position: absolute; - z-index:199999; - - top: 33px; - width: 13px; - height: 13px; - border-radius: 50%; - background: blue; - animation-timing-function: cubic-bezier(0, 1, 1, 0); -} -.lds-ellipsis div:nth-child(1) { - left: 8px; - animation: lds-ellipsis1 0.6s infinite; -} -.lds-ellipsis div:nth-child(2) { - left: 8px; - animation: lds-ellipsis2 0.6s infinite; -} -.lds-ellipsis div:nth-child(3) { - left: 32px; - animation: lds-ellipsis2 0.6s infinite; -} -.lds-ellipsis div:nth-child(4) { - left: 56px; - animation: lds-ellipsis3 0.6s infinite; -} -@keyframes lds-ellipsis1 { - 0% { - transform: scale(0); - } - 100% { - transform: scale(1); - } -} -@keyframes lds-ellipsis3 { - 0% { - transform: scale(1); - } - 100% { - transform: scale(0); - }frames lds-ellipsis2 { - 0% { - transform: translate(0, 0); - } - 100% { - transform: translate(24px, 0); - } -} - -} -@keyframes lds-ellipsis2 { - 0% { - transform: translate(0, 0); - } - 100% { - transform: translate(24px, 0); - } -} - diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers_test.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers_test.py deleted file mode 100644 index fd41e2d824c014084129707631d45de334ec741b..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers_test.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== - -"""Tests for sequence_layers.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import tensorflow as tf -from tensorflow.contrib import slim - -import model -import sequence_layers - - -def fake_net(batch_size, num_features, feature_size): - return tf.convert_to_tensor( - np.random.uniform(size=(batch_size, num_features, feature_size)), - dtype=tf.float32) - - -def fake_labels(batch_size, seq_length, num_char_classes): - labels_np = tf.convert_to_tensor( - np.random.randint( - low=0, high=num_char_classes, size=(batch_size, seq_length))) - return slim.one_hot_encoding(labels_np, num_classes=num_char_classes) - - -def create_layer(layer_class, batch_size, seq_length, num_char_classes): - model_params = model.ModelParams( - num_char_classes=num_char_classes, - seq_length=seq_length, - num_views=1, - null_code=num_char_classes) - net = fake_net( - batch_size=batch_size, num_features=seq_length * 5, feature_size=6) - labels_one_hot = fake_labels(batch_size, seq_length, num_char_classes) - layer_params = sequence_layers.SequenceLayerParams( - num_lstm_units=10, weight_decay=0.00004, lstm_state_clip_value=10.0) - return layer_class(net, labels_one_hot, model_params, layer_params) - - -class SequenceLayersTest(tf.test.TestCase): - def test_net_slice_char_logits_with_correct_shape(self): - batch_size = 2 - seq_length = 4 - num_char_classes = 3 - - layer = create_layer(sequence_layers.NetSlice, batch_size, seq_length, - num_char_classes) - char_logits = layer.create_logits() - - self.assertEqual( - tf.TensorShape([batch_size, seq_length, num_char_classes]), - char_logits.get_shape()) - - def test_net_slice_with_autoregression_char_logits_with_correct_shape(self): - batch_size = 2 - seq_length = 4 - num_char_classes = 3 - - layer = create_layer(sequence_layers.NetSliceWithAutoregression, - batch_size, seq_length, num_char_classes) - char_logits = layer.create_logits() - - self.assertEqual( - tf.TensorShape([batch_size, seq_length, num_char_classes]), - char_logits.get_shape()) - - def test_attention_char_logits_with_correct_shape(self): - batch_size = 2 - seq_length = 4 - num_char_classes = 3 - - layer = create_layer(sequence_layers.Attention, batch_size, seq_length, - num_char_classes) - char_logits = layer.create_logits() - - self.assertEqual( - tf.TensorShape([batch_size, seq_length, num_char_classes]), - char_logits.get_shape()) - - def test_attention_with_autoregression_char_logits_with_correct_shape(self): - batch_size = 2 - seq_length = 4 - num_char_classes = 3 - - layer = create_layer(sequence_layers.AttentionWithAutoregression, - batch_size, seq_length, num_char_classes) - char_logits = layer.create_logits() - - self.assertEqual( - tf.TensorShape([batch_size, seq_length, num_char_classes]), - char_logits.get_shape()) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NN520/AI/src/lib/bots/bing/tts.ts b/spaces/NN520/AI/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - 
constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/Ntabukiraniro/Recipe/utils/ims2file.py b/spaces/Ntabukiraniro/Recipe/utils/ims2file.py deleted file mode 100644 index 13007007fd936b4a02b500bb480a4dae84e6785e..0000000000000000000000000000000000000000 --- a/spaces/Ntabukiraniro/Recipe/utils/ims2file.py +++ /dev/null @@ -1,92 +0,0 @@ -import pickle -from tqdm import tqdm -import os -import numpy as np -from PIL import Image -import argparse -import lmdb -from torchvision import transforms - - -MAX_SIZE = 1e12 - - -def load_and_resize(root, path, imscale): - - transf_list = [] - transf_list.append(transforms.Resize(imscale)) - transf_list.append(transforms.CenterCrop(imscale)) - transform = transforms.Compose(transf_list) - - img = Image.open(os.path.join(root, path[0], path[1], path[2], path[3], path)).convert('RGB') - img = transform(img) - - return img - - -def main(args): - - parts = {} - datasets = {} - imname2pos = {'train': {}, 'val': {}, 'test': {}} - for split in ['train', 'val', 'test']: - datasets[split] = pickle.load(open(os.path.join(args.save_dir, args.suff + 'recipe1m_' + split + '.pkl'), 'rb')) - - parts[split] = lmdb.open(os.path.join(args.save_dir, 'lmdb_'+split), map_size=int(MAX_SIZE)) - with parts[split].begin() as txn: - present_entries = [key for key, _ in txn.cursor()] - j = 0 - for i, entry in tqdm(enumerate(datasets[split])): - impaths = entry['images'][0:5] - - for n, p in enumerate(impaths): - if n == args.maxnumims: - break - if p.encode() not in present_entries: - im = load_and_resize(os.path.join(args.root, 'images', split), p, args.imscale) - im = np.array(im).astype(np.uint8) - with parts[split].begin(write=True) as txn: - txn.put(p.encode(), im) - imname2pos[split][p] = j - j += 1 - pickle.dump(imname2pos, open(os.path.join(args.save_dir, 'imname2pos.pkl'), 'wb')) - - -def test(args): - - imname2pos = pickle.load(open(os.path.join(args.save_dir, 'imname2pos.pkl'), 'rb')) - 
paths = imname2pos['val'] - - for k, v in paths.items(): - path = k - break - image_file = lmdb.open(os.path.join(args.save_dir, 'lmdb_' + 'val'), max_readers=1, readonly=True, - lock=False, readahead=False, meminit=False) - with image_file.begin(write=False) as txn: - image = txn.get(path.encode()) - image = np.fromstring(image, dtype=np.uint8) - image = np.reshape(image, (args.imscale, args.imscale, 3)) - image = Image.fromarray(image.astype('uint8'), 'RGB') - print (np.shape(image)) - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument('--root', type=str, default='path/to/recipe1m', - help='path to the recipe1m dataset') - parser.add_argument('--save_dir', type=str, default='../data', - help='path where the lmdbs will be saved') - parser.add_argument('--imscale', type=int, default=256, - help='size of images (will be rescaled and center cropped)') - parser.add_argument('--maxnumims', type=int, default=5, - help='maximum number of images to allow for each sample') - parser.add_argument('--suff', type=str, default='', - help='id of the vocabulary to use') - parser.add_argument('--test_only', dest='test_only', action='store_true') - parser.set_defaults(test_only=False) - args = parser.parse_args() - - if not args.test_only: - main(args) - test(args) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/bart/summarize.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/bart/summarize.py deleted file mode 100644 index 04435f80e39c2d9d894696dae7cba5b381e13da9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/bart/summarize.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -from fairseq.models.bart import BARTModel -import argparse - -XSUM_KWARGS = dict(beam=6, lenpen=1.0, max_len_b=60, min_len=10, no_repeat_ngram_size=3) -CNN_KWARGS = dict(beam=4, lenpen=2.0, max_len_b=140, min_len=55, no_repeat_ngram_size=3) - - -@torch.no_grad() -def generate(bart, infile, outfile="bart_hypo.txt", bsz=32, n_obs=None, **eval_kwargs): - count = 1 - - # if n_obs is not None: bsz = min(bsz, n_obs) - - with open(infile) as source, open(outfile, "w") as fout: - sline = source.readline().strip() - slines = [sline] - for sline in source: - if n_obs is not None and count > n_obs: - break - if count % bsz == 0: - hypotheses_batch = bart.sample(slines, **eval_kwargs) - for hypothesis in hypotheses_batch: - fout.write(hypothesis + "\n") - fout.flush() - slines = [] - - slines.append(sline.strip()) - count += 1 - - if slines != []: - hypotheses_batch = bart.sample(slines, **eval_kwargs) - for hypothesis in hypotheses_batch: - fout.write(hypothesis + "\n") - fout.flush() - - -def main(): - """ - Usage:: - - python examples/bart/summarize.py \ - --model-dir $HOME/bart.large.cnn \ - --model-file model.pt \ - --src $HOME/data-bin/cnn_dm/test.source - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--model-dir", - required=True, - type=str, - default="bart.large.cnn/", - help="path containing model file and src_dict.txt", - ) - parser.add_argument( - "--model-file", - default="checkpoint_best.pt", - help="where in model_dir are weights saved", - ) - parser.add_argument( - "--src", default="test.source", help="text to summarize", type=str - ) - parser.add_argument( - "--out", default="test.hypo", help="where to save summaries", type=str - ) - parser.add_argument("--bsz", default=32, help="where to save summaries", type=int) - parser.add_argument( - "--n", default=None, help="how many examples to summarize", type=int - ) - parser.add_argument( - "--xsum-kwargs", - action="store_true", - default=False, - help="if true use XSUM_KWARGS else CNN_KWARGS", - ) - args = parser.parse_args() - eval_kwargs = XSUM_KWARGS if args.xsum_kwargs else CNN_KWARGS - if args.model_dir == "pytorch/fairseq": - bart = torch.hub.load("pytorch/fairseq", args.model_file) - else: - bart = BARTModel.from_pretrained( - args.model_dir, - checkpoint_file=args.model_file, - data_name_or_path=args.model_dir, - ) - bart = bart.eval() - if torch.cuda.is_available(): - bart = bart.cuda().half() - generate( - bart, args.src, bsz=args.bsz, n_obs=args.n, outfile=args.out, **eval_kwargs - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py deleted file mode 100644 index 516f2cc469af9b417126dea1988698adac41d8ab..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import logging -from pathlib import Path -import shutil -from tempfile import NamedTemporaryFile -from collections import Counter, defaultdict - -import pandas as pd -import torchaudio -from tqdm import tqdm - -from fairseq.data.audio.audio_utils import convert_waveform -from examples.speech_to_text.data_utils import ( - create_zip, - gen_config_yaml, - gen_vocab, - get_zip_manifest, - load_tsv_to_dicts, - save_df_to_tsv -) -from examples.speech_synthesis.data_utils import ( - extract_logmel_spectrogram, extract_pitch, extract_energy, get_global_cmvn, - ipa_phonemize, get_mfa_alignment, get_unit_alignment -) - - -log = logging.getLogger(__name__) - - -def process(args): - assert "train" in args.splits - out_root = Path(args.output_root).absolute() - out_root.mkdir(exist_ok=True) - - print("Fetching data...") - audio_manifest_root = Path(args.audio_manifest_root).absolute() - samples = [] - for s in args.splits: - for e in load_tsv_to_dicts(audio_manifest_root / f"{s}.audio.tsv"): - e["split"] = s - samples.append(e) - sample_ids = [s["id"] for s in samples] - - # Get alignment info - id_to_alignment = None - if args.textgrid_zip is not None: - assert args.id_to_units_tsv is None - id_to_alignment = get_mfa_alignment( - args.textgrid_zip, sample_ids, args.sample_rate, args.hop_length - ) - elif args.id_to_units_tsv is not None: - # assume identical hop length on the unit sequence - id_to_alignment = get_unit_alignment(args.id_to_units_tsv, sample_ids) - - # Extract features and pack features into ZIP - feature_name = "logmelspec80" - zip_path = out_root / f"{feature_name}.zip" - pitch_zip_path = out_root / "pitch.zip" - energy_zip_path = out_root / "energy.zip" - gcmvn_npz_path = out_root / "gcmvn_stats.npz" - if zip_path.exists() and gcmvn_npz_path.exists(): - print(f"{zip_path} and {gcmvn_npz_path} exist.") - else: - feature_root = out_root / feature_name - feature_root.mkdir(exist_ok=True) - pitch_root = out_root / "pitch" - energy_root = out_root / "energy" - if args.add_fastspeech_targets: - pitch_root.mkdir(exist_ok=True) - energy_root.mkdir(exist_ok=True) - print("Extracting Mel spectrogram features...") - for sample in tqdm(samples): - waveform, sample_rate = torchaudio.load(sample["audio"]) - waveform, sample_rate = convert_waveform( - waveform, sample_rate, normalize_volume=args.normalize_volume, - to_sample_rate=args.sample_rate - ) - sample_id = sample["id"] - target_length = None - if id_to_alignment is not None: - a = id_to_alignment[sample_id] - target_length = sum(a.frame_durations) - if a.start_sec is not None and a.end_sec is not None: - start_frame = int(a.start_sec * sample_rate) - end_frame = int(a.end_sec * sample_rate) - waveform = waveform[:, start_frame: end_frame] - extract_logmel_spectrogram( - waveform, sample_rate, feature_root / f"{sample_id}.npy", - win_length=args.win_length, hop_length=args.hop_length, - n_fft=args.n_fft, n_mels=args.n_mels, f_min=args.f_min, - f_max=args.f_max, target_length=target_length - ) - if args.add_fastspeech_targets: - assert id_to_alignment is not None - extract_pitch( - waveform, sample_rate, pitch_root / f"{sample_id}.npy", - hop_length=args.hop_length, log_scale=True, - phoneme_durations=id_to_alignment[sample_id].frame_durations - ) - extract_energy( - waveform, energy_root / f"{sample_id}.npy", - hop_length=args.hop_length, n_fft=args.n_fft, - log_scale=True, - phoneme_durations=id_to_alignment[sample_id].frame_durations - ) - print("ZIPing features...") - create_zip(feature_root, zip_path) - 
get_global_cmvn(feature_root, gcmvn_npz_path) - shutil.rmtree(feature_root) - if args.add_fastspeech_targets: - create_zip(pitch_root, pitch_zip_path) - shutil.rmtree(pitch_root) - create_zip(energy_root, energy_zip_path) - shutil.rmtree(energy_root) - - print("Fetching ZIP manifest...") - audio_paths, audio_lengths = get_zip_manifest(zip_path) - pitch_paths, pitch_lengths, energy_paths, energy_lengths = [None] * 4 - if args.add_fastspeech_targets: - pitch_paths, pitch_lengths = get_zip_manifest(pitch_zip_path) - energy_paths, energy_lengths = get_zip_manifest(energy_zip_path) - # Generate TSV manifest - print("Generating manifest...") - manifest_by_split = {split: defaultdict(list) for split in args.splits} - for sample in tqdm(samples): - sample_id, split = sample["id"], sample["split"] - normalized_utt = sample["tgt_text"] - if id_to_alignment is not None: - normalized_utt = " ".join(id_to_alignment[sample_id].tokens) - elif args.ipa_vocab: - normalized_utt = ipa_phonemize( - normalized_utt, lang=args.lang, use_g2p=args.use_g2p - ) - manifest_by_split[split]["id"].append(sample_id) - manifest_by_split[split]["audio"].append(audio_paths[sample_id]) - manifest_by_split[split]["n_frames"].append(audio_lengths[sample_id]) - manifest_by_split[split]["tgt_text"].append(normalized_utt) - manifest_by_split[split]["speaker"].append(sample["speaker"]) - manifest_by_split[split]["src_text"].append(sample["src_text"]) - if args.add_fastspeech_targets: - assert id_to_alignment is not None - duration = " ".join( - str(d) for d in id_to_alignment[sample_id].frame_durations - ) - manifest_by_split[split]["duration"].append(duration) - manifest_by_split[split]["pitch"].append(pitch_paths[sample_id]) - manifest_by_split[split]["energy"].append(energy_paths[sample_id]) - for split in args.splits: - save_df_to_tsv( - pd.DataFrame.from_dict(manifest_by_split[split]), - out_root / f"{split}.tsv" - ) - # Generate vocab - vocab_name, spm_filename = None, None - if id_to_alignment is not None or args.ipa_vocab: - vocab = Counter() - for t in manifest_by_split["train"]["tgt_text"]: - vocab.update(t.split(" ")) - vocab_name = "vocab.txt" - with open(out_root / vocab_name, "w") as f: - for s, c in vocab.most_common(): - f.write(f"{s} {c}\n") - else: - spm_filename_prefix = "spm_char" - spm_filename = f"{spm_filename_prefix}.model" - with NamedTemporaryFile(mode="w") as f: - for t in manifest_by_split["train"]["tgt_text"]: - f.write(t + "\n") - f.flush() # needed to ensure gen_vocab sees dumped text - gen_vocab(Path(f.name), out_root / spm_filename_prefix, "char") - # Generate speaker list - speakers = sorted({sample["speaker"] for sample in samples}) - speakers_path = out_root / "speakers.txt" - with open(speakers_path, "w") as f: - for speaker in speakers: - f.write(f"{speaker}\n") - # Generate config YAML - win_len_t = args.win_length / args.sample_rate - hop_len_t = args.hop_length / args.sample_rate - extra = { - "sample_rate": args.sample_rate, - "features": { - "type": "spectrogram+melscale+log", - "eps": 1e-2, "n_mels": args.n_mels, "n_fft": args.n_fft, - "window_fn": "hann", "win_length": args.win_length, - "hop_length": args.hop_length, "sample_rate": args.sample_rate, - "win_len_t": win_len_t, "hop_len_t": hop_len_t, - "f_min": args.f_min, "f_max": args.f_max, - "n_stft": args.n_fft // 2 + 1 - } - } - if len(speakers) > 1: - extra["speaker_set_filename"] = "speakers.txt" - gen_config_yaml( - out_root, spm_filename=spm_filename, vocab_name=vocab_name, - audio_root=out_root.as_posix(), input_channels=None, 
- input_feat_per_channel=None, specaugment_policy=None, - cmvn_type="global", gcmvn_path=gcmvn_npz_path, extra=extra - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--audio-manifest-root", "-m", required=True, type=str) - parser.add_argument("--output-root", "-o", required=True, type=str) - parser.add_argument("--splits", "-s", type=str, nargs="+", - default=["train", "dev", "test"]) - parser.add_argument("--ipa-vocab", action="store_true") - parser.add_argument("--use-g2p", action="store_true") - parser.add_argument("--lang", type=str, default="en-us") - parser.add_argument("--win-length", type=int, default=1024) - parser.add_argument("--hop-length", type=int, default=256) - parser.add_argument("--n-fft", type=int, default=1024) - parser.add_argument("--n-mels", type=int, default=80) - parser.add_argument("--f-min", type=int, default=20) - parser.add_argument("--f-max", type=int, default=8000) - parser.add_argument("--sample-rate", type=int, default=22050) - parser.add_argument("--normalize-volume", "-n", action="store_true") - parser.add_argument("--textgrid-zip", type=str, default=None) - parser.add_argument("--id-to-units-tsv", type=str, default=None) - parser.add_argument("--add-fastspeech-targets", action="store_true") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/pca.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/pca.py deleted file mode 100644 index 948cf5319fd86ba1bccff65270b2881048faf9b1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/pca.py +++ /dev/null @@ -1,53 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
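# How the saved matrices are typically consumed downstream (a minimal sketch, not
# part of this script; the load paths are hypothetical and assume the
# "{dim}[_{eigen_power}]_pca_A.npy" / "..._pca_b.npy" naming produced below):
#
#   import numpy as np
#   A = np.load("pca/512_pca_A.npy")   # shape (d_in, dim): transposed projection matrix
#   b = np.load("pca/512_pca_b.npy")   # shape (dim,): bias term from faiss.PCAMatrix
#   x_reduced = x @ A + b              # project row-major features down to `dim` dimensions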
- -import argparse -import os -import os.path as osp -import numpy as np - -import faiss - - - -def get_parser(): - parser = argparse.ArgumentParser( - description="compute a pca matrix given an array of numpy features" - ) - # fmt: off - parser.add_argument('data', help='numpy file containing features') - parser.add_argument('--output', help='where to save the pca matrix', required=True) - parser.add_argument('--dim', type=int, help='dim for pca reduction', required=True) - parser.add_argument('--eigen-power', type=float, default=0, help='eigen power, -0.5 for whitening') - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - print("Reading features") - x = np.load(args.data, mmap_mode="r") - - print("Computing PCA") - pca = faiss.PCAMatrix(x.shape[-1], args.dim, args.eigen_power) - pca.train(x) - b = faiss.vector_to_array(pca.b) - A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in) - - os.makedirs(args.output, exist_ok=True) - - prefix = str(args.dim) - if args.eigen_power != 0: - prefix += f"_{args.eigen_power}" - - np.save(osp.join(args.output, f"{prefix}_pca_A"), A.T) - np.save(osp.join(args.output, f"{prefix}_pca_b"), b) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_masked_lm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_masked_lm.py deleted file mode 100644 index 12b9c5d0f55993bf8750564882a351fc3f8055f0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_masked_lm.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -from omegaconf import II - -from .dummy_dataset import DummyDataset -from fairseq.data import Dictionary -from fairseq.dataclass import FairseqDataclass -from fairseq.tasks import FairseqTask, register_task - -logger = logging.getLogger(__name__) - - -@dataclass -class DummyMaskedLMConfig(FairseqDataclass): - dict_size: int = 49996 - dataset_size: int = 100000 - tokens_per_sample: int = field( - default=512, - metadata={ - "help": "max number of total tokens over all" - " segments per sample for BERT dataset" - }, - ) - batch_size: Optional[int] = II("dataset.batch_size") - max_tokens: Optional[int] = II("dataset.max_tokens") - max_target_positions: int = II("task.tokens_per_sample") - - -@register_task("dummy_masked_lm", dataclass=DummyMaskedLMConfig) -class DummyMaskedLMTask(FairseqTask): - def __init__(self, cfg: DummyMaskedLMConfig): - super().__init__(cfg) - - self.dictionary = Dictionary() - for i in range(cfg.dict_size): - self.dictionary.add_symbol("word{}".format(i)) - logger.info("dictionary: {} types".format(len(self.dictionary))) - # add mask token - self.mask_idx = self.dictionary.add_symbol("") - self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8 - - mask_idx = 0 - pad_idx = 1 - seq = torch.arange(cfg.tokens_per_sample) + pad_idx + 1 - mask = torch.arange(2, cfg.tokens_per_sample, 7) # ~15% - src = seq.clone() - src[mask] = mask_idx - tgt = torch.full_like(seq, pad_idx) - tgt[mask] = seq[mask] - - self.dummy_src = src - self.dummy_tgt = tgt - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- Args: - split (str): name of the split (e.g., train, valid, test) - """ - if self.cfg.batch_size is not None: - bsz = self.cfg.batch_size - else: - bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample) - self.datasets[split] = DummyDataset( - { - "id": 1, - "net_input": { - "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]), - "src_lengths": torch.full( - (bsz,), self.cfg.tokens_per_sample, dtype=torch.long - ), - }, - "target": torch.stack([self.dummy_tgt for _ in range(bsz)]), - "nsentences": bsz, - "ntokens": bsz * self.cfg.tokens_per_sample, - }, - num_items=self.cfg.dataset_size, - item_size=self.cfg.tokens_per_sample, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/layer_norm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/layer_norm.py deleted file mode 100644 index 234609d9e213a650e0032aaa0ca0462a818bfead..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/layer_norm.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -try: - from apex.normalization import FusedLayerNorm as _FusedLayerNorm - - has_fused_layernorm = True - - class FusedLayerNorm(_FusedLayerNorm): - @torch.jit.unused - def forward(self, x): - if not x.is_cuda: - return super().forward(x) - else: - with torch.cuda.device(x.device): - return super().forward(x) - - -except ImportError: - has_fused_layernorm = False - - -def LayerNorm(normalized_shape, eps=1e-5, elementwise_affine=True, export=False): - if torch.jit.is_scripting(): - export = True - if not export and torch.cuda.is_available() and has_fused_layernorm: - return FusedLayerNorm(normalized_shape, eps, elementwise_affine) - return torch.nn.LayerNorm(normalized_shape, eps, elementwise_affine) - - -class Fp32LayerNorm(nn.LayerNorm): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def forward(self, input): - output = F.layer_norm( - input.float(), - self.normalized_shape, - self.weight.float() if self.weight is not None else None, - self.bias.float() if self.bias is not None else None, - self.eps, - ) - return output.type_as(input) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/eval_lm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/eval_lm.py deleted file mode 100644 index ab6e77029ef738291efd190b1cfe2435dd403dea..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/eval_lm.py +++ /dev/null @@ -1,347 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Evaluate the perplexity of a trained language model. 
-""" - -import logging -import math -import os -import sys -from argparse import Namespace -from typing import Iterable, List, Optional - -import torch -import fairseq -from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.logging import progress_bar -from fairseq.logging.meters import StopwatchMeter -from fairseq.sequence_scorer import SequenceScorer -from omegaconf import DictConfig - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.eval_lm") - - -def eval_lm( - models: List[fairseq.models.FairseqModel], - source_dictionary: fairseq.data.Dictionary, - batch_iterator: Iterable, - post_process: Optional[str] = None, - output_word_probs: bool = False, - output_word_stats: bool = False, - target_dictionary: Optional[fairseq.data.Dictionary] = None, - softmax_batch: int = 0, - remove_bos_token: bool = False, - device: Optional[torch.device] = None, -): - """ - Args: - models (List[~fairseq.models.FairseqModel]): list of models to - evaluate. Models are essentially `nn.Module` instances, but - must be compatible with fairseq's `SequenceScorer`. - source_dictionary (~fairseq.data.Dictionary): dictionary for - applying any relevant post processing or outputing word - probs/stats. - batch_iterator (Iterable): yield batches of data - post_process (Optional[str]): post-process text by removing BPE, - letter segmentation, etc. Valid options can be found in - fairseq.data.utils.post_process, although not all options - are implemented here. - output_word_probs (Optional[bool]): output words and their - predicted log probabilities - output_word_stats (Optional[bool]): output word statistics such - as word count and average probability - target_dictionary (Optional[~fairseq.data.Dictionary]): output - dictionary (defaults to *source_dictionary*) - softmax_batch (Optional[bool]): if BxT is more than this, will - batch the softmax over vocab to this amount of tokens, in - order to fit into GPU memory - remove_bos_token (Optional[bool]): if True, confirm that the - first token is the beginning-of-sentence symbol (according - to the relevant dictionary) and remove it from the output - device (Optional[torch.device]): device to use for evaluation - (defaults to device of first model parameter) - """ - if target_dictionary is None: - target_dictionary = source_dictionary - if device is None: - device = next(models[0].parameters()).device - - gen_timer = StopwatchMeter() - scorer = SequenceScorer(target_dictionary, softmax_batch) - - score_sum = 0.0 - count = 0 - - if post_process is not None: - if post_process in {"subword_nmt", "@@ "}: - bpe_cont = post_process.rstrip() - bpe_toks = { - i - for i in range(len(source_dictionary)) - if source_dictionary[i].endswith(bpe_cont) - } - else: - raise NotImplementedError( - "--post-process={post_process} is not implemented" - ) - bpe_len = len(bpe_cont) - else: - bpe_toks = None - bpe_len = 0 - - word_stats = dict() - - for sample in batch_iterator: - if "net_input" not in sample: - continue - - sample = utils.move_to_cuda(sample, device=device) - - gen_timer.start() - hypos = scorer.generate(models, sample) - gen_timer.stop(sample["ntokens"]) - - for i, hypos_i in enumerate(hypos): - hypo = hypos_i[0] - sample_id = sample["id"][i] - - tokens = hypo["tokens"] - tgt_len = 
tokens.numel() - pos_scores = hypo["positional_scores"].float() - - if remove_bos_token: - assert hypo["tokens"][0].item() == target_dictionary.bos() - tokens = tokens[1:] - pos_scores = pos_scores[1:] - - skipped_toks = 0 - if bpe_toks is not None: - for i in range(tgt_len - 1): - if tokens[i].item() in bpe_toks: - skipped_toks += 1 - pos_scores[i + 1] += pos_scores[i] - pos_scores[i] = 0 - - inf_scores = pos_scores.eq(float("inf")) | pos_scores.eq(float("-inf")) - if inf_scores.any(): - logger.info( - "skipping tokens with inf scores:", - target_dictionary.string(tokens[inf_scores.nonzero()]), - ) - pos_scores = pos_scores[(~inf_scores).nonzero()] - score_sum += pos_scores.sum().cpu() - count += pos_scores.numel() - skipped_toks - - if output_word_probs or output_word_stats: - w = "" - word_prob = [] - is_bpe = False - for i in range(len(tokens)): - w_ind = tokens[i].item() - w += source_dictionary[w_ind] - if bpe_toks is not None and w_ind in bpe_toks: - w = w[:-bpe_len] - is_bpe = True - else: - word_prob.append((w, pos_scores[i].item())) - - next_prob = None - ind = i + 1 - while ind < len(tokens): - if pos_scores[ind].item() != 0: - next_prob = pos_scores[ind] - break - ind += 1 - - word_stats.setdefault(w, WordStat(w, is_bpe)).add( - pos_scores[i].item(), next_prob - ) - is_bpe = False - w = "" - if output_word_probs: - logger.info( - str(int(sample_id)) - + " " - + ( - "\t".join( - "{} [{:2f}]".format(x[0], x[1]) for x in word_prob - ) - ) - ) - - avg_nll_loss = ( - -score_sum / count / math.log(2) if count > 0 else 0 - ) # convert to base 2 - logger.info( - "Evaluated {:,} tokens in {:.1f}s ({:.2f} tokens/s)".format( - gen_timer.n, gen_timer.sum, 1.0 / gen_timer.avg if gen_timer.avg > 0 else 0 - ) - ) - - if output_word_stats: - for ws in sorted(word_stats.values(), key=lambda x: x.count, reverse=True): - logger.info(ws) - - return { - "loss": avg_nll_loss, - "perplexity": 2 ** avg_nll_loss, - } - - -class WordStat(object): - def __init__(self, word, is_bpe): - self.word = word - self.is_bpe = is_bpe - self.log_prob = 0 - self.next_word_prob = 0 - self.count = 0 - self.missing_next_words = 0 - - def add(self, log_prob, next_word_prob): - """increments counters for the sum of log probs of current word and next - word (given context ending at current word). 
Since the next word might be at the end of the example, - or it might be not counted because it is not an ending subword unit, - also keeps track of how many of those we have seen""" - if next_word_prob is not None: - self.next_word_prob += next_word_prob - else: - self.missing_next_words += 1 - self.log_prob += log_prob - self.count += 1 - - def __str__(self): - return "{}\t{}\t{}\t{}\t{}\t{}".format( - self.word, - self.count, - self.log_prob, - self.is_bpe, - self.next_word_prob, - self.count - self.missing_next_words, - ) - - -def main(cfg: DictConfig, **unused_kwargs): - if isinstance(cfg, Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - utils.import_user_module(cfg.common) - - logger.info(cfg) - - if cfg.eval_lm.context_window > 0: - # reduce tokens per sample by the required context window size - cfg.task.tokens_per_sample -= cfg.eval_lm.context_window - - # Initialize the task using the current *cfg* - task = tasks.setup_task(cfg.task) - - # Load ensemble - logger.info("loading model(s) from {}".format(cfg.common_eval.path)) - models, model_args, task = checkpoint_utils.load_model_ensemble_and_task( - [cfg.common_eval.path], - arg_overrides=eval(cfg.common_eval.model_overrides), - suffix=cfg.checkpoint.checkpoint_suffix, - strict=(cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.checkpoint.checkpoint_shard_count, - task=task, - ) - - use_fp16 = cfg.common.fp16 - use_cuda = torch.cuda.is_available() and not cfg.common.cpu - if use_cuda: - torch.cuda.set_device(cfg.distributed_training.device_id) - - # Optimize ensemble for generation and set the source and dest dicts on the model - # (required by scorer) - for model in models: - if use_fp16: - model.half() - if use_cuda and not cfg.distributed_training.pipeline_model_parallel: - model.cuda() - model.prepare_for_inference_(cfg) - - assert len(models) > 0 - - logger.info( - "num. 
model params: {:,}".format(sum(p.numel() for p in models[0].parameters())) - ) - - # Load dataset splits - task.load_dataset(cfg.dataset.gen_subset) - dataset = task.dataset(cfg.dataset.gen_subset) - logger.info( - "{} {} {:,} examples".format( - cfg.task.data, cfg.dataset.gen_subset, len(dataset) - ) - ) - - itr = task.eval_lm_dataloader( - dataset=dataset, - max_tokens=cfg.dataset.max_tokens or 36000, - batch_size=cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - *[model.max_positions() for model in models] - ), - num_shards=max( - cfg.dataset.num_shards, - cfg.distributed_training.distributed_world_size, - ), - shard_id=max( - cfg.dataset.shard_id, - cfg.distributed_training.distributed_rank, - ), - num_workers=cfg.dataset.num_workers, - data_buffer_size=cfg.dataset.data_buffer_size, - context_window=cfg.eval_lm.context_window, - ) - - itr = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - ) - - results = eval_lm( - models=models, - source_dictionary=task.source_dictionary, - batch_iterator=itr, - post_process=cfg.common_eval.post_process, - output_word_probs=cfg.eval_lm.output_word_probs, - output_word_stats=cfg.eval_lm.output_word_stats, - target_dictionary=task.target_dictionary, - softmax_batch=cfg.eval_lm.softmax_batch, - remove_bos_token=getattr(cfg.task, "add_bos_token", False), - ) - - logger.info( - "Loss (base 2): {:.4f}, Perplexity: {:.2f}".format( - results["loss"], results["perplexity"] - ) - ) - - return results - - -def cli_main(): - parser = options.get_eval_lm_parser() - args = options.parse_args_and_arch(parser) - - distributed_utils.call_main(convert_namespace_to_omegaconf(args), main) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/layerdrop/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/layerdrop/README.md deleted file mode 100644 index 4d48ee9615e1458e1e889635dc9938e427a7f64a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/layerdrop/README.md +++ /dev/null @@ -1,154 +0,0 @@ -# Reducing Transformer Depth on Demand with Structured Dropout (Fan et al., 2019) -This page contains information for how to train models with LayerDrop, based on this [paper](https://arxiv.org/abs/1909.11556). 
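
LayerDrop itself is simple: during training, each Transformer layer is skipped in its entirety with some probability, which regularizes the model and makes it possible to prune layers at inference time without retraining. The snippet below is only a minimal illustration of that idea (module and parameter names are made up for the example; it is not the fairseq implementation):

```python
import torch
import torch.nn as nn


class LayerDropStack(nn.Module):
    """Apply a stack of layers, dropping each whole layer with probability p during training."""

    def __init__(self, layers, p=0.2):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.p = p

    def forward(self, x):
        for layer in self.layers:
            # Skip the entire layer with probability p; at eval time every layer runs.
            if self.training and torch.rand(1).item() < self.p:
                continue
            x = layer(x)
        return x
```

Pruning at inference time then amounts to iterating over a kept subset of `self.layers` (for example every other layer), which is what the `--encoder-layers-to-keep` / `--decoder-layers-to-keep` flags described below do.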
- -## Citation: -If you found this technique useful, please cite our paper: -```bibtex -@article{fan2019reducing, - title={Reducing Transformer Depth on Demand with Structured Dropout}, - author={Fan, Angela and Grave, Edouard and Joulin, Armand}, - journal={arXiv preprint arXiv:1909.11556}, - year={2019} -} -``` - -## Pre-trained models - -Model | Description | Download ----|---|--- -`layerdrop_wmt_en_de_12_6` | Transformer + LayerDrop 0.2 trained on WMT16 en-de with 12 encoder and 6 decoder layers | [layerdrop_wmt_en_de_12_6.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/layerdrop_wmt_en_de_12_6.tar.gz) -`roberta_layerdrop.base` | RoBERTa Base + LayerDrop 0.2 | [roberta_layerdrop.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.base.qnli.tar.gz) -`roberta_layerdrop.large` | RoBERTa Large + LayerDrop 0.2 | [roberta_layerdrop.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.tar.gz) -`roberta_layerdrop.large.mnli` | `roberta_layerdrop.large` finetuned on [MNLI](http://www.nyu.edu/projects/bowman/multinli) | [roberta_layerdrop.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.mnli.tar.gz) -`roberta_layerdrop.large.qnli` | `roberta_layerdrop.large` finetuned on [QNLI](https://arxiv.org/abs/1804.07461) | [roberta_layerdrop.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.qnli.tar.gz) - - -Evaluate performance of these pre-trained models: -```bash -# Example for Machine Translation -fairseq-generate /path/to/bped/wmt/data --path nmt_checkpoint.pt \ - --beam 8 --lenpen 0.4 \ - --batch-size 64 \ - --remove-bpe \ - --gen-subset test > wmt16_gen.txt -bash scripts/compound_split_bleu.sh wmt16_gen.txt -# prints BLEU4 = 30.17 -``` - -```python -# Example for RoBERTa + LayerDrop finetuned on MNLI: -from fairseq.models.roberta import RobertaModel - -roberta_layerdrop = RobertaModel.from_pretrained( - '/path/to/MNLI/model', - checkpoint_file='mnli_checkpoint.pt', - data_name_or_path='/path/to/MNLI/data/MNLI-bin' -) -label_map = {0: 'contradiction', 2: 'neutral', 1: 'entailment'} -ncorrect, nsamples = 0, 0 -roberta_layerdrop.cuda() -roberta_layerdrop.eval() -with open('/path/to/MNLI/data/dev_matched.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[8], tokens[9], tokens[-1] - tokens = roberta_layerdrop.encode(sent1, sent2) - prediction = roberta_layerdrop.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_map[prediction] - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# prints | Accuracy: 0.9026999490575649 - - -# Example for RoBERTa + LayerDrop finetuned on QNLI: -roberta = RobertaModel.from_pretrained( - '/path/to/QNLI/model', - checkpoint_file='qnli_checkpoint.pt', - data_name_or_path='/path/to/QNLI/data/QNLI-bin' -) - -label_fn = lambda label: roberta.task.label_dictionary.string( - [label + roberta.task.target_dictionary.nspecial] -) -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('/path/to/QNLI/data/dev.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[1], tokens[2], tokens[3] - tokens = roberta.encode(sent1, sent2) - prediction = roberta.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_fn(prediction) - ncorrect 
+= int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# prints | Accuracy: 0.9480139117700896 -``` - - -## Example usage - -To train a model with LayerDrop, add the following flags. We recommend 0.2, a value that worked well in our experiments. For Language Models that are decoder-only, you need only the decoder flag. For RoBERTa, an encoder, you need only the encoder flag. The encoder and decoder LayerDrop values can be set differently. -``` ---encoder-layerdrop 0.2 --decoder-layerdrop 0.2 -``` - -To prune a model that has been trained with LayerDrop, add the following flags followed by a comma separated list of which layers you would like to keep. -``` ---encoder-layers-to-keep 0,2,4,6,8,10,12,14 --decoder-layers-to-keep 0,2,4,6,8,10,12,14 -``` -Setting these flags should print a message such as: -``` -| Pruning model to specified layer configuration -``` -You should also see a smaller number of parameters in the model, for example the 16-Layer Transformer Language Model prints: -``` -num. model params: 246933504 -``` -while a model pruned to 8 Layers prints: -``` -num. model params: 146163712 -``` - -If you would like to pick up training with a model that has been pruned, simply adding these flags is sufficient. If you would like to use a script that only does evaluation (no training), you may need to pass an override command. A specific example would be for language modeling: -```bash -fairseq-eval-lm /path/to/wikitext-103 \ - --path /path/to/model/checkpoint.pt \ - --model-overrides "{'decoder_layers_to_keep':'0,2,4,6,8,10,12,14'}" -``` -This model override command overrides the training parameters and updates the model arguments so that the pruned model is run instead of the full model. - -## Reproduce Paper Results - -Looking to reproduce the results in the paper? - -1. For Translation on WMT16 en-de, we followed this setting [here](https://github.com/pytorch/fairseq/blob/main/examples/scaling_nmt/README.md) -2. To train RoBERTa, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/roberta) -3. To train Language Models on Wikitext-103, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model) - - -## Tips - -1. If you would like to train large models with better performance, LayerDrop should be set to a smaller value such as 0.1 or 0.2. Too much LayerDrop will mean the model has too much regularization, so may not reach the best performance. Since LayerDrop adds regularization, you may achieve the best performance by slightly reducing the amount of standard dropout (for example, reduce by 0.1). - -2. If you would like to train large models to be pruned and made smaller, LayerDrop should be set to a larger value such as 0.5 if you want to prune very aggressively (such as removing half the network or more). If you would like to prune fewer layers away, LayerDrop can be set to a smaller value such as 0.2. Our experiments were conducted with low values of LayerDrop (such as 0.1 and 0.2), for reference. - -3. When pruning layers at inference time, it is best to spread out the layers remaining so they are evenly spaced throughout the network. For example, if you want to remove 50% of the network, keeping every other layer is good. - - -## FAQ - -1. How did the sharing layers experiment work? In an appendix (https://openreview.net/pdf?id=SylO2yStDr) we added an experiment on Wikitext-103 language modeling that combined LayerDrop with Weight Sharing. 
We shared chunks of 2 layers such that every other layer had shared weights. For example, if our network has layers 1 through 6, then layer 1 and 2 are shared, layer 3 and 4 are shared, and layer 5 and 6 are shared. - -2. LayerDrop hasn't been helping in my setting? During training time, LayerDrop can help regularize your network. This is most important if your network is already overfitting - if your network is underfitting, it is possible LayerDrop is adding too much regularization. We recommend using smaller values (such as 0.1 or 0.2) and also decreasing the quantity of standard dropout (for example, reduce by 0.1). - -3. Can you train a model without LayerDrop and finetune with LayerDrop (e.g. for BERT)? In our experiments, we did not see great performance. Models such as RoBERTa have trained for a long time in the pre-training setting, so only finetuning with LayerDrop for a few epochs on a downstream task such as MNLI does not achieve the robustness required for successful pruning. - - -## Having an issue or have a question? - -Please open an issue in this repository with the details of your question. Thanks! diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/strip_token_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/strip_token_dataset.py deleted file mode 100644 index cae39ba4d2f8106398eccd7eb0cf5c2194ec0db5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/strip_token_dataset.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class StripTokenDataset(BaseWrapperDataset): - def __init__(self, dataset, id_to_strip): - super().__init__(dataset) - self.id_to_strip = id_to_strip - - def __getitem__(self, index): - item = self.dataset[index] - while len(item) > 0 and item[-1] == self.id_to_strip: - item = item[:-1] - while len(item) > 0 and item[0] == self.id_to_strip: - item = item[1:] - return item diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_bw.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_bw.py deleted file mode 100644 index b0bc913651bd76667e25c214acb70f2bca19e185..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_bw.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
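# Context for the backward ("bw") scores produced by this script: in noisy-channel
# reranking, each candidate translation y of a source x is rescored by combining
# the direct score log P(y|x), the channel score log P(x|y) from the backward
# model scored here, and a language-model score log P(y). A rough sketch of that
# combination (illustrative only; the weight names and exact normalization used
# by the rerank scripts may differ):
#
#   def rerank_score(direct_lprob, channel_lprob, lm_lprob, w_ch=1.0, w_lm=1.0):
#       return direct_lprob + w_ch * channel_lprob + w_lm * lm_lprob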
- -import os -from contextlib import redirect_stdout - -from fairseq import options -from fairseq_cli import generate - -from examples.noisychannel import rerank_options, rerank_utils - - -def score_bw(args): - if args.backwards1: - scorer1_src = args.target_lang - scorer1_tgt = args.source_lang - else: - scorer1_src = args.source_lang - scorer1_tgt = args.target_lang - - if args.score_model2 is not None: - if args.backwards2: - scorer2_src = args.target_lang - scorer2_tgt = args.source_lang - else: - scorer2_src = args.source_lang - scorer2_tgt = args.target_lang - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - - if args.right_to_left1: - rerank_data1 = right_to_left_preprocessed_dir - elif args.backwards1: - rerank_data1 = backwards_preprocessed_dir - else: - rerank_data1 = left_to_right_preprocessed_dir - - gen_param = ["--batch-size", str(128), "--score-reference", "--gen-subset", "train"] - if not rerank1_is_gen and not os.path.isfile(score1_file): - print("STEP 4: score the translations for model 1") - - model_param1 = [ - "--path", - args.score_model1, - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - ] - gen_model1_param = [rerank_data1] + gen_param + model_param1 - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, gen_model1_param) - - with open(score1_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - if ( - args.score_model2 is not None - and not os.path.isfile(score2_file) - and not rerank2_is_gen - ): - print("STEP 4: score the translations for model 2") - - if args.right_to_left2: - rerank_data2 = right_to_left_preprocessed_dir - elif args.backwards2: - rerank_data2 = backwards_preprocessed_dir - else: - rerank_data2 = left_to_right_preprocessed_dir - - model_param2 = [ - "--path", - args.score_model2, - "--source-lang", - scorer2_src, - "--target-lang", - scorer2_tgt, - ] - gen_model2_param = [rerank_data2] + gen_param + model_param2 - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, gen_model2_param) - - with open(score2_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - score_bw(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/legacy/block_pair_dataset.py 
b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/legacy/block_pair_dataset.py deleted file mode 100644 index ba069b46052286c531b4f9706d96788732cd2ad2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/legacy/block_pair_dataset.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import numpy as np -import torch -from fairseq.data import FairseqDataset - - -class BlockPairDataset(FairseqDataset): - """Break a Dataset of tokens into sentence pair blocks for next sentence - prediction as well as masked language model. - - High-level logics are: - 1. break input tensor to tensor blocks - 2. pair the blocks with 50% next sentence and 50% random sentence - 3. return paired blocks as well as related segment labels - - Args: - dataset (~torch.utils.data.Dataset): dataset to break into blocks - sizes: array of sentence lengths - dictionary: dictionary for the task - block_size: maximum block size - break_mode: mode for breaking copurs into block pairs. currently we support - 2 modes - doc: respect document boundaries and each part of the pair should belong to on document - none: don't respect any boundary and cut tokens evenly - short_seq_prob: probability for generating shorter block pairs - doc_break_size: Size for empty line separating documents. Typically 1 if - the sentences have eos, 0 otherwise. - """ - - def __init__( - self, - dataset, - dictionary, - sizes, - block_size, - break_mode="doc", - short_seq_prob=0.1, - doc_break_size=1, - ): - super().__init__() - self.dataset = dataset - self.pad = dictionary.pad() - self.eos = dictionary.eos() - self.cls = dictionary.cls() - self.mask = dictionary.mask() - self.sep = dictionary.sep() - self.break_mode = break_mode - self.dictionary = dictionary - self.short_seq_prob = short_seq_prob - self.block_indices = [] - - assert len(dataset) == len(sizes) - - if break_mode == "doc": - cur_doc = [] - for sent_id, sz in enumerate(sizes): - assert doc_break_size == 0 or sz != 0, ( - "when doc_break_size is non-zero, we expect documents to be" - "separated by a blank line with a single eos." - ) - # empty line as document separator - if sz == doc_break_size: - if len(cur_doc) == 0: - continue - self.block_indices.append(cur_doc) - cur_doc = [] - else: - cur_doc.append(sent_id) - max_num_tokens = block_size - 3 # Account for [CLS], [SEP], [SEP] - self.sent_pairs = [] - self.sizes = [] - for doc_id, doc in enumerate(self.block_indices): - self._generate_sentence_pair(doc, doc_id, max_num_tokens, sizes) - elif break_mode is None or break_mode == "none": - # each block should have half of the block size since we are constructing block pair - sent_length = (block_size - 3) // 2 - total_len = sum(dataset.sizes) - length = math.ceil(total_len / sent_length) - - def block_at(i): - start = i * sent_length - end = min(start + sent_length, total_len) - return (start, end) - - sent_indices = np.array([block_at(i) for i in range(length)]) - sent_sizes = np.array([e - s for s, e in sent_indices]) - dataset_index = self._sent_to_dataset_index(sent_sizes) - - # pair sentences - self._pair_sentences(dataset_index) - else: - raise ValueError("Invalid break_mode: " + break_mode) - - def _pair_sentences(self, dataset_index): - """ - Give a list of evenly cut blocks/sentences, pair these sentences with 50% - consecutive sentences and 50% random sentences. 
- This is used for none break mode - """ - # pair sentences - for sent_id, sent in enumerate(dataset_index): - next_sent_label = ( - 1 if np.random.rand() > 0.5 and sent_id != len(dataset_index) - 1 else 0 - ) - if next_sent_label: - next_sent = dataset_index[sent_id + 1] - else: - next_sent = dataset_index[ - self._skip_sampling(len(dataset_index), [sent_id, sent_id + 1]) - ] - self.sent_pairs.append((sent, next_sent, next_sent_label)) - - # The current blocks don't include the special tokens but the - # sizes already account for this - self.sizes.append(3 + sent[3] + next_sent[3]) - - def _sent_to_dataset_index(self, sent_sizes): - """ - Build index mapping block indices to the underlying dataset indices - """ - dataset_index = [] - ds_idx, ds_remaining = -1, 0 - for to_consume in sent_sizes: - sent_size = to_consume - if ds_remaining == 0: - ds_idx += 1 - ds_remaining = sent_sizes[ds_idx] - start_ds_idx = ds_idx - start_offset = sent_sizes[ds_idx] - ds_remaining - while to_consume > ds_remaining: - to_consume -= ds_remaining - ds_idx += 1 - ds_remaining = sent_sizes[ds_idx] - ds_remaining -= to_consume - dataset_index.append( - ( - start_ds_idx, # starting index in dataset - start_offset, # starting offset within starting index - ds_idx, # ending index in dataset - sent_size, # sentence length - ) - ) - assert ds_remaining == 0 - assert ds_idx == len(self.dataset) - 1 - return dataset_index - - def _generate_sentence_pair(self, doc, doc_id, max_num_tokens, sizes): - """ - Go through a single document and genrate sentence paris from it - """ - current_chunk = [] - current_length = 0 - curr = 0 - # To provide more randomness, we decrease target seq length for parts of - # samples (10% by default). Note that max_num_tokens is the hard threshold - # for batching and will never be changed. 
- target_seq_length = max_num_tokens - if np.random.random() < self.short_seq_prob: - target_seq_length = np.random.randint(2, max_num_tokens) - # loop through all sentences in document - while curr < len(doc): - sent_id = doc[curr] - current_chunk.append(sent_id) - current_length = sum(sizes[current_chunk]) - # split chunk and generate pair when exceed target_seq_length or - # finish the loop - if curr == len(doc) - 1 or current_length >= target_seq_length: - # split the chunk into 2 parts - a_end = 1 - if len(current_chunk) > 2: - a_end = np.random.randint(1, len(current_chunk) - 1) - sent_a = current_chunk[:a_end] - len_a = sum(sizes[sent_a]) - # generate next sentence label, note that if there is only 1 sentence - # in current chunk, label is always 0 - next_sent_label = ( - 1 if np.random.rand() > 0.5 and len(current_chunk) != 1 else 0 - ) - if not next_sent_label: - # if next sentence label is 0, sample sent_b from a random doc - target_b_length = target_seq_length - len_a - rand_doc_id = self._skip_sampling(len(self.block_indices), [doc_id]) - random_doc = self.block_indices[rand_doc_id] - random_start = np.random.randint(0, len(random_doc)) - sent_b = [] - len_b = 0 - for j in range(random_start, len(random_doc)): - sent_b.append(random_doc[j]) - len_b = sum(sizes[sent_b]) - if len_b >= target_b_length: - break - # return the second part of the chunk since it's not used - num_unused_segments = len(current_chunk) - a_end - curr -= num_unused_segments - else: - # if next sentence label is 1, use the second part of chunk as sent_B - sent_b = current_chunk[a_end:] - len_b = sum(sizes[sent_b]) - # currently sent_a and sent_B may be longer than max_num_tokens, - # truncate them and return block idx and offsets for them - sent_a, sent_b = self._truncate_sentences( - sent_a, sent_b, max_num_tokens - ) - self.sent_pairs.append((sent_a, sent_b, next_sent_label)) - self.sizes.append(3 + sent_a[3] + sent_b[3]) - current_chunk = [] - curr += 1 - - def _skip_sampling(self, total, skip_ids): - """ - Generate a random integer which is not in skip_ids. Sample range is [0, total) - TODO: ids in skip_ids should be consecutive, we can extend it to more generic version later - """ - rand_id = np.random.randint(total - len(skip_ids)) - return rand_id if rand_id < min(skip_ids) else rand_id + len(skip_ids) - - def _truncate_sentences(self, sent_a, sent_b, max_num_tokens): - """ - Trancate a pair of sentence to limit total length under max_num_tokens - Logics: - 1. Truncate longer sentence - 2. 
Tokens to be truncated could be at the beginning or the end of the sentnce - Returns: - Truncated sentences represented by dataset idx - """ - len_a, len_b = sum(self.dataset.sizes[sent_a]), sum(self.dataset.sizes[sent_b]) - front_cut_a = front_cut_b = end_cut_a = end_cut_b = 0 - - while True: - total_length = ( - len_a + len_b - front_cut_a - front_cut_b - end_cut_a - end_cut_b - ) - if total_length <= max_num_tokens: - break - - if len_a - front_cut_a - end_cut_a > len_b - front_cut_b - end_cut_b: - if np.random.rand() < 0.5: - front_cut_a += 1 - else: - end_cut_a += 1 - else: - if np.random.rand() < 0.5: - front_cut_b += 1 - else: - end_cut_b += 1 - - # calculate ds indices as well as offsets and return - truncated_sent_a = self._cut_sentence(sent_a, front_cut_a, end_cut_a) - truncated_sent_b = self._cut_sentence(sent_b, front_cut_b, end_cut_b) - return truncated_sent_a, truncated_sent_b - - def _cut_sentence(self, sent, front_cut, end_cut): - """ - Cut a sentence based on the numbers of tokens to be cut from beginning and end - Represent the sentence as dataset idx and return - """ - start_ds_idx, end_ds_idx, offset = sent[0], sent[-1], 0 - target_len = sum(self.dataset.sizes[sent]) - front_cut - end_cut - while front_cut > 0: - if self.dataset.sizes[start_ds_idx] > front_cut: - offset += front_cut - break - else: - front_cut -= self.dataset.sizes[start_ds_idx] - start_ds_idx += 1 - while end_cut > 0: - if self.dataset.sizes[end_ds_idx] > end_cut: - break - else: - end_cut -= self.dataset.sizes[end_ds_idx] - end_ds_idx -= 1 - return start_ds_idx, offset, end_ds_idx, target_len - - def _fetch_block(self, start_ds_idx, offset, end_ds_idx, length): - """ - Fetch a block of tokens based on its dataset idx - """ - buffer = torch.cat( - [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)] - ) - s, e = offset, offset + length - return buffer[s:e] - - def __getitem__(self, index): - block1, block2, next_sent_label = self.sent_pairs[index] - block1 = self._fetch_block(*block1) - block2 = self._fetch_block(*block2) - return block1, block2, next_sent_label - - def __len__(self): - return len(self.sizes) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - prefetch_idx = set() - for index in indices: - for block1, block2, _ in [self.sent_pairs[index]]: - for ds_idx in range(block1[0], block1[2] + 1): - prefetch_idx.add(ds_idx) - for ds_idx in range(block2[0], block2[2] + 1): - prefetch_idx.add(ds_idx) - self.dataset.prefetch(prefetch_idx) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/numel_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/numel_dataset.py deleted file mode 100644 index ac86dfd2f1d89055de909656d61d6aca85523f00..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/numel_dataset.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . 
import BaseWrapperDataset - - -class NumelDataset(BaseWrapperDataset): - def __init__(self, dataset, reduce=False): - super().__init__(dataset) - self.reduce = reduce - - def __getitem__(self, index): - item = self.dataset[index] - if torch.is_tensor(item): - return torch.numel(item) - else: - return np.size(item) - - def __len__(self): - return len(self.dataset) - - def collater(self, samples): - if self.reduce: - return sum(samples) - else: - return torch.tensor(samples) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/roberta/model.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/roberta/model.py deleted file mode 100644 index 77a80ef72057219110b34678a38705549910edd3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/roberta/model.py +++ /dev/null @@ -1,225 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -RoBERTa: A Robustly Optimized BERT Pretraining Approach. -""" - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.model_parallel.models.transformer import ModelParallelTransformerEncoder -from fairseq.models import register_model, register_model_architecture -from fairseq.models.roberta import ( - roberta_base_architecture, - roberta_prenorm_architecture, - RobertaEncoder, - RobertaModel, -) -from fairseq.modules import LayerNorm - - -try: - from fairseq.model_parallel.megatron.mpu import ( - copy_to_model_parallel_region, - gather_from_model_parallel_region, - ColumnParallelLinear, - VocabParallelEmbedding, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - -logger = logging.getLogger(__name__) - - -@register_model("model_parallel_roberta") -class ModelParallelRobertaModel(RobertaModel): - def __init__(self, args, encoder): - super().__init__(args, encoder) - - self.classification_heads = nn.ModuleDict() - - @staticmethod - def add_args(parser): - RobertaModel.add_args(parser) - parser.add_argument( - "--no-final-layer-norm", - action="store_true", - help=( - "don't add final layernorm (only applicable when " - "--encoder-normalize-before=True" - ), - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present - base_architecture(args) - - task.source_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - task.target_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - - if not hasattr(args, "max_positions"): - args.max_positions = args.tokens_per_sample - - if getattr(args, "untie_weights_roberta", False): - raise NotImplementedError( - "--untie-weights-roberta is not supported in model parallel mode" - ) - - encoder = ModelParallelRobertaEncoder(args, task.source_dictionary) - return cls(args, encoder) - - def forward( - self, - src_tokens, - features_only=False, - return_all_hiddens=False, - classification_head_name=None, - **kwargs - ): - if classification_head_name is not None: - features_only = True - - x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs) - - if classification_head_name is not None: - x = self.classification_heads[classification_head_name](x) - return x, extra - - def register_classification_head( - self, name, num_classes=None, inner_dim=None, **kwargs - ): - 
"""Register a classification head.""" - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - "and inner_dim {} (prev: {})".format( - name, num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - self.classification_heads[name] = ModelParallelRobertaClassificationHead( - self.args.encoder_embed_dim, - inner_dim or self.args.encoder_embed_dim, - num_classes, - self.args.pooler_activation_fn, - self.args.pooler_dropout, - ) - - -class ModelParallelRobertaLMHead(nn.Module): - """Head for masked language modeling.""" - - def __init__(self, embed_dim, output_dim, activation_fn, weight=None): - super().__init__() - self.dense = ColumnParallelLinear(embed_dim, embed_dim, gather_output=True) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.layer_norm = LayerNorm(embed_dim) - - if weight is None: - weight = nn.Linear(embed_dim, output_dim, bias=False).weight - self.weight = weight - self.bias = nn.Parameter(torch.zeros(output_dim)) - - def forward(self, features, masked_tokens=None, **kwargs): - # Only project the unmasked tokens while training, - # saves both memory and computation - if masked_tokens is not None: - features = features[masked_tokens, :] - - x = self.dense(features) - x = self.activation_fn(x) - x = self.layer_norm(x) - - x = copy_to_model_parallel_region(x) - # project back to size of vocabulary with bias - x = F.linear(x, self.weight) - x = gather_from_model_parallel_region(x).contiguous() - x = x + self.bias - return x - - -class ModelParallelRobertaClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, input_dim, inner_dim, num_classes, activation_fn, pooler_dropout - ): - super().__init__() - self.dense = ColumnParallelLinear(input_dim, inner_dim, gather_output=True) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(inner_dim, num_classes) - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take token (equiv. 
to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -class ModelParallelRobertaEncoder(RobertaEncoder): - """RoBERTa encoder.""" - - def __init__(self, args, dictionary): - super().__init__(args, dictionary) - assert not self.args.untie_weights_roberta - - def build_embedding(self, vocab_size, embedding_dim, padding_idx): - return VocabParallelEmbedding(vocab_size, embedding_dim, padding_idx) - - def build_encoder(self, args, dictionary, embed_tokens): - return ModelParallelTransformerEncoder(args, dictionary, embed_tokens) - - def build_lm_head(self, embed_dim, output_dim, activation_fn, weight): - return ModelParallelRobertaLMHead(embed_dim, output_dim, activation_fn, weight) - - -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta") -def base_architecture(args): - args.no_final_layer_norm = getattr(args, "no_final_layer_norm", False) - # model parallel RoBERTa defaults to "Pre-LN" formulation - roberta_prenorm_architecture(args) - - -# earlier versions of model parallel RoBERTa removed the final layer norm -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_v1") -def model_parallel_roberta_v1_architecture(args): - args.no_final_layer_norm = getattr(args, "no_final_layer_norm", True) - base_architecture(args) - - -@register_model_architecture( - "model_parallel_roberta", "model_parallel_roberta_postnorm" -) -def model_parallel_roberta_postnorm_architecture(args): - # the original BERT/RoBERTa uses the "Post-LN" formulation - roberta_base_architecture(args) - - -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_base") -def model_parallel_roberta_base_architecture(args): - base_architecture(args) - - -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_large") -def model_parallel_roberta_large_architecture(args): - args.encoder_layers = getattr(args, "encoder_layers", 24) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - base_architecture(args) diff --git a/spaces/ORI-Muchim/BlueArchiveTTS/mel_processing.py b/spaces/ORI-Muchim/BlueArchiveTTS/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/BlueArchiveTTS/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) 
+ '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/OlaWod/FreeVC/speaker_encoder/model.py b/spaces/OlaWod/FreeVC/speaker_encoder/model.py deleted file mode 100644 index c022b663ee5c344c52041026bc88dc02734afa33..0000000000000000000000000000000000000000 --- a/spaces/OlaWod/FreeVC/speaker_encoder/model.py +++ /dev/null @@ -1,135 +0,0 @@ -from speaker_encoder.params_model import * -from speaker_encoder.params_data import * -from scipy.interpolate import interp1d -from sklearn.metrics import roc_curve -from torch.nn.utils import clip_grad_norm_ -from scipy.optimize import brentq -from torch import nn -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM(input_size=mel_n_channels, # 40 - hidden_size=model_hidden_size, # 256 - num_layers=model_num_layers, # 3 - batch_first=True).to(device) - self.linear = nn.Linear(in_features=model_hidden_size, - out_features=model_embedding_size).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter 
values) - self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device) - self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. - :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / torch.norm(centroids_incl, dim=2, keepdim=True) - - # Exclusive centroids (1 per utterance) - centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds) - centroids_excl /= (utterances_per_speaker - 1) - centroids_excl = centroids_excl.clone() / torch.norm(centroids_excl, dim=2, keepdim=True) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. - sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker, - speakers_per_batch).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=np.int) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=np.int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according the section 2.1 of GE2E. 
- - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker, - speakers_per_batch)) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=np.int)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.) - - return loss, eer \ No newline at end of file diff --git a/spaces/Omnibus/MusicGen/audiocraft/models/builders.py b/spaces/Omnibus/MusicGen/audiocraft/models/builders.py deleted file mode 100644 index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/MusicGen/audiocraft/models/builders.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp -import warnings - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, - MusicLMPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ConditioningProvider, - LUTConditioner, - T5Conditioner, - ConditionFuser, - ChromaStemConditioner, -) -from .. import quantization as qt -from ..utils.utils import dict_from_config - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model. 
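-    Only the 'encodec' compression model is handled here: the SEANet encoder/decoder and
-    the quantizer named in the config are built first, then wrapped in an EncodecModel
-    placed on cfg.device.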
- """ - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', None) - renorm = kwargs.pop('renorm') - if renormalize is None: - renormalize = renorm is not None - warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.") - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM. - """ - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling' - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f'Unexpected LM model {cfg.lm_model}') - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model. 
- """ - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, "conditioners") - cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg - conditioners: tp.Dict[str, BaseConditioner] = {} - with omegaconf.open_dict(cfg): - condition_provider_args = cfg.pop('args', {}) - for cond, cond_cfg in cfg.items(): - model_type = cond_cfg["model"] - model_args = cond_cfg[model_type] - if model_type == "t5": - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == "lut": - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == "chroma_stem": - model_args.pop('cache_path', None) - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - else: - raise ValueError(f"unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object. - """ - fuser_cfg = getattr(cfg, "fuser") - fuser_methods = ["sum", "cross", "prepend", "input_interpolate"] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object. - """ - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu'): - """Instantiate a debug compression model to be used for unit tests. - """ - seanet_kwargs = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': [10, 8, 16] # 25 Hz at 32kHz - } - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=25, sample_rate=32000, channels=1).to(device) - return compression_model.eval() - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests. 
- """ - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() diff --git a/spaces/Omnibus/TTS-voice-clone/app.py b/spaces/Omnibus/TTS-voice-clone/app.py deleted file mode 100644 index 80f1e600efb272967f408417030b4da7c18ae517..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/TTS-voice-clone/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import gradio as gr - -''' -from TTS.api import TTS -from bark import SAMPLE_RATE, generate_audio, preload_models -from scipy.io.wavfile import write as write_wav -#from IPython.display import Audio - -# download and load all models -#preload_models() - -def bark_try(): - # generate audio from text - text_prompt = """ - Hello, my name is Suno. And, uh — and I like pizza. [laughs] - But I also have other interests such as playing tic tac toe. - """ - audio_array = generate_audio(text_prompt) - - # save audio to disk - write_wav("bark_generation.wav", SAMPLE_RATE, audio_array) - - # play text in notebook - #Audio(audio_array, rate=SAMPLE_RATE) - return ("bark_generation.wav") -def try1(): - #model_name1 = TTS.list_models() - #print (f"model1 Name: {model_name1}") - #model_name = model_name1[0] - #print (f"model2 Name: {model_name}") - # Init TTS - tts = TTS("tts_models/multilingual/multi-dataset/bark", gpu=False) - # Run TTS - # Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language - # Text to speech with a numpy output - #wav = tts.tts("This is a test! 
This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0]) - # Text to speech to a file - tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav") - out = "output.wav" - return out - -#def try2(): - #tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=False) - #tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav") - #tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr", file_path="output.wav") - #tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt", file_path="output.wav") - #out = "output.wav" - #return out -''' - -model = gr.Interface.load("models/suno/bark") -def bark_try_2(): - out = model("this is some text") - return out - -with gr.Blocks() as app: - out1 = gr.Audio() - btn1 = gr.Button() - btn2 = gr.Button() - - btn1.click(bark_try_2,None,out1) - #btn2.click(try1,None,out1) - -app.launch() \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/string-fun.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/string-fun.go deleted file mode 100644 index aff9766bd38d2963746fea47f850e0c0201dad87..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/string-fun.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/candle-llama2/worker.js b/spaces/PeepDaSlan9/candle-llama2/worker.js deleted file mode 100644 index a81a4853a3a9a89dfa3e2df3826507e019ba31a0..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/candle-llama2/worker.js +++ /dev/null @@ -1,477 +0,0 @@ -let wasm_bindgen; -(function() { - const __exports = {}; - let script_src; - if (typeof document !== 'undefined' && document.currentScript !== null) { - script_src = new URL(document.currentScript.src, location.href).toString(); - } - let wasm = undefined; - - const heap = new Array(128).fill(undefined); - - heap.push(undefined, null, true, false); - -function getObject(idx) { return heap[idx]; } - -let heap_next = heap.length; - -function dropObject(idx) { - if (idx < 132) return; - heap[idx] = heap_next; - heap_next = idx; -} - -function takeObject(idx) { - const ret = getObject(idx); - dropObject(idx); - return ret; -} - -function addHeapObject(obj) { - if (heap_next === heap.length) heap.push(heap.length + 1); - const idx = heap_next; - heap_next = heap[idx]; - - heap[idx] = obj; - return idx; -} - -const cachedTextDecoder = (typeof TextDecoder !== 'undefined' ? 
new TextDecoder('utf-8', { ignoreBOM: true, fatal: true }) : { decode: () => { throw Error('TextDecoder not available') } } ); - -if (typeof TextDecoder !== 'undefined') { cachedTextDecoder.decode(); }; - -let cachedUint8Memory0 = null; - -function getUint8Memory0() { - if (cachedUint8Memory0 === null || cachedUint8Memory0.byteLength === 0) { - cachedUint8Memory0 = new Uint8Array(wasm.memory.buffer); - } - return cachedUint8Memory0; -} - -function getStringFromWasm0(ptr, len) { - ptr = ptr >>> 0; - return cachedTextDecoder.decode(getUint8Memory0().subarray(ptr, ptr + len)); -} - -function debugString(val) { - // primitive types - const type = typeof val; - if (type == 'number' || type == 'boolean' || val == null) { - return `${val}`; - } - if (type == 'string') { - return `"${val}"`; - } - if (type == 'symbol') { - const description = val.description; - if (description == null) { - return 'Symbol'; - } else { - return `Symbol(${description})`; - } - } - if (type == 'function') { - const name = val.name; - if (typeof name == 'string' && name.length > 0) { - return `Function(${name})`; - } else { - return 'Function'; - } - } - // objects - if (Array.isArray(val)) { - const length = val.length; - let debug = '['; - if (length > 0) { - debug += debugString(val[0]); - } - for(let i = 1; i < length; i++) { - debug += ', ' + debugString(val[i]); - } - debug += ']'; - return debug; - } - // Test for built-in - const builtInMatches = /\[object ([^\]]+)\]/.exec(toString.call(val)); - let className; - if (builtInMatches.length > 1) { - className = builtInMatches[1]; - } else { - // Failed to match the standard '[object ClassName]' - return toString.call(val); - } - if (className == 'Object') { - // we're a user defined class or Object - // JSON.stringify avoids problems with cycles, and is generally much - // easier than looping through ownProperties of `val`. - try { - return 'Object(' + JSON.stringify(val) + ')'; - } catch (_) { - return 'Object'; - } - } - // errors - if (val instanceof Error) { - return `${val.name}: ${val.message}\n${val.stack}`; - } - // TODO we could test for more things here, like `Set`s and `Map`s. - return className; -} - -let WASM_VECTOR_LEN = 0; - -const cachedTextEncoder = (typeof TextEncoder !== 'undefined' ? new TextEncoder('utf-8') : { encode: () => { throw Error('TextEncoder not available') } } ); - -const encodeString = (typeof cachedTextEncoder.encodeInto === 'function' - ? 
function (arg, view) { - return cachedTextEncoder.encodeInto(arg, view); -} - : function (arg, view) { - const buf = cachedTextEncoder.encode(arg); - view.set(buf); - return { - read: arg.length, - written: buf.length - }; -}); - -function passStringToWasm0(arg, malloc, realloc) { - - if (realloc === undefined) { - const buf = cachedTextEncoder.encode(arg); - const ptr = malloc(buf.length, 1) >>> 0; - getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf); - WASM_VECTOR_LEN = buf.length; - return ptr; - } - - let len = arg.length; - let ptr = malloc(len, 1) >>> 0; - - const mem = getUint8Memory0(); - - let offset = 0; - - for (; offset < len; offset++) { - const code = arg.charCodeAt(offset); - if (code > 0x7F) break; - mem[ptr + offset] = code; - } - - if (offset !== len) { - if (offset !== 0) { - arg = arg.slice(offset); - } - ptr = realloc(ptr, len, len = offset + arg.length * 3, 1) >>> 0; - const view = getUint8Memory0().subarray(ptr + offset, ptr + len); - const ret = encodeString(arg, view); - - offset += ret.written; - } - - WASM_VECTOR_LEN = offset; - return ptr; -} - -let cachedInt32Memory0 = null; - -function getInt32Memory0() { - if (cachedInt32Memory0 === null || cachedInt32Memory0.byteLength === 0) { - cachedInt32Memory0 = new Int32Array(wasm.memory.buffer); - } - return cachedInt32Memory0; -} - -function makeClosure(arg0, arg1, dtor, f) { - const state = { a: arg0, b: arg1, cnt: 1, dtor }; - const real = (...args) => { - // First up with a closure we increment the internal reference - // count. This ensures that the Rust closure environment won't - // be deallocated while we're invoking it. - state.cnt++; - try { - return f(state.a, state.b, ...args); - } finally { - if (--state.cnt === 0) { - wasm.__wbindgen_export_2.get(state.dtor)(state.a, state.b); - state.a = 0; - - } - } - }; - real.original = state; - - return real; -} -function __wbg_adapter_22(arg0, arg1, arg2) { - wasm._dyn_core__ops__function__Fn__A____Output___R_as_wasm_bindgen__closure__WasmClosure___describe__invoke__h394c66cd6bc0689a(arg0, arg1, addHeapObject(arg2)); -} - -function handleError(f, args) { - try { - return f.apply(this, args); - } catch (e) { - wasm.__wbindgen_exn_store(addHeapObject(e)); - } -} - -async function __wbg_load(module, imports) { - if (typeof Response === 'function' && module instanceof Response) { - if (typeof WebAssembly.instantiateStreaming === 'function') { - try { - return await WebAssembly.instantiateStreaming(module, imports); - - } catch (e) { - if (module.headers.get('Content-Type') != 'application/wasm') { - console.warn("`WebAssembly.instantiateStreaming` failed because your server does not serve wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. 
Original error:\n", e); - - } else { - throw e; - } - } - } - - const bytes = await module.arrayBuffer(); - return await WebAssembly.instantiate(bytes, imports); - - } else { - const instance = await WebAssembly.instantiate(module, imports); - - if (instance instanceof WebAssembly.Instance) { - return { instance, module }; - - } else { - return instance; - } - } -} - -function __wbg_get_imports() { - const imports = {}; - imports.wbg = {}; - imports.wbg.__wbindgen_object_drop_ref = function(arg0) { - takeObject(arg0); - }; - imports.wbg.__wbindgen_object_clone_ref = function(arg0) { - const ret = getObject(arg0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_log_3af90b48c052f90b = function(arg0, arg1) { - console.log(getStringFromWasm0(arg0, arg1)); - }; - imports.wbg.__wbindgen_string_new = function(arg0, arg1) { - const ret = getStringFromWasm0(arg0, arg1); - return addHeapObject(ret); - }; - imports.wbg.__wbg_getRandomValues_37fa2ca9e4e07fab = function() { return handleError(function (arg0, arg1) { - getObject(arg0).getRandomValues(getObject(arg1)); - }, arguments) }; - imports.wbg.__wbg_randomFillSync_dc1e9a60c158336d = function() { return handleError(function (arg0, arg1) { - getObject(arg0).randomFillSync(takeObject(arg1)); - }, arguments) }; - imports.wbg.__wbg_crypto_c48a774b022d20ac = function(arg0) { - const ret = getObject(arg0).crypto; - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_is_object = function(arg0) { - const val = getObject(arg0); - const ret = typeof(val) === 'object' && val !== null; - return ret; - }; - imports.wbg.__wbg_process_298734cf255a885d = function(arg0) { - const ret = getObject(arg0).process; - return addHeapObject(ret); - }; - imports.wbg.__wbg_versions_e2e78e134e3e5d01 = function(arg0) { - const ret = getObject(arg0).versions; - return addHeapObject(ret); - }; - imports.wbg.__wbg_node_1cd7a5d853dbea79 = function(arg0) { - const ret = getObject(arg0).node; - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_is_string = function(arg0) { - const ret = typeof(getObject(arg0)) === 'string'; - return ret; - }; - imports.wbg.__wbg_msCrypto_bcb970640f50a1e8 = function(arg0) { - const ret = getObject(arg0).msCrypto; - return addHeapObject(ret); - }; - imports.wbg.__wbg_require_8f08ceecec0f4fee = function() { return handleError(function () { - const ret = module.require; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbindgen_is_function = function(arg0) { - const ret = typeof(getObject(arg0)) === 'function'; - return ret; - }; - imports.wbg.__wbg_new_abda76e883ba8a5f = function() { - const ret = new Error(); - return addHeapObject(ret); - }; - imports.wbg.__wbg_stack_658279fe44541cf6 = function(arg0, arg1) { - const ret = getObject(arg1).stack; - const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - const len1 = WASM_VECTOR_LEN; - getInt32Memory0()[arg0 / 4 + 1] = len1; - getInt32Memory0()[arg0 / 4 + 0] = ptr1; - }; - imports.wbg.__wbg_error_f851667af71bcfc6 = function(arg0, arg1) { - let deferred0_0; - let deferred0_1; - try { - deferred0_0 = arg0; - deferred0_1 = arg1; - console.error(getStringFromWasm0(arg0, arg1)); - } finally { - wasm.__wbindgen_free(deferred0_0, deferred0_1, 1); - } - }; - imports.wbg.__wbg_setonmessage_731266b6f3ab0860 = function(arg0, arg1) { - getObject(arg0).onmessage = getObject(arg1); - }; - imports.wbg.__wbg_close_889c0c4e86f1403e = function(arg0) { - getObject(arg0).close(); - }; - imports.wbg.__wbg_postMessage_2f0b8369b84c3c1e = function() { return 
handleError(function (arg0, arg1) { - getObject(arg0).postMessage(getObject(arg1)); - }, arguments) }; - imports.wbg.__wbg_data_ab99ae4a2e1e8bc9 = function(arg0) { - const ret = getObject(arg0).data; - return addHeapObject(ret); - }; - imports.wbg.__wbg_newnoargs_581967eacc0e2604 = function(arg0, arg1) { - const ret = new Function(getStringFromWasm0(arg0, arg1)); - return addHeapObject(ret); - }; - imports.wbg.__wbg_call_cb65541d95d71282 = function() { return handleError(function (arg0, arg1) { - const ret = getObject(arg0).call(getObject(arg1)); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_self_1ff1d729e9aae938 = function() { return handleError(function () { - const ret = self.self; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_window_5f4faef6c12b79ec = function() { return handleError(function () { - const ret = window.window; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_globalThis_1d39714405582d3c = function() { return handleError(function () { - const ret = globalThis.globalThis; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_global_651f05c6a0944d1c = function() { return handleError(function () { - const ret = global.global; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbindgen_is_undefined = function(arg0) { - const ret = getObject(arg0) === undefined; - return ret; - }; - imports.wbg.__wbg_call_01734de55d61e11d = function() { return handleError(function (arg0, arg1, arg2) { - const ret = getObject(arg0).call(getObject(arg1), getObject(arg2)); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_buffer_085ec1f694018c4f = function(arg0) { - const ret = getObject(arg0).buffer; - return addHeapObject(ret); - }; - imports.wbg.__wbg_newwithbyteoffsetandlength_6da8e527659b86aa = function(arg0, arg1, arg2) { - const ret = new Uint8Array(getObject(arg0), arg1 >>> 0, arg2 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_new_8125e318e6245eed = function(arg0) { - const ret = new Uint8Array(getObject(arg0)); - return addHeapObject(ret); - }; - imports.wbg.__wbg_set_5cf90238115182c3 = function(arg0, arg1, arg2) { - getObject(arg0).set(getObject(arg1), arg2 >>> 0); - }; - imports.wbg.__wbg_length_72e2208bbc0efc61 = function(arg0) { - const ret = getObject(arg0).length; - return ret; - }; - imports.wbg.__wbg_newwithlength_e5d69174d6984cd7 = function(arg0) { - const ret = new Uint8Array(arg0 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_subarray_13db269f57aa838d = function(arg0, arg1, arg2) { - const ret = getObject(arg0).subarray(arg1 >>> 0, arg2 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_debug_string = function(arg0, arg1) { - const ret = debugString(getObject(arg1)); - const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - const len1 = WASM_VECTOR_LEN; - getInt32Memory0()[arg0 / 4 + 1] = len1; - getInt32Memory0()[arg0 / 4 + 0] = ptr1; - }; - imports.wbg.__wbindgen_throw = function(arg0, arg1) { - throw new Error(getStringFromWasm0(arg0, arg1)); - }; - imports.wbg.__wbindgen_memory = function() { - const ret = wasm.memory; - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_closure_wrapper91 = function(arg0, arg1, arg2) { - const ret = makeClosure(arg0, arg1, 30, __wbg_adapter_22); - return addHeapObject(ret); - }; - - return imports; -} - -function __wbg_init_memory(imports, maybe_memory) { - -} - -function __wbg_finalize_init(instance, module) { - wasm = instance.exports; - 
__wbg_init.__wbindgen_wasm_module = module; - cachedInt32Memory0 = null; - cachedUint8Memory0 = null; - - wasm.__wbindgen_start(); - return wasm; -} - -function initSync(module) { - if (wasm !== undefined) return wasm; - - const imports = __wbg_get_imports(); - - __wbg_init_memory(imports); - - if (!(module instanceof WebAssembly.Module)) { - module = new WebAssembly.Module(module); - } - - const instance = new WebAssembly.Instance(module, imports); - - return __wbg_finalize_init(instance, module); -} - -async function __wbg_init(input) { - if (wasm !== undefined) return wasm; - - if (typeof input === 'undefined' && script_src !== 'undefined') { - input = script_src.replace(/\.js$/, '_bg.wasm'); - } - const imports = __wbg_get_imports(); - - if (typeof input === 'string' || (typeof Request === 'function' && input instanceof Request) || (typeof URL === 'function' && input instanceof URL)) { - input = fetch(input); - } - - __wbg_init_memory(imports); - - const { instance, module } = await __wbg_load(await input, imports); - - return __wbg_finalize_init(instance, module); -} - -wasm_bindgen = Object.assign(__wbg_init, { initSync }, __exports); - -})(); diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py deleted file mode 100644 index 42c0790c98616bb69621deed55547fc04c7392ef..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,198 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=-100): - """The wrapper function for :func:`F.cross_entropy`""" - # class_weight is a manual rescaling weight given to each class. - # If given, has to be a Tensor of size C element-wise losses - loss = F.cross_entropy( - pred, - label, - weight=class_weight, - reduction='none', - ignore_index=ignore_index) - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, target_shape, ignore_index): - """Expand onehot labels to match the size of prediction.""" - bin_labels = labels.new_zeros(target_shape) - valid_mask = (labels >= 0) & (labels != ignore_index) - inds = torch.nonzero(valid_mask, as_tuple=True) - - if inds[0].numel() > 0: - if labels.dim() == 3: - bin_labels[inds[0], labels[valid_mask], inds[1], inds[2]] = 1 - else: - bin_labels[inds[0], labels[valid_mask]] = 1 - - valid_mask = valid_mask.unsqueeze(1).expand(target_shape).float() - if label_weights is None: - bin_label_weights = valid_mask - else: - bin_label_weights = label_weights.unsqueeze(1).expand(target_shape) - bin_label_weights *= valid_mask - - return bin_labels, bin_label_weights - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=255): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1). - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. 
- reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. Default: 255 - - Returns: - torch.Tensor: The calculated loss - """ - if pred.dim() != label.dim(): - assert (pred.dim() == 2 and label.dim() == 1) or ( - pred.dim() == 4 and label.dim() == 3), \ - 'Only pred shape [N, C], label shape [N] or pred shape [N, C, ' \ - 'H, W], label shape [N, H, W] are supported' - label, weight = _expand_onehot_labels(label, weight, pred.shape, - ignore_index) - - # weighted element-wise losses - if weight is not None: - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=None): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask' - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (None): Placeholder, to be consistent with other loss. - Default: None. - - Returns: - torch.Tensor: The calculated loss - """ - assert ignore_index is None, 'BCE loss does not support ignore_index' - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. 
- """ - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - loss_weight=1.0): - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = get_class_weight(class_weight) - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function.""" - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor(self.class_weight) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls diff --git a/spaces/Pranjal2041/SemSup-XC/semsup.py b/spaces/Pranjal2041/SemSup-XC/semsup.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/ENCODEC.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/ENCODEC.md deleted file mode 100644 index efc2bcc7ec50190b907c887b920b70fd799c6953..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/ENCODEC.md +++ /dev/null @@ -1,179 +0,0 @@ -# EnCodec: High Fidelity Neural Audio Compression - -AudioCraft provides the training code for EnCodec, a state-of-the-art deep learning -based audio codec supporting both mono stereo audio, presented in the -[High Fidelity Neural Audio Compression][arxiv] paper. -Check out our [sample page][encodec_samples]. - -## Original EnCodec models - -The EnCodec models presented in High Fidelity Neural Audio Compression can be accessed -and used with the [EnCodec repository](https://github.com/facebookresearch/encodec). - -**Note**: We do not guarantee compatibility between the AudioCraft and EnCodec codebases -and released checkpoints at this stage. - - -## Installation - -Please follow the AudioCraft installation instructions from the [README](../README.md). - - -## Training - -The [CompressionSolver](../audiocraft/solvers/compression.py) implements the audio reconstruction -task to train an EnCodec model. Specifically, it trains an encoder-decoder with a quantization -bottleneck - a SEANet encoder-decoder with Residual Vector Quantization bottleneck for EnCodec - -using a combination of objective and perceptual losses in the forms of discriminators. - -The default configuration matches a causal EnCodec training with at a single bandwidth. - -### Example configuration and grids - -We provide sample configuration and grids for training EnCodec models. - -The compression configuration are defined in -[config/solver/compression](../config/solver/compression). - -The example grids are available at -[audiocraft/grids/compression](../audiocraft/grids/compression). 
- -```shell -# base causal encodec on monophonic audio sampled at 24 khz -dora grid compression.encodec_base_24khz -# encodec model used for MusicGen on monophonic audio sampled at 32 khz -dora grid compression.encodec_musicgen_32khz -``` - -### Training and valid stages - -The model is trained using a combination of objective and perceptual losses. -More specifically, EnCodec is trained with the MS-STFT discriminator along with -objective losses through the use of a loss balancer to effectively weight -the different losses, in an intuitive manner. - -### Evaluation stage - -Evaluations metrics for audio generation: -* SI-SNR: Scale-Invariant Signal-to-Noise Ratio. -* ViSQOL: Virtual Speech Quality Objective Listener. - -Note: Path to the ViSQOL binary (compiled with bazel) needs to be provided in -order to run the ViSQOL metric on the reference and degraded signals. -The metric is disabled by default. -Please refer to the [metrics documentation](../METRICS.md) to learn more. - -### Generation stage - -The generation stage consists in generating the reconstructed audio from samples -with the current model. The number of samples generated and the batch size used are -controlled by the `dataset.generate` configuration. The output path and audio formats -are defined in the generate stage configuration. - -```shell -# generate samples every 5 epoch -dora run solver=compression/encodec_base_24khz generate.every=5 -# run with a different dset -dora run solver=compression/encodec_base_24khz generate.path= -# limit the number of samples or use a different batch size -dora grid solver=compression/encodec_base_24khz dataset.generate.num_samples=10 dataset.generate.batch_size=4 -``` - -### Playing with the model - -Once you have a model trained, it is possible to get the entire solver, or just -the trained model with the following functions: - -```python -from audiocraft.solvers import CompressionSolver - -# If you trained a custom model with signature SIG. -model = CompressionSolver.model_from_checkpoint('//sig/SIG') -# If you want to get one of the pretrained models with the `//pretrained/` prefix. -model = CompressionSolver.model_from_checkpoint('//pretrained/facebook/encodec_32khz') -# Or load from a custom checkpoint path -model = CompressionSolver.model_from_checkpoint('/my_checkpoints/foo/bar/checkpoint.th') - - -# If you only want to use a pretrained model, you can also directly get it -# from the CompressionModel base model class. -from audiocraft.models import CompressionModel - -# Here do not put the `//pretrained/` prefix! -model = CompressionModel.get_pretrained('facebook/encodec_32khz') -model = CompressionModel.get_pretrained('dac_44khz') - -# Finally, you can also retrieve the full Solver object, with its dataloader etc. -from audiocraft import train -from pathlib import Path -import logging -import os -import sys - -# uncomment the following line if you want some detailed logs when loading a Solver. -logging.basicConfig(stream=sys.stderr, level=logging.INFO) -# You must always run the following function from the root directory. -os.chdir(Path(train.__file__).parent.parent) - - -# You can also get the full solver (only for your own experiments). -# You can provide some overrides to the parameters to make things more convenient. 
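-# The overrides form a nested dict that mirrors the Hydra config tree,
-# e.g. {'dataset': {'batch_size': 8}} overrides dataset.batch_size.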
-solver = train.get_solver_from_sig('SIG', {'device': 'cpu', 'dataset': {'batch_size': 8}}) -solver.model -solver.dataloaders -``` - -### Importing / Exporting models - -At the moment we do not have a definitive workflow for exporting EnCodec models, for -instance to Hugging Face (HF). We are working on supporting automatic convertion between -AudioCraft and Hugging Face implementations. - -We still have some support for fine tuning an EnCodec model coming from HF in AudioCraft, -using for instance `continue_from=//pretrained/facebook/encodec_32k`. - -An AudioCraft checkpoint can be exported in a more compact format (excluding the optimizer etc.) -using `audiocraft.utils.export.export_encodec`. For instance, you could run - -```python -from audiocraft.utils import export -from audiocraft import train -xp = train.main.get_xp_from_sig('SIG') -export.export_encodec( - xp.folder / 'checkpoint.th', - '/checkpoints/my_audio_lm/compression_state_dict.bin') - - -from audiocraft.models import CompressionModel -model = CompressionModel.get_pretrained('/checkpoints/my_audio_lm/compression_state_dict.bin') - -from audiocraft.solvers import CompressionSolver -# The two are strictly equivalent, but this function supports also loading from non already exported models. -model = CompressionSolver.model_from_checkpoint('//pretrained//checkpoints/my_audio_lm/compression_state_dict.bin') -``` - -We will see then how to use this model as a tokenizer for MusicGen/Audio gen in the -[MusicGen documentation](./MUSICGEN.md). - -### Learn more - -Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md). - - -## Citation -``` -@article{defossez2022highfi, - title={High Fidelity Neural Audio Compression}, - author={Défossez, Alexandre and Copet, Jade and Synnaeve, Gabriel and Adi, Yossi}, - journal={arXiv preprint arXiv:2210.13438}, - year={2022} -} -``` - - -## License - -See license information in the [README](../README.md). - -[arxiv]: https://arxiv.org/abs/2210.13438 -[encodec_samples]: https://ai.honu.io/papers/encodec/samples.html diff --git a/spaces/QinBingFeng/dalle-mini/html2canvas.js b/spaces/QinBingFeng/dalle-mini/html2canvas.js deleted file mode 100644 index 96e2dc5707b1a584ff7b3b583aea7c6c18d4ea76..0000000000000000000000000000000000000000 --- a/spaces/QinBingFeng/dalle-mini/html2canvas.js +++ /dev/null @@ -1,7756 +0,0 @@ -/*! - * html2canvas 1.4.1 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory()); -}(this, (function () { 'use strict'; - - /*! ***************************************************************************** - Copyright (c) Microsoft Corporation. - - Permission to use, copy, modify, and/or distribute this software for any - purpose with or without fee is hereby granted. - - THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH - REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY - AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, - INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM - LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR - OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR - PERFORMANCE OF THIS SOFTWARE. - ***************************************************************************** */ - /* global Reflect, Promise */ - - var extendStatics = function(d, b) { - extendStatics = Object.setPrototypeOf || - ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || - function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; }; - return extendStatics(d, b); - }; - - function __extends(d, b) { - if (typeof b !== "function" && b !== null) - throw new TypeError("Class extends value " + String(b) + " is not a constructor or null"); - extendStatics(d, b); - function __() { this.constructor = d; } - d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); - } - - var __assign = function() { - __assign = Object.assign || function __assign(t) { - for (var s, i = 1, n = arguments.length; i < n; i++) { - s = arguments[i]; - for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; - } - return t; - }; - return __assign.apply(this, arguments); - }; - - function __awaiter(thisArg, _arguments, P, generator) { - function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); - } - - function __generator(thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? 
op[1] : void 0, done: true }; - } - } - - function __spreadArray(to, from, pack) { - if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) { - if (ar || !(i in from)) { - if (!ar) ar = Array.prototype.slice.call(from, 0, i); - ar[i] = from[i]; - } - } - return to.concat(ar || from); - } - - var Bounds = /** @class */ (function () { - function Bounds(left, top, width, height) { - this.left = left; - this.top = top; - this.width = width; - this.height = height; - } - Bounds.prototype.add = function (x, y, w, h) { - return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h); - }; - Bounds.fromClientRect = function (context, clientRect) { - return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height); - }; - Bounds.fromDOMRectList = function (context, domRectList) { - var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; }); - return domRect - ? new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height) - : Bounds.EMPTY; - }; - Bounds.EMPTY = new Bounds(0, 0, 0, 0); - return Bounds; - }()); - var parseBounds = function (context, node) { - return Bounds.fromClientRect(context, node.getBoundingClientRect()); - }; - var parseDocumentSize = function (document) { - var body = document.body; - var documentElement = document.documentElement; - if (!body || !documentElement) { - throw new Error("Unable to get document size"); - } - var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth)); - var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight)); - return new Bounds(0, 0, width, height); - }; - - /* - * css-line-break 2.1.0 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var toCodePoints$1 = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint$1 = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. 
- var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$2 = 0; i$2 < chars$2.length; i$2++) { - lookup$2[chars$2.charCodeAt(i$2)] = i$2; - } - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) { - lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1; - } - var decode$1 = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1$1[base64.charCodeAt(i)]; - encoded2 = lookup$1$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2$1 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1$1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. - */ - var UTRIE2_INDEX_SHIFT$1 = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1; - /** Count the lengths of both BMP pieces. 
2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1; - var slice16$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64$1 = function (base64, _byteLength) { - var buffer = decode$1(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16$1(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? slice16$1(view16, (headerLength + view32[4]) / 2) - : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie$1 = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2$1]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. 
A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$3 = 0; i$3 < chars$3.length; i$3++) { - lookup$3[chars$3.charCodeAt(i$3)] = i$3; - } - - var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBy
wHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIg
giCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQowADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAw
ADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQ
ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4AHgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKw
ArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcA
FwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsA
SwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgAXABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwB
PAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAeAB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAF
cAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAV
QBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBXAFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAr
ACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAA0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFA
AUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUA
BQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArA
CsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAFAAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUA
IAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA=='; - - var LETTER_NUMBER_MODIFIER = 50; - // Non-tailorable Line Breaking Classes - var BK = 1; // Cause a line break (after) - var CR$1 = 2; // Cause a line break (after), except between CR and LF - var LF$1 = 3; // Cause a line break (after) - var CM = 4; // Prohibit a line break between the character and the preceding character - var NL = 5; // Cause a line break (after) - var WJ = 7; // Prohibit line breaks before and after - var ZW = 8; // Provide a break opportunity - var GL = 9; // Prohibit line breaks before and after - var SP = 10; // Enable indirect line breaks - var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences - // Break Opportunities - var B2 = 12; // Provide a line break opportunity before and after the character - var BA = 13; // Generally provide a line break opportunity after the character - var BB = 14; // Generally provide a line break opportunity before the character - var HY = 15; // Provide a line break opportunity after the character, except in numeric context - var CB = 16; // Provide a line break opportunity contingent on additional information - // Characters Prohibiting Certain Breaks - var CL = 17; // Prohibit line breaks before - var CP = 18; // Prohibit line breaks before - var EX = 19; // Prohibit line breaks before - var IN = 20; // Allow only indirect line breaks between pairs - var NS = 21; // Allow only indirect line breaks before - var OP = 22; // Prohibit line breaks after - var QU = 23; // Act like they are both opening and closing - // Numeric Context - var IS = 24; // Prevent breaks after any and before numeric - var NU = 25; // Form numeric expressions for line breaking purposes - var PO = 26; // Do not break following a numeric expression - var PR = 27; // Do not break in front of a numeric expression - var SY = 28; // Prevent a break before; and allow a break after - // Other Characters - var AI = 29; // Act like AL when the resolvedEAW is N; otherwise; act as ID - var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters - var CJ = 31; // Treat as NS or ID for strict or normal breaking. 
- var EB = 32; // Do not break from following Emoji Modifier - var EM = 33; // Do not break from preceding Emoji Base - var H2 = 34; // Form Korean syllable blocks - var H3 = 35; // Form Korean syllable blocks - var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic - var ID = 37; // Break before or after; except in some numeric context - var JL = 38; // Form Korean syllable blocks - var JV = 39; // Form Korean syllable blocks - var JT = 40; // Form Korean syllable blocks - var RI$1 = 41; // Keep pairs together. For pairs; break before and after other classes - var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis - var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions - var ea_OP = [0x2329, 0xff08]; - var BREAK_MANDATORY = '!'; - var BREAK_NOT_ALLOWED$1 = '×'; - var BREAK_ALLOWED$1 = '÷'; - var UnicodeTrie$1 = createTrieFromBase64$1(base64$1); - var ALPHABETICS = [AL, HL]; - var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL]; - var SPACE$1 = [SP, ZW]; - var PREFIX_POSTFIX = [PR, PO]; - var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1); - var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3]; - var HYPHEN = [HY, BA]; - var codePointsToCharacterClasses = function (codePoints, lineBreak) { - if (lineBreak === void 0) { lineBreak = 'strict'; } - var types = []; - var indices = []; - var categories = []; - codePoints.forEach(function (codePoint, index) { - var classType = UnicodeTrie$1.get(codePoint); - if (classType > LETTER_NUMBER_MODIFIER) { - categories.push(true); - classType -= LETTER_NUMBER_MODIFIER; - } - else { - categories.push(false); - } - if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) { - // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0 - if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) { - indices.push(index); - return types.push(CB); - } - } - if (classType === CM || classType === ZWJ$1) { - // LB10 Treat any remaining combining mark or ZWJ as AL. - if (index === 0) { - indices.push(index); - return types.push(AL); - } - // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of - // the base character in all of the following rules. Treat ZWJ as if it were CM. - var prev = types[index - 1]; - if (LINE_BREAKS.indexOf(prev) === -1) { - indices.push(indices[index - 1]); - return types.push(prev); - } - indices.push(index); - return types.push(AL); - } - indices.push(index); - if (classType === CJ) { - return types.push(lineBreak === 'strict' ? NS : ID); - } - if (classType === SA) { - return types.push(AL); - } - if (classType === AI) { - return types.push(AL); - } - // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL - // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised - // to take into account the actual line breaking properties for these characters. - if (classType === XX) { - if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) { - return types.push(ID); - } - else { - return types.push(AL); - } - } - types.push(classType); - }); - return [indices, types, categories]; - }; - var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) { - var current = classTypes[currentIndex]; - if (Array.isArray(a) ? 
a.indexOf(current) !== -1 : a === current) { - var i = currentIndex; - while (i <= classTypes.length) { - i++; - var next = classTypes[i]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (current === SP) { - var i = currentIndex; - while (i > 0) { - i--; - var prev = classTypes[i]; - if (Array.isArray(a) ? a.indexOf(prev) !== -1 : a === prev) { - var n = currentIndex; - while (n <= classTypes.length) { - n++; - var next = classTypes[n]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (prev !== SP) { - break; - } - } - } - return false; - }; - var previousNonSpaceClassType = function (currentIndex, classTypes) { - var i = currentIndex; - while (i >= 0) { - var type = classTypes[i]; - if (type === SP) { - i--; - } - else { - return type; - } - } - return 0; - }; - var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) { - if (indicies[index] === 0) { - return BREAK_NOT_ALLOWED$1; - } - var currentIndex = index - 1; - if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) { - return BREAK_NOT_ALLOWED$1; - } - var beforeIndex = currentIndex - 1; - var afterIndex = currentIndex + 1; - var current = classTypes[currentIndex]; - // LB4 Always break after hard line breaks. - // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks. - var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0; - var next = classTypes[afterIndex]; - if (current === CR$1 && next === LF$1) { - return BREAK_NOT_ALLOWED$1; - } - if (HARD_LINE_BREAKS.indexOf(current) !== -1) { - return BREAK_MANDATORY; - } - // LB6 Do not break before hard line breaks. - if (HARD_LINE_BREAKS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB7 Do not break before spaces or zero width space. - if (SPACE$1.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB8 Break before any character following a zero-width space, even if one or more spaces intervene. - if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) { - return BREAK_ALLOWED$1; - } - // LB8a Do not break after a zero width joiner. - if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // zwj emojis - if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // LB11 Do not break before or after Word joiner and related characters. - if (current === WJ || next === WJ) { - return BREAK_NOT_ALLOWED$1; - } - // LB12 Do not break after NBSP and related characters. - if (current === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB12a Do not break before NBSP and related characters, except after spaces and hyphens. - if ([SP, BA, HY].indexOf(current) === -1 && next === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces. - if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB14 Do not break after ‘[’, even after spaces. - if (previousNonSpaceClassType(currentIndex, classTypes) === OP) { - return BREAK_NOT_ALLOWED$1; - } - // LB15 Do not break within ‘”[’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces. 
- if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB17 Do not break within ‘——’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB18 Break after spaces. - if (current === SP) { - return BREAK_ALLOWED$1; - } - // LB19 Do not break before or after quotation marks, such as ‘ ” ’. - if (current === QU || next === QU) { - return BREAK_NOT_ALLOWED$1; - } - // LB20 Break before and after unresolved CB. - if (next === CB || current === CB) { - return BREAK_ALLOWED$1; - } - // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents. - if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) { - return BREAK_NOT_ALLOWED$1; - } - // LB21a Don't break after Hebrew + Hyphen. - if (before === HL && HYPHEN.indexOf(current) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB21b Don’t break between Solidus and Hebrew letters. - if (current === SY && next === HL) { - return BREAK_NOT_ALLOWED$1; - } - // LB22 Do not break before ellipsis. - if (next === IN) { - return BREAK_NOT_ALLOWED$1; - } - // LB23 Do not break between digits and letters. - if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) { - return BREAK_NOT_ALLOWED$1; - } - // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes. - if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) || - ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) { - return BREAK_NOT_ALLOWED$1; - } - // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix. - if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) || - (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // LB25 Do not break between the following pairs of classes relevant to numbers: - if ( - // (PR | PO) × ( OP | HY )? NU - ([PR, PO].indexOf(current) !== -1 && - (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) || - // ( OP | HY ) × NU - ([OP, HY].indexOf(current) !== -1 && next === NU) || - // NU × (NU | SY | IS) - (current === NU && [NU, SY, IS].indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP) - if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) { - var prevIndex = currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)) - if ([PR, PO].indexOf(next) !== -1) { - var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // LB26 Do not break a Korean syllable. - if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) || - ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) || - ([JT, H3].indexOf(current) !== -1 && next === JT)) { - return BREAK_NOT_ALLOWED$1; - } - // LB27 Treat a Korean Syllable Block the same as ID. 
- if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) || - (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) { - return BREAK_NOT_ALLOWED$1; - } - // LB28 Do not break between alphabetics (“at”). - if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”). - if (current === IS && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses. - if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 && - next === OP && - ea_OP.indexOf(codePoints[afterIndex]) === -1) || - (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) { - return BREAK_NOT_ALLOWED$1; - } - // LB30a Break between two regional indicator symbols if and only if there are an even number of regional - // indicators preceding the position of the break. - if (current === RI$1 && next === RI$1) { - var i = indicies[currentIndex]; - var count = 1; - while (i > 0) { - i--; - if (classTypes[i] === RI$1) { - count++; - } - else { - break; - } - } - if (count % 2 !== 0) { - return BREAK_NOT_ALLOWED$1; - } - } - // LB30b Do not break between an emoji base and an emoji modifier. - if (current === EB && next === EM) { - return BREAK_NOT_ALLOWED$1; - } - return BREAK_ALLOWED$1; - }; - var cssFormattedClasses = function (codePoints, options) { - if (!options) { - options = { lineBreak: 'normal', wordBreak: 'normal' }; - } - var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2]; - if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') { - classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); }); - } - var forbiddenBreakpoints = options.wordBreak === 'keep-all' - ? 
isLetterNumber.map(function (letterNumber, i) { - return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff; - }) - : undefined; - return [indicies, classTypes, forbiddenBreakpoints]; - }; - var Break = /** @class */ (function () { - function Break(codePoints, lineBreak, start, end) { - this.codePoints = codePoints; - this.required = lineBreak === BREAK_MANDATORY; - this.start = start; - this.end = end; - } - Break.prototype.slice = function () { - return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end)); - }; - return Break; - }()); - var LineBreaker = function (str, options) { - var codePoints = toCodePoints$1(str); - var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2]; - var length = codePoints.length; - var lastEnd = 0; - var nextIndex = 0; - return { - next: function () { - if (nextIndex >= length) { - return { done: true, value: null }; - } - var lineBreak = BREAK_NOT_ALLOWED$1; - while (nextIndex < length && - (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) === - BREAK_NOT_ALLOWED$1) { } - if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) { - var value = new Break(codePoints, lineBreak, lastEnd, nextIndex); - lastEnd = nextIndex; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - - // https://www.w3.org/TR/css-syntax-3 - var FLAG_UNRESTRICTED = 1 << 0; - var FLAG_ID = 1 << 1; - var FLAG_INTEGER = 1 << 2; - var FLAG_NUMBER = 1 << 3; - var LINE_FEED = 0x000a; - var SOLIDUS = 0x002f; - var REVERSE_SOLIDUS = 0x005c; - var CHARACTER_TABULATION = 0x0009; - var SPACE = 0x0020; - var QUOTATION_MARK = 0x0022; - var EQUALS_SIGN = 0x003d; - var NUMBER_SIGN = 0x0023; - var DOLLAR_SIGN = 0x0024; - var PERCENTAGE_SIGN = 0x0025; - var APOSTROPHE = 0x0027; - var LEFT_PARENTHESIS = 0x0028; - var RIGHT_PARENTHESIS = 0x0029; - var LOW_LINE = 0x005f; - var HYPHEN_MINUS = 0x002d; - var EXCLAMATION_MARK = 0x0021; - var LESS_THAN_SIGN = 0x003c; - var GREATER_THAN_SIGN = 0x003e; - var COMMERCIAL_AT = 0x0040; - var LEFT_SQUARE_BRACKET = 0x005b; - var RIGHT_SQUARE_BRACKET = 0x005d; - var CIRCUMFLEX_ACCENT = 0x003d; - var LEFT_CURLY_BRACKET = 0x007b; - var QUESTION_MARK = 0x003f; - var RIGHT_CURLY_BRACKET = 0x007d; - var VERTICAL_LINE = 0x007c; - var TILDE = 0x007e; - var CONTROL = 0x0080; - var REPLACEMENT_CHARACTER = 0xfffd; - var ASTERISK = 0x002a; - var PLUS_SIGN = 0x002b; - var COMMA = 0x002c; - var COLON = 0x003a; - var SEMICOLON = 0x003b; - var FULL_STOP = 0x002e; - var NULL = 0x0000; - var BACKSPACE = 0x0008; - var LINE_TABULATION = 0x000b; - var SHIFT_OUT = 0x000e; - var INFORMATION_SEPARATOR_ONE = 0x001f; - var DELETE = 0x007f; - var EOF = -1; - var ZERO = 0x0030; - var a = 0x0061; - var e = 0x0065; - var f = 0x0066; - var u = 0x0075; - var z = 0x007a; - var A = 0x0041; - var E = 0x0045; - var F = 0x0046; - var U = 0x0055; - var Z = 0x005a; - var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; }; - var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; }; - var isHex = function (codePoint) { - return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f); - }; - var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; }; - var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; }; - var isLetter = 
function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); }; - var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; }; - var isWhiteSpace = function (codePoint) { - return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE; - }; - var isNameStartCodePoint = function (codePoint) { - return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE; - }; - var isNameCodePoint = function (codePoint) { - return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS; - }; - var isNonPrintableCodePoint = function (codePoint) { - return ((codePoint >= NULL && codePoint <= BACKSPACE) || - codePoint === LINE_TABULATION || - (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) || - codePoint === DELETE); - }; - var isValidEscape = function (c1, c2) { - if (c1 !== REVERSE_SOLIDUS) { - return false; - } - return c2 !== LINE_FEED; - }; - var isIdentifierStart = function (c1, c2, c3) { - if (c1 === HYPHEN_MINUS) { - return isNameStartCodePoint(c2) || isValidEscape(c2, c3); - } - else if (isNameStartCodePoint(c1)) { - return true; - } - else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) { - return true; - } - return false; - }; - var isNumberStart = function (c1, c2, c3) { - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - if (isDigit(c2)) { - return true; - } - return c2 === FULL_STOP && isDigit(c3); - } - if (c1 === FULL_STOP) { - return isDigit(c2); - } - return isDigit(c1); - }; - var stringToNumber = function (codePoints) { - var c = 0; - var sign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - sign = -1; - } - c++; - } - var integers = []; - while (isDigit(codePoints[c])) { - integers.push(codePoints[c++]); - } - var int = integers.length ? parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0; - if (codePoints[c] === FULL_STOP) { - c++; - } - var fraction = []; - while (isDigit(codePoints[c])) { - fraction.push(codePoints[c++]); - } - var fracd = fraction.length; - var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0; - if (codePoints[c] === E || codePoints[c] === e) { - c++; - } - var expsign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - expsign = -1; - } - c++; - } - var exponent = []; - while (isDigit(codePoints[c])) { - exponent.push(codePoints[c++]); - } - var exp = exponent.length ? 
parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0; - return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp); - }; - var LEFT_PARENTHESIS_TOKEN = { - type: 2 /* LEFT_PARENTHESIS_TOKEN */ - }; - var RIGHT_PARENTHESIS_TOKEN = { - type: 3 /* RIGHT_PARENTHESIS_TOKEN */ - }; - var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ }; - var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ }; - var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ }; - var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ }; - var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ }; - var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ }; - var LEFT_CURLY_BRACKET_TOKEN = { - type: 11 /* LEFT_CURLY_BRACKET_TOKEN */ - }; - var RIGHT_CURLY_BRACKET_TOKEN = { - type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */ - }; - var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ }; - var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ }; - var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ }; - var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ }; - var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ }; - var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ }; - var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ }; - var LEFT_SQUARE_BRACKET_TOKEN = { - type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */ - }; - var RIGHT_SQUARE_BRACKET_TOKEN = { - type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */ - }; - var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ }; - var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ }; - var Tokenizer = /** @class */ (function () { - function Tokenizer() { - this._value = []; - } - Tokenizer.prototype.write = function (chunk) { - this._value = this._value.concat(toCodePoints$1(chunk)); - }; - Tokenizer.prototype.read = function () { - var tokens = []; - var token = this.consumeToken(); - while (token !== EOF_TOKEN) { - tokens.push(token); - token = this.consumeToken(); - } - return tokens; - }; - Tokenizer.prototype.consumeToken = function () { - var codePoint = this.consumeCodePoint(); - switch (codePoint) { - case QUOTATION_MARK: - return this.consumeStringToken(QUOTATION_MARK); - case NUMBER_SIGN: - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isNameCodePoint(c1) || isValidEscape(c2, c3)) { - var flags = isIdentifierStart(c1, c2, c3) ? 
FLAG_ID : FLAG_UNRESTRICTED; - var value = this.consumeName(); - return { type: 5 /* HASH_TOKEN */, value: value, flags: flags }; - } - break; - case DOLLAR_SIGN: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUFFIX_MATCH_TOKEN; - } - break; - case APOSTROPHE: - return this.consumeStringToken(APOSTROPHE); - case LEFT_PARENTHESIS: - return LEFT_PARENTHESIS_TOKEN; - case RIGHT_PARENTHESIS: - return RIGHT_PARENTHESIS_TOKEN; - case ASTERISK: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUBSTRING_MATCH_TOKEN; - } - break; - case PLUS_SIGN: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case COMMA: - return COMMA_TOKEN; - case HYPHEN_MINUS: - var e1 = codePoint; - var e2 = this.peekCodePoint(0); - var e3 = this.peekCodePoint(1); - if (isNumberStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isIdentifierStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDC_TOKEN; - } - break; - case FULL_STOP: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case SOLIDUS: - if (this.peekCodePoint(0) === ASTERISK) { - this.consumeCodePoint(); - while (true) { - var c = this.consumeCodePoint(); - if (c === ASTERISK) { - c = this.consumeCodePoint(); - if (c === SOLIDUS) { - return this.consumeToken(); - } - } - if (c === EOF) { - return this.consumeToken(); - } - } - } - break; - case COLON: - return COLON_TOKEN; - case SEMICOLON: - return SEMICOLON_TOKEN; - case LESS_THAN_SIGN: - if (this.peekCodePoint(0) === EXCLAMATION_MARK && - this.peekCodePoint(1) === HYPHEN_MINUS && - this.peekCodePoint(2) === HYPHEN_MINUS) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDO_TOKEN; - } - break; - case COMMERCIAL_AT: - var a1 = this.peekCodePoint(0); - var a2 = this.peekCodePoint(1); - var a3 = this.peekCodePoint(2); - if (isIdentifierStart(a1, a2, a3)) { - var value = this.consumeName(); - return { type: 7 /* AT_KEYWORD_TOKEN */, value: value }; - } - break; - case LEFT_SQUARE_BRACKET: - return LEFT_SQUARE_BRACKET_TOKEN; - case REVERSE_SOLIDUS: - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - break; - case RIGHT_SQUARE_BRACKET: - return RIGHT_SQUARE_BRACKET_TOKEN; - case CIRCUMFLEX_ACCENT: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return PREFIX_MATCH_TOKEN; - } - break; - case LEFT_CURLY_BRACKET: - return LEFT_CURLY_BRACKET_TOKEN; - case RIGHT_CURLY_BRACKET: - return RIGHT_CURLY_BRACKET_TOKEN; - case u: - case U: - var u1 = this.peekCodePoint(0); - var u2 = this.peekCodePoint(1); - if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) { - this.consumeCodePoint(); - this.consumeUnicodeRangeToken(); - } - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - case VERTICAL_LINE: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return DASH_MATCH_TOKEN; - } - if (this.peekCodePoint(0) === VERTICAL_LINE) { - this.consumeCodePoint(); - return COLUMN_TOKEN; - } - break; - case TILDE: - if 
(this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return INCLUDE_MATCH_TOKEN; - } - break; - case EOF: - return EOF_TOKEN; - } - if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - return WHITESPACE_TOKEN; - } - if (isDigit(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isNameStartCodePoint(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) }; - }; - Tokenizer.prototype.consumeCodePoint = function () { - var value = this._value.shift(); - return typeof value === 'undefined' ? -1 : value; - }; - Tokenizer.prototype.reconsumeCodePoint = function (codePoint) { - this._value.unshift(codePoint); - }; - Tokenizer.prototype.peekCodePoint = function (delta) { - if (delta >= this._value.length) { - return -1; - } - return this._value[delta]; - }; - Tokenizer.prototype.consumeUnicodeRangeToken = function () { - var digits = []; - var codePoint = this.consumeCodePoint(); - while (isHex(codePoint) && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var questionMarks = false; - while (codePoint === QUESTION_MARK && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - questionMarks = true; - } - if (questionMarks) { - var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16); - var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end }; - } - var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16); - if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) { - this.consumeCodePoint(); - codePoint = this.consumeCodePoint(); - var endDigits = []; - while (isHex(codePoint) && endDigits.length < 6) { - endDigits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end }; - } - else { - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start }; - } - }; - Tokenizer.prototype.consumeIdentLikeToken = function () { - var value = this.consumeName(); - if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return this.consumeUrlToken(); - } - else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 19 /* FUNCTION_TOKEN */, value: value }; - } - return { type: 20 /* IDENT_TOKEN */, value: value }; - }; - Tokenizer.prototype.consumeUrlToken = function () { - var value = []; - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF) { - return { type: 22 /* URL_TOKEN */, value: '' }; - } - var next = this.peekCodePoint(0); - if (next === APOSTROPHE || next === QUOTATION_MARK) { - var stringToken = this.consumeStringToken(this.consumeCodePoint()); - if (stringToken.type === 0 /* STRING_TOKEN */) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: stringToken.value }; - } - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - while (true) { - var codePoint = this.consumeCodePoint(); - if 
(codePoint === EOF || codePoint === RIGHT_PARENTHESIS) { - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - else if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === QUOTATION_MARK || - codePoint === APOSTROPHE || - codePoint === LEFT_PARENTHESIS || - isNonPrintableCodePoint(codePoint)) { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === REVERSE_SOLIDUS) { - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - value.push(this.consumeEscapedCodePoint()); - } - else { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - } - else { - value.push(codePoint); - } - } - }; - Tokenizer.prototype.consumeWhiteSpace = function () { - while (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - }; - Tokenizer.prototype.consumeBadUrlRemnants = function () { - while (true) { - var codePoint = this.consumeCodePoint(); - if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) { - return; - } - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.consumeEscapedCodePoint(); - } - } - }; - Tokenizer.prototype.consumeStringSlice = function (count) { - var SLICE_STACK_SIZE = 50000; - var value = ''; - while (count > 0) { - var amount = Math.min(SLICE_STACK_SIZE, count); - value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount)); - count -= amount; - } - this._value.shift(); - return value; - }; - Tokenizer.prototype.consumeStringToken = function (endingCodePoint) { - var value = ''; - var i = 0; - do { - var codePoint = this._value[i]; - if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) { - value += this.consumeStringSlice(i); - return { type: 0 /* STRING_TOKEN */, value: value }; - } - if (codePoint === LINE_FEED) { - this._value.splice(0, i); - return BAD_STRING_TOKEN; - } - if (codePoint === REVERSE_SOLIDUS) { - var next = this._value[i + 1]; - if (next !== EOF && next !== undefined) { - if (next === LINE_FEED) { - value += this.consumeStringSlice(i); - i = -1; - this._value.shift(); - } - else if (isValidEscape(codePoint, next)) { - value += this.consumeStringSlice(i); - value += fromCodePoint$1(this.consumeEscapedCodePoint()); - i = -1; - } - } - } - i++; - } while (true); - }; - Tokenizer.prototype.consumeNumber = function () { - var repr = []; - var type = FLAG_INTEGER; - var c1 = this.peekCodePoint(0); - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - repr.push(this.consumeCodePoint()); - } - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - if (c1 === FULL_STOP && isDigit(c2)) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - c1 = this.peekCodePoint(0); - c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - return [stringToNumber(repr), 
type]; - }; - Tokenizer.prototype.consumeNumericToken = function () { - var _a = this.consumeNumber(), number = _a[0], flags = _a[1]; - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isIdentifierStart(c1, c2, c3)) { - var unit = this.consumeName(); - return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit }; - } - if (c1 === PERCENTAGE_SIGN) { - this.consumeCodePoint(); - return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags }; - } - return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags }; - }; - Tokenizer.prototype.consumeEscapedCodePoint = function () { - var codePoint = this.consumeCodePoint(); - if (isHex(codePoint)) { - var hex = fromCodePoint$1(codePoint); - while (isHex(this.peekCodePoint(0)) && hex.length < 6) { - hex += fromCodePoint$1(this.consumeCodePoint()); - } - if (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - var hexCodePoint = parseInt(hex, 16); - if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) { - return REPLACEMENT_CHARACTER; - } - return hexCodePoint; - } - if (codePoint === EOF) { - return REPLACEMENT_CHARACTER; - } - return codePoint; - }; - Tokenizer.prototype.consumeName = function () { - var result = ''; - while (true) { - var codePoint = this.consumeCodePoint(); - if (isNameCodePoint(codePoint)) { - result += fromCodePoint$1(codePoint); - } - else if (isValidEscape(codePoint, this.peekCodePoint(0))) { - result += fromCodePoint$1(this.consumeEscapedCodePoint()); - } - else { - this.reconsumeCodePoint(codePoint); - return result; - } - } - }; - return Tokenizer; - }()); - - var Parser = /** @class */ (function () { - function Parser(tokens) { - this._tokens = tokens; - } - Parser.create = function (value) { - var tokenizer = new Tokenizer(); - tokenizer.write(value); - return new Parser(tokenizer.read()); - }; - Parser.parseValue = function (value) { - return Parser.create(value).parseComponentValue(); - }; - Parser.parseValues = function (value) { - return Parser.create(value).parseComponentValues(); - }; - Parser.prototype.parseComponentValue = function () { - var token = this.consumeToken(); - while (token.type === 31 /* WHITESPACE_TOKEN */) { - token = this.consumeToken(); - } - if (token.type === 32 /* EOF_TOKEN */) { - throw new SyntaxError("Error parsing CSS component value, unexpected EOF"); - } - this.reconsumeToken(token); - var value = this.consumeComponentValue(); - do { - token = this.consumeToken(); - } while (token.type === 31 /* WHITESPACE_TOKEN */); - if (token.type === 32 /* EOF_TOKEN */) { - return value; - } - throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one"); - }; - Parser.prototype.parseComponentValues = function () { - var values = []; - while (true) { - var value = this.consumeComponentValue(); - if (value.type === 32 /* EOF_TOKEN */) { - return values; - } - values.push(value); - values.push(); - } - }; - Parser.prototype.consumeComponentValue = function () { - var token = this.consumeToken(); - switch (token.type) { - case 11 /* LEFT_CURLY_BRACKET_TOKEN */: - case 28 /* LEFT_SQUARE_BRACKET_TOKEN */: - case 2 /* LEFT_PARENTHESIS_TOKEN */: - return this.consumeSimpleBlock(token.type); - case 19 /* FUNCTION_TOKEN */: - return this.consumeFunction(token); - } - return token; - }; - Parser.prototype.consumeSimpleBlock = function (type) { - var block = { type: type, values: [] }; - var token = 
this.consumeToken(); - while (true) { - if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) { - return block; - } - this.reconsumeToken(token); - block.values.push(this.consumeComponentValue()); - token = this.consumeToken(); - } - }; - Parser.prototype.consumeFunction = function (functionToken) { - var cssFunction = { - name: functionToken.value, - values: [], - type: 18 /* FUNCTION */ - }; - while (true) { - var token = this.consumeToken(); - if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) { - return cssFunction; - } - this.reconsumeToken(token); - cssFunction.values.push(this.consumeComponentValue()); - } - }; - Parser.prototype.consumeToken = function () { - var token = this._tokens.shift(); - return typeof token === 'undefined' ? EOF_TOKEN : token; - }; - Parser.prototype.reconsumeToken = function (token) { - this._tokens.unshift(token); - }; - return Parser; - }()); - var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; }; - var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; }; - var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; }; - var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; }; - var isIdentWithValue = function (token, value) { - return isIdentToken(token) && token.value === value; - }; - var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; }; - var nonFunctionArgSeparator = function (token) { - return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */; - }; - var parseFunctionArgs = function (tokens) { - var args = []; - var arg = []; - tokens.forEach(function (token) { - if (token.type === 4 /* COMMA_TOKEN */) { - if (arg.length === 0) { - throw new Error("Error parsing function args, zero tokens for arg"); - } - args.push(arg); - arg = []; - return; - } - if (token.type !== 31 /* WHITESPACE_TOKEN */) { - arg.push(token); - } - }); - if (arg.length) { - args.push(arg); - } - return args; - }; - var isEndingTokenFor = function (token, type) { - if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) { - return true; - } - if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) { - return true; - } - return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */; - }; - - var isLength = function (token) { - return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */; - }; - - var isLengthPercentage = function (token) { - return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token); - }; - var parseLengthPercentageTuple = function (tokens) { - return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]]; - }; - var ZERO_LENGTH = { - type: 17 /* NUMBER_TOKEN */, - number: 0, - flags: FLAG_INTEGER - }; - var FIFTY_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var HUNDRED_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 100, - flags: FLAG_INTEGER - }; - var getAbsoluteValueForTuple = function (tuple, width, height) { - var x = tuple[0], y = tuple[1]; - return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? 
y : x, height)]; - }; - var getAbsoluteValue = function (token, parent) { - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - return (token.number / 100) * parent; - } - if (isDimensionToken(token)) { - switch (token.unit) { - case 'rem': - case 'em': - return 16 * token.number; // TODO use correct font-size - case 'px': - default: - return token.number; - } - } - return token.number; - }; - - var DEG = 'deg'; - var GRAD = 'grad'; - var RAD = 'rad'; - var TURN = 'turn'; - var angle = { - name: 'angle', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit) { - case DEG: - return (Math.PI * value.number) / 180; - case GRAD: - return (Math.PI / 200) * value.number; - case RAD: - return value.number; - case TURN: - return Math.PI * 2 * value.number; - } - } - throw new Error("Unsupported angle type"); - } - }; - var isAngle = function (value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) { - return true; - } - } - return false; - }; - var parseNamedSide = function (tokens) { - var sideOrCorner = tokens - .filter(isIdentToken) - .map(function (ident) { return ident.value; }) - .join(' '); - switch (sideOrCorner) { - case 'to bottom right': - case 'to right bottom': - case 'left top': - case 'top left': - return [ZERO_LENGTH, ZERO_LENGTH]; - case 'to top': - case 'bottom': - return deg(0); - case 'to bottom left': - case 'to left bottom': - case 'right top': - case 'top right': - return [ZERO_LENGTH, HUNDRED_PERCENT]; - case 'to right': - case 'left': - return deg(90); - case 'to top left': - case 'to left top': - case 'right bottom': - case 'bottom right': - return [HUNDRED_PERCENT, HUNDRED_PERCENT]; - case 'to bottom': - case 'top': - return deg(180); - case 'to top right': - case 'to right top': - case 'left bottom': - case 'bottom left': - return [HUNDRED_PERCENT, ZERO_LENGTH]; - case 'to left': - case 'right': - return deg(270); - } - return 0; - }; - var deg = function (deg) { return (Math.PI * deg) / 180; }; - - var color$1 = { - name: 'color', - parse: function (context, value) { - if (value.type === 18 /* FUNCTION */) { - var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name]; - if (typeof colorFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\""); - } - return colorFunction(context, value.values); - } - if (value.type === 5 /* HASH_TOKEN */) { - if (value.value.length === 3) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1); - } - if (value.value.length === 4) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - var a = value.value.substring(3, 4); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255); - } - if (value.value.length === 6) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1); - } - if (value.value.length === 8) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - var a = value.value.substring(6, 8); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255); - } 
- } - if (value.type === 20 /* IDENT_TOKEN */) { - var namedColor = COLORS[value.value.toUpperCase()]; - if (typeof namedColor !== 'undefined') { - return namedColor; - } - } - return COLORS.TRANSPARENT; - } - }; - var isTransparent = function (color) { return (0xff & color) === 0; }; - var asString = function (color) { - var alpha = 0xff & color; - var blue = 0xff & (color >> 8); - var green = 0xff & (color >> 16); - var red = 0xff & (color >> 24); - return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")"; - }; - var pack = function (r, g, b, a) { - return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0; - }; - var getTokenColorValue = function (token, i) { - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - var max = i === 3 ? 1 : 255; - return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max); - } - return 0; - }; - var rgb = function (_context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - if (tokens.length === 3) { - var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2]; - return pack(r, g, b, 1); - } - if (tokens.length === 4) { - var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3]; - return pack(r, g, b, a); - } - return 0; - }; - function hue2rgb(t1, t2, hue) { - if (hue < 0) { - hue += 1; - } - if (hue >= 1) { - hue -= 1; - } - if (hue < 1 / 6) { - return (t2 - t1) * hue * 6 + t1; - } - else if (hue < 1 / 2) { - return t2; - } - else if (hue < 2 / 3) { - return (t2 - t1) * 6 * (2 / 3 - hue) + t1; - } - else { - return t1; - } - } - var hsl = function (context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3]; - var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2); - var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0; - var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0; - var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1; - if (s === 0) { - return pack(l * 255, l * 255, l * 255, 1); - } - var t2 = l <= 0.5 ? 
l * (s + 1) : l + s - l * s; - var t1 = l * 2 - t2; - var r = hue2rgb(t1, t2, h + 1 / 3); - var g = hue2rgb(t1, t2, h); - var b = hue2rgb(t1, t2, h - 1 / 3); - return pack(r * 255, g * 255, b * 255, a); - }; - var SUPPORTED_COLOR_FUNCTIONS = { - hsl: hsl, - hsla: hsl, - rgb: rgb, - rgba: rgb - }; - var parseColor = function (context, value) { - return color$1.parse(context, Parser.create(value).parseComponentValue()); - }; - var COLORS = { - ALICEBLUE: 0xf0f8ffff, - ANTIQUEWHITE: 0xfaebd7ff, - AQUA: 0x00ffffff, - AQUAMARINE: 0x7fffd4ff, - AZURE: 0xf0ffffff, - BEIGE: 0xf5f5dcff, - BISQUE: 0xffe4c4ff, - BLACK: 0x000000ff, - BLANCHEDALMOND: 0xffebcdff, - BLUE: 0x0000ffff, - BLUEVIOLET: 0x8a2be2ff, - BROWN: 0xa52a2aff, - BURLYWOOD: 0xdeb887ff, - CADETBLUE: 0x5f9ea0ff, - CHARTREUSE: 0x7fff00ff, - CHOCOLATE: 0xd2691eff, - CORAL: 0xff7f50ff, - CORNFLOWERBLUE: 0x6495edff, - CORNSILK: 0xfff8dcff, - CRIMSON: 0xdc143cff, - CYAN: 0x00ffffff, - DARKBLUE: 0x00008bff, - DARKCYAN: 0x008b8bff, - DARKGOLDENROD: 0xb886bbff, - DARKGRAY: 0xa9a9a9ff, - DARKGREEN: 0x006400ff, - DARKGREY: 0xa9a9a9ff, - DARKKHAKI: 0xbdb76bff, - DARKMAGENTA: 0x8b008bff, - DARKOLIVEGREEN: 0x556b2fff, - DARKORANGE: 0xff8c00ff, - DARKORCHID: 0x9932ccff, - DARKRED: 0x8b0000ff, - DARKSALMON: 0xe9967aff, - DARKSEAGREEN: 0x8fbc8fff, - DARKSLATEBLUE: 0x483d8bff, - DARKSLATEGRAY: 0x2f4f4fff, - DARKSLATEGREY: 0x2f4f4fff, - DARKTURQUOISE: 0x00ced1ff, - DARKVIOLET: 0x9400d3ff, - DEEPPINK: 0xff1493ff, - DEEPSKYBLUE: 0x00bfffff, - DIMGRAY: 0x696969ff, - DIMGREY: 0x696969ff, - DODGERBLUE: 0x1e90ffff, - FIREBRICK: 0xb22222ff, - FLORALWHITE: 0xfffaf0ff, - FORESTGREEN: 0x228b22ff, - FUCHSIA: 0xff00ffff, - GAINSBORO: 0xdcdcdcff, - GHOSTWHITE: 0xf8f8ffff, - GOLD: 0xffd700ff, - GOLDENROD: 0xdaa520ff, - GRAY: 0x808080ff, - GREEN: 0x008000ff, - GREENYELLOW: 0xadff2fff, - GREY: 0x808080ff, - HONEYDEW: 0xf0fff0ff, - HOTPINK: 0xff69b4ff, - INDIANRED: 0xcd5c5cff, - INDIGO: 0x4b0082ff, - IVORY: 0xfffff0ff, - KHAKI: 0xf0e68cff, - LAVENDER: 0xe6e6faff, - LAVENDERBLUSH: 0xfff0f5ff, - LAWNGREEN: 0x7cfc00ff, - LEMONCHIFFON: 0xfffacdff, - LIGHTBLUE: 0xadd8e6ff, - LIGHTCORAL: 0xf08080ff, - LIGHTCYAN: 0xe0ffffff, - LIGHTGOLDENRODYELLOW: 0xfafad2ff, - LIGHTGRAY: 0xd3d3d3ff, - LIGHTGREEN: 0x90ee90ff, - LIGHTGREY: 0xd3d3d3ff, - LIGHTPINK: 0xffb6c1ff, - LIGHTSALMON: 0xffa07aff, - LIGHTSEAGREEN: 0x20b2aaff, - LIGHTSKYBLUE: 0x87cefaff, - LIGHTSLATEGRAY: 0x778899ff, - LIGHTSLATEGREY: 0x778899ff, - LIGHTSTEELBLUE: 0xb0c4deff, - LIGHTYELLOW: 0xffffe0ff, - LIME: 0x00ff00ff, - LIMEGREEN: 0x32cd32ff, - LINEN: 0xfaf0e6ff, - MAGENTA: 0xff00ffff, - MAROON: 0x800000ff, - MEDIUMAQUAMARINE: 0x66cdaaff, - MEDIUMBLUE: 0x0000cdff, - MEDIUMORCHID: 0xba55d3ff, - MEDIUMPURPLE: 0x9370dbff, - MEDIUMSEAGREEN: 0x3cb371ff, - MEDIUMSLATEBLUE: 0x7b68eeff, - MEDIUMSPRINGGREEN: 0x00fa9aff, - MEDIUMTURQUOISE: 0x48d1ccff, - MEDIUMVIOLETRED: 0xc71585ff, - MIDNIGHTBLUE: 0x191970ff, - MINTCREAM: 0xf5fffaff, - MISTYROSE: 0xffe4e1ff, - MOCCASIN: 0xffe4b5ff, - NAVAJOWHITE: 0xffdeadff, - NAVY: 0x000080ff, - OLDLACE: 0xfdf5e6ff, - OLIVE: 0x808000ff, - OLIVEDRAB: 0x6b8e23ff, - ORANGE: 0xffa500ff, - ORANGERED: 0xff4500ff, - ORCHID: 0xda70d6ff, - PALEGOLDENROD: 0xeee8aaff, - PALEGREEN: 0x98fb98ff, - PALETURQUOISE: 0xafeeeeff, - PALEVIOLETRED: 0xdb7093ff, - PAPAYAWHIP: 0xffefd5ff, - PEACHPUFF: 0xffdab9ff, - PERU: 0xcd853fff, - PINK: 0xffc0cbff, - PLUM: 0xdda0ddff, - POWDERBLUE: 0xb0e0e6ff, - PURPLE: 0x800080ff, - REBECCAPURPLE: 0x663399ff, - RED: 0xff0000ff, - ROSYBROWN: 0xbc8f8fff, - ROYALBLUE: 0x4169e1ff, - 
SADDLEBROWN: 0x8b4513ff, - SALMON: 0xfa8072ff, - SANDYBROWN: 0xf4a460ff, - SEAGREEN: 0x2e8b57ff, - SEASHELL: 0xfff5eeff, - SIENNA: 0xa0522dff, - SILVER: 0xc0c0c0ff, - SKYBLUE: 0x87ceebff, - SLATEBLUE: 0x6a5acdff, - SLATEGRAY: 0x708090ff, - SLATEGREY: 0x708090ff, - SNOW: 0xfffafaff, - SPRINGGREEN: 0x00ff7fff, - STEELBLUE: 0x4682b4ff, - TAN: 0xd2b48cff, - TEAL: 0x008080ff, - THISTLE: 0xd8bfd8ff, - TOMATO: 0xff6347ff, - TRANSPARENT: 0x00000000, - TURQUOISE: 0x40e0d0ff, - VIOLET: 0xee82eeff, - WHEAT: 0xf5deb3ff, - WHITE: 0xffffffff, - WHITESMOKE: 0xf5f5f5ff, - YELLOW: 0xffff00ff, - YELLOWGREEN: 0x9acd32ff - }; - - var backgroundClip = { - name: 'background-clip', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundColor = { - name: "background-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var parseColorStop = function (context, args) { - var color = color$1.parse(context, args[0]); - var stop = args[1]; - return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null }; - }; - var processColorStops = function (stops, lineLength) { - var first = stops[0]; - var last = stops[stops.length - 1]; - if (first.stop === null) { - first.stop = ZERO_LENGTH; - } - if (last.stop === null) { - last.stop = HUNDRED_PERCENT; - } - var processStops = []; - var previous = 0; - for (var i = 0; i < stops.length; i++) { - var stop_1 = stops[i].stop; - if (stop_1 !== null) { - var absoluteValue = getAbsoluteValue(stop_1, lineLength); - if (absoluteValue > previous) { - processStops.push(absoluteValue); - } - else { - processStops.push(previous); - } - previous = absoluteValue; - } - else { - processStops.push(null); - } - } - var gapBegin = null; - for (var i = 0; i < processStops.length; i++) { - var stop_2 = processStops[i]; - if (stop_2 === null) { - if (gapBegin === null) { - gapBegin = i; - } - } - else if (gapBegin !== null) { - var gapLength = i - gapBegin; - var beforeGap = processStops[gapBegin - 1]; - var gapValue = (stop_2 - beforeGap) / (gapLength + 1); - for (var g = 1; g <= gapLength; g++) { - processStops[gapBegin + g - 1] = gapValue * g; - } - gapBegin = null; - } - } - return stops.map(function (_a, i) { - var color = _a.color; - return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) }; - }); - }; - var getAngleFromCorner = function (corner, width, height) { - var centerX = width / 2; - var centerY = height / 2; - var x = getAbsoluteValue(corner[0], width) - centerX; - var y = centerY - getAbsoluteValue(corner[1], height); - return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2); - }; - var calculateGradientDirection = function (angle, width, height) { - var radian = typeof angle === 'number' ? 
angle : getAngleFromCorner(angle, width, height); - var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian)); - var halfWidth = width / 2; - var halfHeight = height / 2; - var halfLineLength = lineLength / 2; - var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength; - var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength; - return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff]; - }; - var distance = function (a, b) { return Math.sqrt(a * a + b * b); }; - var findCorner = function (width, height, x, y, closest) { - var corners = [ - [0, 0], - [0, height], - [width, 0], - [width, height] - ]; - return corners.reduce(function (stat, corner) { - var cx = corner[0], cy = corner[1]; - var d = distance(x - cx, y - cy); - if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) { - return { - optimumCorner: corner, - optimumDistance: d - }; - } - return stat; - }, { - optimumDistance: closest ? Infinity : -Infinity, - optimumCorner: null - }).optimumCorner; - }; - var calculateRadius = function (gradient, x, y, width, height) { - var rx = 0; - var ry = 0; - switch (gradient.size) { - case 0 /* CLOSEST_SIDE */: - // The ending shape is sized so that that it exactly meets the side of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, it exactly meets the closest side in each dimension. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.min(Math.abs(x), Math.abs(x - width)); - ry = Math.min(Math.abs(y), Math.abs(y - height)); - } - break; - case 2 /* CLOSEST_CORNER */: - // The ending shape is sized so that that it passes through the corner of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "closest-side") - var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width)); - var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - case 1 /* FARTHEST_SIDE */: - // Same as closest-side, except the ending shape is sized based on the farthest side(s) - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.max(Math.abs(x), Math.abs(x - width)); - ry = Math.max(Math.abs(y), Math.abs(y - height)); - } - break; - case 3 /* FARTHEST_CORNER */: - // Same as closest-corner, except the ending shape is sized based on the farthest corner. - // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified. 
- if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "farthest-side") - var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width)); - var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - } - if (Array.isArray(gradient.size)) { - rx = getAbsoluteValue(gradient.size[0], width); - ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx; - } - return [rx, ry]; - }; - - var linearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = angle.parse(context, firstToken); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ }; - }; - - var prefixLinearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && - ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { - angle: angle$1, - stops: stops, - type: 1 /* LINEAR_GRADIENT */ - }; - }; - - var webkitGradient = function (context, tokens) { - var angle = deg(180); - var stops = []; - var type = 1 /* LINEAR_GRADIENT */; - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var firstToken = arg[0]; - if (i === 0) { - if (isIdentToken(firstToken) && firstToken.value === 'linear') { - type = 1 /* LINEAR_GRADIENT */; - return; - } - else if (isIdentToken(firstToken) && firstToken.value === 'radial') { - type = 2 /* RADIAL_GRADIENT */; - return; - } - } - if (firstToken.type === 18 /* FUNCTION */) { - if (firstToken.name === 'from') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: ZERO_LENGTH, color: color }); - } - else if (firstToken.name === 'to') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: HUNDRED_PERCENT, color: color }); - } - else if (firstToken.name === 'color-stop') { - var values = firstToken.values.filter(nonFunctionArgSeparator); - if (values.length === 2) { - var color = color$1.parse(context, values[1]); - var stop_1 = values[0]; - if (isNumberToken(stop_1)) { - stops.push({ - stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags }, - color: color - }); - } - } - } - } - }); - return type === 1 /* LINEAR_GRADIENT */ - ? 
{ - angle: (angle + deg(180)) % deg(360), - stops: stops, - type: type - } - : { size: size, shape: shape, stops: stops, position: position, type: type }; - }; - - var CLOSEST_SIDE = 'closest-side'; - var FARTHEST_SIDE = 'farthest-side'; - var CLOSEST_CORNER = 'closest-corner'; - var FARTHEST_CORNER = 'farthest-corner'; - var CIRCLE = 'circle'; - var ELLIPSE = 'ellipse'; - var COVER = 'cover'; - var CONTAIN = 'contain'; - var radialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - var isAtPosition_1 = false; - isColorStop = arg.reduce(function (acc, token) { - if (isAtPosition_1) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return acc; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return acc; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return acc; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - } - } - else if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case 'at': - isAtPosition_1 = true; - return false; - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case COVER: - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CONTAIN: - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - } - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var prefixRadialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return false; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return false; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return false; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - return false; - } - return acc; - }, isColorStop); - } - else if (i === 1) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case CONTAIN: - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case COVER: - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - 
} - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var isLinearGradient = function (background) { - return background.type === 1 /* LINEAR_GRADIENT */; - }; - var isRadialGradient = function (background) { - return background.type === 2 /* RADIAL_GRADIENT */; - }; - var image = { - name: 'image', - parse: function (context, value) { - if (value.type === 22 /* URL_TOKEN */) { - var image_1 = { url: value.value, type: 0 /* URL */ }; - context.cache.addImage(value.value); - return image_1; - } - if (value.type === 18 /* FUNCTION */) { - var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name]; - if (typeof imageFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\""); - } - return imageFunction(context, value.values); - } - throw new Error("Unsupported image type " + value.type); - } - }; - function isSupportedImage(value) { - return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') && - (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name])); - } - var SUPPORTED_IMAGE_FUNCTIONS = { - 'linear-gradient': linearGradient, - '-moz-linear-gradient': prefixLinearGradient, - '-ms-linear-gradient': prefixLinearGradient, - '-o-linear-gradient': prefixLinearGradient, - '-webkit-linear-gradient': prefixLinearGradient, - 'radial-gradient': radialGradient, - '-moz-radial-gradient': prefixRadialGradient, - '-ms-radial-gradient': prefixRadialGradient, - '-o-radial-gradient': prefixRadialGradient, - '-webkit-radial-gradient': prefixRadialGradient, - '-webkit-gradient': webkitGradient - }; - - var backgroundImage = { - name: 'background-image', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens - .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); }) - .map(function (value) { return image.parse(context, value); }); - } - }; - - var backgroundOrigin = { - name: 'background-origin', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundPosition = { - name: 'background-position', - initialValue: '0% 0%', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { return values.filter(isLengthPercentage); }) - .map(parseLengthPercentageTuple); - } - }; - - var backgroundRepeat = { - name: 'background-repeat', - initialValue: 'repeat', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { - return values - .filter(isIdentToken) - .map(function (token) { return token.value; }) - .join(' '); - }) - .map(parseBackgroundRepeat); - } - }; - var parseBackgroundRepeat = function (value) { - switch (value) { - case 'no-repeat': - return 1 /* NO_REPEAT */; - case 'repeat-x': - 
case 'repeat no-repeat': - return 2 /* REPEAT_X */; - case 'repeat-y': - case 'no-repeat repeat': - return 3 /* REPEAT_Y */; - case 'repeat': - default: - return 0 /* REPEAT */; - } - }; - - var BACKGROUND_SIZE; - (function (BACKGROUND_SIZE) { - BACKGROUND_SIZE["AUTO"] = "auto"; - BACKGROUND_SIZE["CONTAIN"] = "contain"; - BACKGROUND_SIZE["COVER"] = "cover"; - })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {})); - var backgroundSize = { - name: 'background-size', - initialValue: '0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); }); - } - }; - var isBackgroundSizeInfoToken = function (value) { - return isIdentToken(value) || isLengthPercentage(value); - }; - - var borderColorForSide = function (side) { return ({ - name: "border-" + side + "-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }); }; - var borderTopColor = borderColorForSide('top'); - var borderRightColor = borderColorForSide('right'); - var borderBottomColor = borderColorForSide('bottom'); - var borderLeftColor = borderColorForSide('left'); - - var borderRadiusForSide = function (side) { return ({ - name: "border-radius-" + side, - initialValue: '0 0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseLengthPercentageTuple(tokens.filter(isLengthPercentage)); - } - }); }; - var borderTopLeftRadius = borderRadiusForSide('top-left'); - var borderTopRightRadius = borderRadiusForSide('top-right'); - var borderBottomRightRadius = borderRadiusForSide('bottom-right'); - var borderBottomLeftRadius = borderRadiusForSide('bottom-left'); - - var borderStyleForSide = function (side) { return ({ - name: "border-" + side + "-style", - initialValue: 'solid', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, style) { - switch (style) { - case 'none': - return 0 /* NONE */; - case 'dashed': - return 2 /* DASHED */; - case 'dotted': - return 3 /* DOTTED */; - case 'double': - return 4 /* DOUBLE */; - } - return 1 /* SOLID */; - } - }); }; - var borderTopStyle = borderStyleForSide('top'); - var borderRightStyle = borderStyleForSide('right'); - var borderBottomStyle = borderStyleForSide('bottom'); - var borderLeftStyle = borderStyleForSide('left'); - - var borderWidthForSide = function (side) { return ({ - name: "border-" + side + "-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }); }; - var borderTopWidth = borderWidthForSide('top'); - var borderRightWidth = borderWidthForSide('right'); - var borderBottomWidth = borderWidthForSide('bottom'); - var borderLeftWidth = borderWidthForSide('left'); - - var color = { - name: "color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var direction = { - name: 'direction', - initialValue: 'ltr', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, direction) { - switch (direction) { - case 'rtl': - return 1 /* RTL */; - case 'ltr': - default: - return 0 /* LTR */; - } - } - }; - - var display = { - name: 'display', - initialValue: 'inline-block', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).reduce(function (bit, token) { - return bit | parseDisplayValue(token.value); - }, 0 /* 
NONE */); - } - }; - var parseDisplayValue = function (display) { - switch (display) { - case 'block': - case '-webkit-box': - return 2 /* BLOCK */; - case 'inline': - return 4 /* INLINE */; - case 'run-in': - return 8 /* RUN_IN */; - case 'flow': - return 16 /* FLOW */; - case 'flow-root': - return 32 /* FLOW_ROOT */; - case 'table': - return 64 /* TABLE */; - case 'flex': - case '-webkit-flex': - return 128 /* FLEX */; - case 'grid': - case '-ms-grid': - return 256 /* GRID */; - case 'ruby': - return 512 /* RUBY */; - case 'subgrid': - return 1024 /* SUBGRID */; - case 'list-item': - return 2048 /* LIST_ITEM */; - case 'table-row-group': - return 4096 /* TABLE_ROW_GROUP */; - case 'table-header-group': - return 8192 /* TABLE_HEADER_GROUP */; - case 'table-footer-group': - return 16384 /* TABLE_FOOTER_GROUP */; - case 'table-row': - return 32768 /* TABLE_ROW */; - case 'table-cell': - return 65536 /* TABLE_CELL */; - case 'table-column-group': - return 131072 /* TABLE_COLUMN_GROUP */; - case 'table-column': - return 262144 /* TABLE_COLUMN */; - case 'table-caption': - return 524288 /* TABLE_CAPTION */; - case 'ruby-base': - return 1048576 /* RUBY_BASE */; - case 'ruby-text': - return 2097152 /* RUBY_TEXT */; - case 'ruby-base-container': - return 4194304 /* RUBY_BASE_CONTAINER */; - case 'ruby-text-container': - return 8388608 /* RUBY_TEXT_CONTAINER */; - case 'contents': - return 16777216 /* CONTENTS */; - case 'inline-block': - return 33554432 /* INLINE_BLOCK */; - case 'inline-list-item': - return 67108864 /* INLINE_LIST_ITEM */; - case 'inline-table': - return 134217728 /* INLINE_TABLE */; - case 'inline-flex': - return 268435456 /* INLINE_FLEX */; - case 'inline-grid': - return 536870912 /* INLINE_GRID */; - } - return 0 /* NONE */; - }; - - var float = { - name: 'float', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, float) { - switch (float) { - case 'left': - return 1 /* LEFT */; - case 'right': - return 2 /* RIGHT */; - case 'inline-start': - return 3 /* INLINE_START */; - case 'inline-end': - return 4 /* INLINE_END */; - } - return 0 /* NONE */; - } - }; - - var letterSpacing = { - name: 'letter-spacing', - initialValue: '0', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') { - return 0; - } - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 15 /* DIMENSION_TOKEN */) { - return token.number; - } - return 0; - } - }; - - var LINE_BREAK; - (function (LINE_BREAK) { - LINE_BREAK["NORMAL"] = "normal"; - LINE_BREAK["STRICT"] = "strict"; - })(LINE_BREAK || (LINE_BREAK = {})); - var lineBreak = { - name: 'line-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, lineBreak) { - switch (lineBreak) { - case 'strict': - return LINE_BREAK.STRICT; - case 'normal': - default: - return LINE_BREAK.NORMAL; - } - } - }; - - var lineHeight = { - name: 'line-height', - initialValue: 'normal', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }; - var computeLineHeight = function (token, fontSize) { - if (isIdentToken(token) && token.value === 'normal') { - return 1.2 * fontSize; - } - else if (token.type === 17 /* NUMBER_TOKEN */) { - return fontSize * token.number; - } - else if (isLengthPercentage(token)) { - return getAbsoluteValue(token, fontSize); - } - return fontSize; - }; - - var listStyleImage = { - name: 'list-style-image', - initialValue: 
'none', - type: 0 /* VALUE */, - prefix: false, - parse: function (context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - return image.parse(context, token); - } - }; - - var listStylePosition = { - name: 'list-style-position', - initialValue: 'outside', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'inside': - return 0 /* INSIDE */; - case 'outside': - default: - return 1 /* OUTSIDE */; - } - } - }; - - var listStyleType = { - name: 'list-style-type', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, type) { - switch (type) { - case 'disc': - return 0 /* DISC */; - case 'circle': - return 1 /* CIRCLE */; - case 'square': - return 2 /* SQUARE */; - case 'decimal': - return 3 /* DECIMAL */; - case 'cjk-decimal': - return 4 /* CJK_DECIMAL */; - case 'decimal-leading-zero': - return 5 /* DECIMAL_LEADING_ZERO */; - case 'lower-roman': - return 6 /* LOWER_ROMAN */; - case 'upper-roman': - return 7 /* UPPER_ROMAN */; - case 'lower-greek': - return 8 /* LOWER_GREEK */; - case 'lower-alpha': - return 9 /* LOWER_ALPHA */; - case 'upper-alpha': - return 10 /* UPPER_ALPHA */; - case 'arabic-indic': - return 11 /* ARABIC_INDIC */; - case 'armenian': - return 12 /* ARMENIAN */; - case 'bengali': - return 13 /* BENGALI */; - case 'cambodian': - return 14 /* CAMBODIAN */; - case 'cjk-earthly-branch': - return 15 /* CJK_EARTHLY_BRANCH */; - case 'cjk-heavenly-stem': - return 16 /* CJK_HEAVENLY_STEM */; - case 'cjk-ideographic': - return 17 /* CJK_IDEOGRAPHIC */; - case 'devanagari': - return 18 /* DEVANAGARI */; - case 'ethiopic-numeric': - return 19 /* ETHIOPIC_NUMERIC */; - case 'georgian': - return 20 /* GEORGIAN */; - case 'gujarati': - return 21 /* GUJARATI */; - case 'gurmukhi': - return 22 /* GURMUKHI */; - case 'hebrew': - return 22 /* HEBREW */; - case 'hiragana': - return 23 /* HIRAGANA */; - case 'hiragana-iroha': - return 24 /* HIRAGANA_IROHA */; - case 'japanese-formal': - return 25 /* JAPANESE_FORMAL */; - case 'japanese-informal': - return 26 /* JAPANESE_INFORMAL */; - case 'kannada': - return 27 /* KANNADA */; - case 'katakana': - return 28 /* KATAKANA */; - case 'katakana-iroha': - return 29 /* KATAKANA_IROHA */; - case 'khmer': - return 30 /* KHMER */; - case 'korean-hangul-formal': - return 31 /* KOREAN_HANGUL_FORMAL */; - case 'korean-hanja-formal': - return 32 /* KOREAN_HANJA_FORMAL */; - case 'korean-hanja-informal': - return 33 /* KOREAN_HANJA_INFORMAL */; - case 'lao': - return 34 /* LAO */; - case 'lower-armenian': - return 35 /* LOWER_ARMENIAN */; - case 'malayalam': - return 36 /* MALAYALAM */; - case 'mongolian': - return 37 /* MONGOLIAN */; - case 'myanmar': - return 38 /* MYANMAR */; - case 'oriya': - return 39 /* ORIYA */; - case 'persian': - return 40 /* PERSIAN */; - case 'simp-chinese-formal': - return 41 /* SIMP_CHINESE_FORMAL */; - case 'simp-chinese-informal': - return 42 /* SIMP_CHINESE_INFORMAL */; - case 'tamil': - return 43 /* TAMIL */; - case 'telugu': - return 44 /* TELUGU */; - case 'thai': - return 45 /* THAI */; - case 'tibetan': - return 46 /* TIBETAN */; - case 'trad-chinese-formal': - return 47 /* TRAD_CHINESE_FORMAL */; - case 'trad-chinese-informal': - return 48 /* TRAD_CHINESE_INFORMAL */; - case 'upper-armenian': - return 49 /* UPPER_ARMENIAN */; - case 'disclosure-open': - return 50 /* DISCLOSURE_OPEN */; - case 'disclosure-closed': - return 51 /* DISCLOSURE_CLOSED */; - case 
'none': - default: - return -1 /* NONE */; - } - } - }; - - var marginForSide = function (side) { return ({ - name: "margin-" + side, - initialValue: '0', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }); }; - var marginTop = marginForSide('top'); - var marginRight = marginForSide('right'); - var marginBottom = marginForSide('bottom'); - var marginLeft = marginForSide('left'); - - var overflow = { - name: 'overflow', - initialValue: 'visible', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (overflow) { - switch (overflow.value) { - case 'hidden': - return 1 /* HIDDEN */; - case 'scroll': - return 2 /* SCROLL */; - case 'clip': - return 3 /* CLIP */; - case 'auto': - return 4 /* AUTO */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - }); - } - }; - - var overflowWrap = { - name: 'overflow-wrap', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'break-word': - return "break-word" /* BREAK_WORD */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var paddingForSide = function (side) { return ({ - name: "padding-" + side, - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length-percentage' - }); }; - var paddingTop = paddingForSide('top'); - var paddingRight = paddingForSide('right'); - var paddingBottom = paddingForSide('bottom'); - var paddingLeft = paddingForSide('left'); - - var textAlign = { - name: 'text-align', - initialValue: 'left', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textAlign) { - switch (textAlign) { - case 'right': - return 2 /* RIGHT */; - case 'center': - case 'justify': - return 1 /* CENTER */; - case 'left': - default: - return 0 /* LEFT */; - } - } - }; - - var position = { - name: 'position', - initialValue: 'static', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'relative': - return 1 /* RELATIVE */; - case 'absolute': - return 2 /* ABSOLUTE */; - case 'fixed': - return 3 /* FIXED */; - case 'sticky': - return 4 /* STICKY */; - } - return 0 /* STATIC */; - } - }; - - var textShadow = { - name: 'text-shadow', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) { - return []; - } - return parseFunctionArgs(tokens).map(function (values) { - var shadow = { - color: COLORS.TRANSPARENT, - offsetX: ZERO_LENGTH, - offsetY: ZERO_LENGTH, - blur: ZERO_LENGTH - }; - var c = 0; - for (var i = 0; i < values.length; i++) { - var token = values[i]; - if (isLength(token)) { - if (c === 0) { - shadow.offsetX = token; - } - else if (c === 1) { - shadow.offsetY = token; - } - else { - shadow.blur = token; - } - c++; - } - else { - shadow.color = color$1.parse(context, token); - } - } - return shadow; - }); - } - }; - - var textTransform = { - name: 'text-transform', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textTransform) { - switch (textTransform) { - case 'uppercase': - return 2 /* UPPERCASE */; - case 'lowercase': - return 1 /* LOWERCASE */; - case 'capitalize': - return 3 /* CAPITALIZE */; - } - return 0 /* NONE */; - } - }; - - var transform$1 = { - name: 'transform', - initialValue: 'none', - prefix: true, - type: 0 /* VALUE */, - parse: function (_context, token) { - if 
(token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - if (token.type === 18 /* FUNCTION */) { - var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name]; - if (typeof transformFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\""); - } - return transformFunction(token.values); - } - return null; - } - }; - var matrix = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - return values.length === 6 ? values : null; - }; - // doesn't support 3D transforms at the moment - var matrix3d = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15]; - return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null; - }; - var SUPPORTED_TRANSFORM_FUNCTIONS = { - matrix: matrix, - matrix3d: matrix3d - }; - - var DEFAULT_VALUE = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE]; - var transformOrigin = { - name: 'transform-origin', - initialValue: '50% 50%', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var origins = tokens.filter(isLengthPercentage); - if (origins.length !== 2) { - return DEFAULT; - } - return [origins[0], origins[1]]; - } - }; - - var visibility = { - name: 'visible', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, visibility) { - switch (visibility) { - case 'hidden': - return 1 /* HIDDEN */; - case 'collapse': - return 2 /* COLLAPSE */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - } - }; - - var WORD_BREAK; - (function (WORD_BREAK) { - WORD_BREAK["NORMAL"] = "normal"; - WORD_BREAK["BREAK_ALL"] = "break-all"; - WORD_BREAK["KEEP_ALL"] = "keep-all"; - })(WORD_BREAK || (WORD_BREAK = {})); - var wordBreak = { - name: 'word-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, wordBreak) { - switch (wordBreak) { - case 'break-all': - return WORD_BREAK.BREAK_ALL; - case 'keep-all': - return WORD_BREAK.KEEP_ALL; - case 'normal': - default: - return WORD_BREAK.NORMAL; - } - } - }; - - var zIndex = { - name: 'z-index', - initialValue: 'auto', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */) { - return { auto: true, order: 0 }; - } - if (isNumberToken(token)) { - return { auto: false, order: token.number }; - } - throw new Error("Invalid z-index number parsed"); - } - }; - - var time = { - name: 'time', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit.toLowerCase()) { - case 's': - return 1000 * value.number; - case 'ms': - return value.number; - } - } - throw new Error("Unsupported time type"); - } - }; - - var opacity = { - name: 'opacity', - initialValue: '1', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - return 1; - } - }; - - var textDecorationColor = { - name: "text-decoration-color", - initialValue: 'transparent', - 
prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var textDecorationLine = { - name: 'text-decoration-line', - initialValue: 'none', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens - .filter(isIdentToken) - .map(function (token) { - switch (token.value) { - case 'underline': - return 1 /* UNDERLINE */; - case 'overline': - return 2 /* OVERLINE */; - case 'line-through': - return 3 /* LINE_THROUGH */; - case 'none': - return 4 /* BLINK */; - } - return 0 /* NONE */; - }) - .filter(function (line) { return line !== 0 /* NONE */; }); - } - }; - - var fontFamily = { - name: "font-family", - initialValue: '', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var accumulator = []; - var results = []; - tokens.forEach(function (token) { - switch (token.type) { - case 20 /* IDENT_TOKEN */: - case 0 /* STRING_TOKEN */: - accumulator.push(token.value); - break; - case 17 /* NUMBER_TOKEN */: - accumulator.push(token.number.toString()); - break; - case 4 /* COMMA_TOKEN */: - results.push(accumulator.join(' ')); - accumulator.length = 0; - break; - } - }); - if (accumulator.length) { - results.push(accumulator.join(' ')); - } - return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); }); - } - }; - - var fontSize = { - name: "font-size", - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length' - }; - - var fontWeight = { - name: 'font-weight', - initialValue: 'normal', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - if (isIdentToken(token)) { - switch (token.value) { - case 'bold': - return 700; - case 'normal': - default: - return 400; - } - } - return 400; - } - }; - - var fontVariant = { - name: 'font-variant', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (token) { return token.value; }); - } - }; - - var fontStyle = { - name: 'font-style', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'oblique': - return "oblique" /* OBLIQUE */; - case 'italic': - return "italic" /* ITALIC */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var contains = function (bit, value) { return (bit & value) !== 0; }; - - var content = { - name: 'content', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens; - } - }; - - var counterIncrement = { - name: 'counter-increment', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var increments = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (counter.type === 20 /* IDENT_TOKEN */) { - var increment = next && isNumberToken(next) ? 
next.number : 1; - increments.push({ counter: counter.value, increment: increment }); - } - } - return increments; - } - }; - - var counterReset = { - name: 'counter-reset', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var resets = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (isIdentToken(counter) && counter.value !== 'none') { - var reset = next && isNumberToken(next) ? next.number : 0; - resets.push({ counter: counter.value, reset: reset }); - } - } - return resets; - } - }; - - var duration = { - name: 'duration', - initialValue: '0s', - prefix: false, - type: 1 /* LIST */, - parse: function (context, tokens) { - return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); }); - } - }; - - var quotes = { - name: 'quotes', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var quotes = []; - var filtered = tokens.filter(isStringToken); - if (filtered.length % 2 !== 0) { - return null; - } - for (var i = 0; i < filtered.length; i += 2) { - var open_1 = filtered[i].value; - var close_1 = filtered[i + 1].value; - quotes.push({ open: open_1, close: close_1 }); - } - return quotes; - } - }; - var getQuote = function (quotes, depth, open) { - if (!quotes) { - return ''; - } - var quote = quotes[Math.min(depth, quotes.length - 1)]; - if (!quote) { - return ''; - } - return open ? quote.open : quote.close; - }; - - var paintOrder = { - name: 'paint-order', - initialValue: 'normal', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */]; - var layers = []; - tokens.filter(isIdentToken).forEach(function (token) { - switch (token.value) { - case 'stroke': - layers.push(1 /* STROKE */); - break; - case 'fill': - layers.push(0 /* FILL */); - break; - case 'markers': - layers.push(2 /* MARKERS */); - break; - } - }); - DEFAULT_VALUE.forEach(function (value) { - if (layers.indexOf(value) === -1) { - layers.push(value); - } - }); - return layers; - } - }; - - var webkitTextStrokeColor = { - name: "-webkit-text-stroke-color", - initialValue: 'currentcolor', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var webkitTextStrokeWidth = { - name: "-webkit-text-stroke-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }; - - var CSSParsedDeclaration = /** @class */ (function () { - function CSSParsedDeclaration(context, declaration) { - var _a, _b; - this.animationDuration = parse(context, duration, declaration.animationDuration); - this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip); - this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor); - this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage); - this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin); - this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition); - this.backgroundRepeat = 
parse(context, backgroundRepeat, declaration.backgroundRepeat); - this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize); - this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor); - this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor); - this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor); - this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor); - this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius); - this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius); - this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius); - this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius); - this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle); - this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle); - this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle); - this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle); - this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth); - this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth); - this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth); - this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth); - this.color = parse(context, color, declaration.color); - this.direction = parse(context, direction, declaration.direction); - this.display = parse(context, display, declaration.display); - this.float = parse(context, float, declaration.cssFloat); - this.fontFamily = parse(context, fontFamily, declaration.fontFamily); - this.fontSize = parse(context, fontSize, declaration.fontSize); - this.fontStyle = parse(context, fontStyle, declaration.fontStyle); - this.fontVariant = parse(context, fontVariant, declaration.fontVariant); - this.fontWeight = parse(context, fontWeight, declaration.fontWeight); - this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing); - this.lineBreak = parse(context, lineBreak, declaration.lineBreak); - this.lineHeight = parse(context, lineHeight, declaration.lineHeight); - this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage); - this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition); - this.listStyleType = parse(context, listStyleType, declaration.listStyleType); - this.marginTop = parse(context, marginTop, declaration.marginTop); - this.marginRight = parse(context, marginRight, declaration.marginRight); - this.marginBottom = parse(context, marginBottom, declaration.marginBottom); - this.marginLeft = parse(context, marginLeft, declaration.marginLeft); - this.opacity = parse(context, opacity, declaration.opacity); - var overflowTuple = parse(context, overflow, declaration.overflow); - this.overflowX = overflowTuple[0]; - this.overflowY = overflowTuple[overflowTuple.length > 1 ? 
1 : 0]; - this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap); - this.paddingTop = parse(context, paddingTop, declaration.paddingTop); - this.paddingRight = parse(context, paddingRight, declaration.paddingRight); - this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom); - this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft); - this.paintOrder = parse(context, paintOrder, declaration.paintOrder); - this.position = parse(context, position, declaration.position); - this.textAlign = parse(context, textAlign, declaration.textAlign); - this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color); - this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration); - this.textShadow = parse(context, textShadow, declaration.textShadow); - this.textTransform = parse(context, textTransform, declaration.textTransform); - this.transform = parse(context, transform$1, declaration.transform); - this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin); - this.visibility = parse(context, visibility, declaration.visibility); - this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor); - this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth); - this.wordBreak = parse(context, wordBreak, declaration.wordBreak); - this.zIndex = parse(context, zIndex, declaration.zIndex); - } - CSSParsedDeclaration.prototype.isVisible = function () { - return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */; - }; - CSSParsedDeclaration.prototype.isTransparent = function () { - return isTransparent(this.backgroundColor); - }; - CSSParsedDeclaration.prototype.isTransformed = function () { - return this.transform !== null; - }; - CSSParsedDeclaration.prototype.isPositioned = function () { - return this.position !== 0 /* STATIC */; - }; - CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () { - return this.isPositioned() && !this.zIndex.auto; - }; - CSSParsedDeclaration.prototype.isFloating = function () { - return this.float !== 0 /* NONE */; - }; - CSSParsedDeclaration.prototype.isInlineLevel = function () { - return (contains(this.display, 4 /* INLINE */) || - contains(this.display, 33554432 /* INLINE_BLOCK */) || - contains(this.display, 268435456 /* INLINE_FLEX */) || - contains(this.display, 536870912 /* INLINE_GRID */) || - contains(this.display, 67108864 /* INLINE_LIST_ITEM */) || - contains(this.display, 134217728 /* INLINE_TABLE */)); - }; - return CSSParsedDeclaration; - }()); - var CSSParsedPseudoDeclaration = /** @class */ (function () { - function CSSParsedPseudoDeclaration(context, declaration) { - this.content = parse(context, content, declaration.content); - this.quotes = parse(context, quotes, declaration.quotes); - } - return CSSParsedPseudoDeclaration; - }()); - var CSSParsedCounterDeclaration = /** @class */ (function () { - function CSSParsedCounterDeclaration(context, declaration) { - this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement); - this.counterReset = parse(context, counterReset, declaration.counterReset); - } - return CSSParsedCounterDeclaration; - }()); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var parse = function (context, descriptor, 
style) { - var tokenizer = new Tokenizer(); - var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue; - tokenizer.write(value); - var parser = new Parser(tokenizer.read()); - switch (descriptor.type) { - case 2 /* IDENT_VALUE */: - var token = parser.parseComponentValue(); - return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue); - case 0 /* VALUE */: - return descriptor.parse(context, parser.parseComponentValue()); - case 1 /* LIST */: - return descriptor.parse(context, parser.parseComponentValues()); - case 4 /* TOKEN_VALUE */: - return parser.parseComponentValue(); - case 3 /* TYPE_VALUE */: - switch (descriptor.format) { - case 'angle': - return angle.parse(context, parser.parseComponentValue()); - case 'color': - return color$1.parse(context, parser.parseComponentValue()); - case 'image': - return image.parse(context, parser.parseComponentValue()); - case 'length': - var length_1 = parser.parseComponentValue(); - return isLength(length_1) ? length_1 : ZERO_LENGTH; - case 'length-percentage': - var value_1 = parser.parseComponentValue(); - return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH; - case 'time': - return time.parse(context, parser.parseComponentValue()); - } - break; - } - }; - - var elementDebuggerAttribute = 'data-html2canvas-debug'; - var getElementDebugType = function (element) { - var attribute = element.getAttribute(elementDebuggerAttribute); - switch (attribute) { - case 'all': - return 1 /* ALL */; - case 'clone': - return 2 /* CLONE */; - case 'parse': - return 3 /* PARSE */; - case 'render': - return 4 /* RENDER */; - default: - return 0 /* NONE */; - } - }; - var isDebugging = function (element, type) { - var elementType = getElementDebugType(element); - return elementType === 1 /* ALL */ || type === elementType; - }; - - var ElementContainer = /** @class */ (function () { - function ElementContainer(context, element) { - this.context = context; - this.textNodes = []; - this.elements = []; - this.flags = 0; - if (isDebugging(element, 3 /* PARSE */)) { - debugger; - } - this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null)); - if (isHTMLElementNode(element)) { - if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) { - element.style.animationDuration = '0s'; - } - if (this.styles.transform !== null) { - // getBoundingClientRect takes transforms into account - element.style.transform = 'none'; - } - } - this.bounds = parseBounds(this.context, element); - if (isDebugging(element, 4 /* RENDER */)) { - this.flags |= 16 /* DEBUG_RENDER */; - } - } - return ElementContainer; - }()); - - /* - * text-segmentation 1.0.3 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var base64 = 
'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACA
AIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIA
AgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAA
AAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAA
UABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA='; - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1 = 0; i$1 < chars$1.length; i$1++) { - lookup$1[chars$1.charCodeAt(i$1)] = i$1; - } - var decode = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1[base64.charCodeAt(i)]; - encoded2 = lookup$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. 
- */ - var UTRIE2_INDEX_SHIFT = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1; - var slice16 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64 = function (base64, _byteLength) { - var buffer = decode(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? 
slice16(view16, (headerLength + view32[4]) / 2) - : slice32(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup = typeof Uint8Array === 'undefined' ? 
[] : new Uint8Array(256); - for (var i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; - } - - var Prepend = 1; - var CR = 2; - var LF = 3; - var Control = 4; - var Extend = 5; - var SpacingMark = 7; - var L = 8; - var V = 9; - var T = 10; - var LV = 11; - var LVT = 12; - var ZWJ = 13; - var Extended_Pictographic = 14; - var RI = 15; - var toCodePoints = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var UnicodeTrie = createTrieFromBase64(base64); - var BREAK_NOT_ALLOWED = '×'; - var BREAK_ALLOWED = '÷'; - var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); }; - var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) { - var prevIndex = index - 2; - var prev = classTypes[prevIndex]; - var current = classTypes[index - 1]; - var next = classTypes[index]; - // GB3 Do not break between a CR and LF - if (current === CR && next === LF) { - return BREAK_NOT_ALLOWED; - } - // GB4 Otherwise, break before and after controls. - if (current === CR || current === LF || current === Control) { - return BREAK_ALLOWED; - } - // GB5 - if (next === CR || next === LF || next === Control) { - return BREAK_ALLOWED; - } - // Do not break Hangul syllable sequences. - // GB6 - if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED; - } - // GB7 - if ((current === LV || current === V) && (next === V || next === T)) { - return BREAK_NOT_ALLOWED; - } - // GB8 - if ((current === LVT || current === T) && next === T) { - return BREAK_NOT_ALLOWED; - } - // GB9 Do not break before extending characters or ZWJ. - if (next === ZWJ || next === Extend) { - return BREAK_NOT_ALLOWED; - } - // Do not break before SpacingMarks, or after Prepend characters. - // GB9a - if (next === SpacingMark) { - return BREAK_NOT_ALLOWED; - } - // GB9a - if (current === Prepend) { - return BREAK_NOT_ALLOWED; - } - // GB11 Do not break within emoji modifier sequences or emoji zwj sequences. - if (current === ZWJ && next === Extended_Pictographic) { - while (prev === Extend) { - prev = classTypes[--prevIndex]; - } - if (prev === Extended_Pictographic) { - return BREAK_NOT_ALLOWED; - } - } - // GB12 Do not break within emoji flag sequences. - // That is, do not break between regional indicator (RI) symbols - // if there is an odd number of RI characters before the break point. 
- if (current === RI && next === RI) { - var countRI = 0; - while (prev === RI) { - countRI++; - prev = classTypes[--prevIndex]; - } - if (countRI % 2 === 0) { - return BREAK_NOT_ALLOWED; - } - } - return BREAK_ALLOWED; - }; - var GraphemeBreaker = function (str) { - var codePoints = toCodePoints(str); - var length = codePoints.length; - var index = 0; - var lastEnd = 0; - var classTypes = codePoints.map(codePointToClass); - return { - next: function () { - if (index >= length) { - return { done: true, value: null }; - } - var graphemeBreak = BREAK_NOT_ALLOWED; - while (index < length && - (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { } - if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) { - var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index)); - lastEnd = index; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - var splitGraphemes = function (str) { - var breaker = GraphemeBreaker(str); - var graphemes = []; - var bk; - while (!(bk = breaker.next()).done) { - if (bk.value) { - graphemes.push(bk.value.slice()); - } - } - return graphemes; - }; - - var testRangeBounds = function (document) { - var TEST_HEIGHT = 123; - if (document.createRange) { - var range = document.createRange(); - if (range.getBoundingClientRect) { - var testElement = document.createElement('boundtest'); - testElement.style.height = TEST_HEIGHT + "px"; - testElement.style.display = 'block'; - document.body.appendChild(testElement); - range.selectNode(testElement); - var rangeBounds = range.getBoundingClientRect(); - var rangeHeight = Math.round(rangeBounds.height); - document.body.removeChild(testElement); - if (rangeHeight === TEST_HEIGHT) { - return true; - } - } - } - return false; - }; - var testIOSLineBreak = function (document) { - var testElement = document.createElement('boundtest'); - testElement.style.width = '50px'; - testElement.style.display = 'block'; - testElement.style.fontSize = '12px'; - testElement.style.letterSpacing = '0px'; - testElement.style.wordSpacing = '0px'; - document.body.appendChild(testElement); - var range = document.createRange(); - testElement.innerHTML = typeof ''.repeat === 'function' ? 
'👨'.repeat(10) : ''; - var node = testElement.firstChild; - var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); }); - var offset = 0; - var prev = {}; - // ios 13 does not handle range getBoundingClientRect line changes correctly #2177 - var supports = textList.every(function (text, i) { - range.setStart(node, offset); - range.setEnd(node, offset + text.length); - var rect = range.getBoundingClientRect(); - offset += text.length; - var boundAhead = rect.x > prev.x || rect.y > prev.y; - prev = rect; - if (i === 0) { - return true; - } - return boundAhead; - }); - document.body.removeChild(testElement); - return supports; - }; - var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; }; - var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; }; - var testSVG = function (document) { - var img = new Image(); - var canvas = document.createElement('canvas'); - var ctx = canvas.getContext('2d'); - if (!ctx) { - return false; - } - img.src = "data:image/svg+xml,"; - try { - ctx.drawImage(img, 0, 0); - canvas.toDataURL(); - } - catch (e) { - return false; - } - return true; - }; - var isGreenPixel = function (data) { - return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255; - }; - var testForeignObject = function (document) { - var canvas = document.createElement('canvas'); - var size = 100; - canvas.width = size; - canvas.height = size; - var ctx = canvas.getContext('2d'); - if (!ctx) { - return Promise.reject(false); - } - ctx.fillStyle = 'rgb(0, 255, 0)'; - ctx.fillRect(0, 0, size, size); - var img = new Image(); - var greenImageSrc = canvas.toDataURL(); - img.src = greenImageSrc; - var svg = createForeignObjectSVG(size, size, 0, 0, img); - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - return loadSerializedSVG$1(svg) - .then(function (img) { - ctx.drawImage(img, 0, 0); - var data = ctx.getImageData(0, 0, size, size).data; - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - var node = document.createElement('div'); - node.style.backgroundImage = "url(" + greenImageSrc + ")"; - node.style.height = size + "px"; - // Firefox 55 does not render inline tags - return isGreenPixel(data) - ? 
loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node)) - : Promise.reject(false); - }) - .then(function (img) { - ctx.drawImage(img, 0, 0); - // Edge does not render background-images - return isGreenPixel(ctx.getImageData(0, 0, size, size).data); - }) - .catch(function () { return false; }); - }; - var createForeignObjectSVG = function (width, height, x, y, node) { - var xmlns = 'http://www.w3.org/2000/svg'; - var svg = document.createElementNS(xmlns, 'svg'); - var foreignObject = document.createElementNS(xmlns, 'foreignObject'); - svg.setAttributeNS(null, 'width', width.toString()); - svg.setAttributeNS(null, 'height', height.toString()); - foreignObject.setAttributeNS(null, 'width', '100%'); - foreignObject.setAttributeNS(null, 'height', '100%'); - foreignObject.setAttributeNS(null, 'x', x.toString()); - foreignObject.setAttributeNS(null, 'y', y.toString()); - foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true'); - svg.appendChild(foreignObject); - foreignObject.appendChild(node); - return svg; - }; - var loadSerializedSVG$1 = function (svg) { - return new Promise(function (resolve, reject) { - var img = new Image(); - img.onload = function () { return resolve(img); }; - img.onerror = reject; - img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg)); - }); - }; - var FEATURES = { - get SUPPORT_RANGE_BOUNDS() { - var value = testRangeBounds(document); - Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value }); - return value; - }, - get SUPPORT_WORD_BREAKING() { - var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document); - Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value }); - return value; - }, - get SUPPORT_SVG_DRAWING() { - var value = testSVG(document); - Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value }); - return value; - }, - get SUPPORT_FOREIGNOBJECT_DRAWING() { - var value = typeof Array.from === 'function' && typeof window.fetch === 'function' - ? 
testForeignObject(document) - : Promise.resolve(false); - Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value }); - return value; - }, - get SUPPORT_CORS_IMAGES() { - var value = testCORS(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value }); - return value; - }, - get SUPPORT_RESPONSE_TYPE() { - var value = testResponseType(); - Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value }); - return value; - }, - get SUPPORT_CORS_XHR() { - var value = 'withCredentials' in new XMLHttpRequest(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value }); - return value; - }, - get SUPPORT_NATIVE_TEXT_SEGMENTATION() { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter); - Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value }); - return value; - } - }; - - var TextBounds = /** @class */ (function () { - function TextBounds(text, bounds) { - this.text = text; - this.bounds = bounds; - } - return TextBounds; - }()); - var parseTextBounds = function (context, value, styles, node) { - var textList = breakText(value, styles); - var textBounds = []; - var offset = 0; - textList.forEach(function (text) { - if (styles.textDecorationLine.length || text.trim().length > 0) { - if (FEATURES.SUPPORT_RANGE_BOUNDS) { - var clientRects = createRange(node, offset, text.length).getClientRects(); - if (clientRects.length > 1) { - var subSegments = segmentGraphemes(text); - var subOffset_1 = 0; - subSegments.forEach(function (subSegment) { - textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects()))); - subOffset_1 += subSegment.length; - }); - } - else { - textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects))); - } - } - else { - var replacementNode = node.splitText(text.length); - textBounds.push(new TextBounds(text, getWrapperBounds(context, node))); - node = replacementNode; - } - } - else if (!FEATURES.SUPPORT_RANGE_BOUNDS) { - node = node.splitText(text.length); - } - offset += text.length; - }); - return textBounds; - }; - var getWrapperBounds = function (context, node) { - var ownerDocument = node.ownerDocument; - if (ownerDocument) { - var wrapper = ownerDocument.createElement('html2canvaswrapper'); - wrapper.appendChild(node.cloneNode(true)); - var parentNode = node.parentNode; - if (parentNode) { - parentNode.replaceChild(wrapper, node); - var bounds = parseBounds(context, wrapper); - if (wrapper.firstChild) { - parentNode.replaceChild(wrapper.firstChild, wrapper); - } - return bounds; - } - } - return Bounds.EMPTY; - }; - var createRange = function (node, offset, length) { - var ownerDocument = node.ownerDocument; - if (!ownerDocument) { - throw new Error('Node has no owner document'); - } - var range = ownerDocument.createRange(); - range.setStart(node, offset); - range.setEnd(node, offset + length); - return range; - }; - var segmentGraphemes = function (value) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return splitGraphemes(value); - }; - var segmentWords = function (value, 
styles) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { - granularity: 'word' - }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return breakWords(value, styles); - }; - var breakText = function (value, styles) { - return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles); - }; - // https://drafts.csswg.org/css-text/#word-separator - var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091]; - var breakWords = function (str, styles) { - var breaker = LineBreaker(str, { - lineBreak: styles.lineBreak, - wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak - }); - var words = []; - var bk; - var _loop_1 = function () { - if (bk.value) { - var value = bk.value.slice(); - var codePoints = toCodePoints$1(value); - var word_1 = ''; - codePoints.forEach(function (codePoint) { - if (wordSeparators.indexOf(codePoint) === -1) { - word_1 += fromCodePoint$1(codePoint); - } - else { - if (word_1.length) { - words.push(word_1); - } - words.push(fromCodePoint$1(codePoint)); - word_1 = ''; - } - }); - if (word_1.length) { - words.push(word_1); - } - } - }; - while (!(bk = breaker.next()).done) { - _loop_1(); - } - return words; - }; - - var TextContainer = /** @class */ (function () { - function TextContainer(context, node, styles) { - this.text = transform(node.data, styles.textTransform); - this.textBounds = parseTextBounds(context, this.text, styles, node); - } - return TextContainer; - }()); - var transform = function (text, transform) { - switch (transform) { - case 1 /* LOWERCASE */: - return text.toLowerCase(); - case 3 /* CAPITALIZE */: - return text.replace(CAPITALIZE, capitalize); - case 2 /* UPPERCASE */: - return text.toUpperCase(); - default: - return text; - } - }; - var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g; - var capitalize = function (m, p1, p2) { - if (m.length > 0) { - return p1 + p2.toUpperCase(); - } - return m; - }; - - var ImageElementContainer = /** @class */ (function (_super) { - __extends(ImageElementContainer, _super); - function ImageElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - _this.src = img.currentSrc || img.src; - _this.intrinsicWidth = img.naturalWidth; - _this.intrinsicHeight = img.naturalHeight; - _this.context.cache.addImage(_this.src); - return _this; - } - return ImageElementContainer; - }(ElementContainer)); - - var CanvasElementContainer = /** @class */ (function (_super) { - __extends(CanvasElementContainer, _super); - function CanvasElementContainer(context, canvas) { - var _this = _super.call(this, context, canvas) || this; - _this.canvas = canvas; - _this.intrinsicWidth = canvas.width; - _this.intrinsicHeight = canvas.height; - return _this; - } - return CanvasElementContainer; - }(ElementContainer)); - - var SVGElementContainer = /** @class */ (function (_super) { - __extends(SVGElementContainer, _super); - function SVGElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - var s = new XMLSerializer(); - var bounds = parseBounds(context, img); - img.setAttribute('width', bounds.width + "px"); - img.setAttribute('height', bounds.height + "px"); - _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img)); - 
_this.intrinsicWidth = img.width.baseVal.value; - _this.intrinsicHeight = img.height.baseVal.value; - _this.context.cache.addImage(_this.svg); - return _this; - } - return SVGElementContainer; - }(ElementContainer)); - - var LIElementContainer = /** @class */ (function (_super) { - __extends(LIElementContainer, _super); - function LIElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return LIElementContainer; - }(ElementContainer)); - - var OLElementContainer = /** @class */ (function (_super) { - __extends(OLElementContainer, _super); - function OLElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.start = element.start; - _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true; - return _this; - } - return OLElementContainer; - }(ElementContainer)); - - var CHECKBOX_BORDER_RADIUS = [ - { - type: 15 /* DIMENSION_TOKEN */, - flags: 0, - unit: 'px', - number: 3 - } - ]; - var RADIO_BORDER_RADIUS = [ - { - type: 16 /* PERCENTAGE_TOKEN */, - flags: 0, - number: 50 - } - ]; - var reformatInputBounds = function (bounds) { - if (bounds.width > bounds.height) { - return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height); - } - else if (bounds.width < bounds.height) { - return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width); - } - return bounds; - }; - var getInputValue = function (node) { - var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value; - return value.length === 0 ? node.placeholder || '' : value; - }; - var CHECKBOX = 'checkbox'; - var RADIO = 'radio'; - var PASSWORD = 'password'; - var INPUT_COLOR = 0x2a2a2aff; - var InputElementContainer = /** @class */ (function (_super) { - __extends(InputElementContainer, _super); - function InputElementContainer(context, input) { - var _this = _super.call(this, context, input) || this; - _this.type = input.type.toLowerCase(); - _this.checked = input.checked; - _this.value = getInputValue(input); - if (_this.type === CHECKBOX || _this.type === RADIO) { - _this.styles.backgroundColor = 0xdededeff; - _this.styles.borderTopColor = - _this.styles.borderRightColor = - _this.styles.borderBottomColor = - _this.styles.borderLeftColor = - 0xa5a5a5ff; - _this.styles.borderTopWidth = - _this.styles.borderRightWidth = - _this.styles.borderBottomWidth = - _this.styles.borderLeftWidth = - 1; - _this.styles.borderTopStyle = - _this.styles.borderRightStyle = - _this.styles.borderBottomStyle = - _this.styles.borderLeftStyle = - 1 /* SOLID */; - _this.styles.backgroundClip = [0 /* BORDER_BOX */]; - _this.styles.backgroundOrigin = [0 /* BORDER_BOX */]; - _this.bounds = reformatInputBounds(_this.bounds); - } - switch (_this.type) { - case CHECKBOX: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - CHECKBOX_BORDER_RADIUS; - break; - case RADIO: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - RADIO_BORDER_RADIUS; - break; - } - return _this; - } - return InputElementContainer; - }(ElementContainer)); - - var SelectElementContainer = /** @class */ (function (_super) { - __extends(SelectElementContainer, _super); - function 
SelectElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - var option = element.options[element.selectedIndex || 0]; - _this.value = option ? option.text || '' : ''; - return _this; - } - return SelectElementContainer; - }(ElementContainer)); - - var TextareaElementContainer = /** @class */ (function (_super) { - __extends(TextareaElementContainer, _super); - function TextareaElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return TextareaElementContainer; - }(ElementContainer)); - - var IFrameElementContainer = /** @class */ (function (_super) { - __extends(IFrameElementContainer, _super); - function IFrameElementContainer(context, iframe) { - var _this = _super.call(this, context, iframe) || this; - _this.src = iframe.src; - _this.width = parseInt(iframe.width, 10) || 0; - _this.height = parseInt(iframe.height, 10) || 0; - _this.backgroundColor = _this.styles.backgroundColor; - try { - if (iframe.contentWindow && - iframe.contentWindow.document && - iframe.contentWindow.document.documentElement) { - _this.tree = parseTree(context, iframe.contentWindow.document.documentElement); - // http://www.w3.org/TR/css3-background/#special-backgrounds - var documentBackgroundColor = iframe.contentWindow.document.documentElement - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor) - : COLORS.TRANSPARENT; - var bodyBackgroundColor = iframe.contentWindow.document.body - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor) - : COLORS.TRANSPARENT; - _this.backgroundColor = isTransparent(documentBackgroundColor) - ? isTransparent(bodyBackgroundColor) - ? 
_this.styles.backgroundColor - : bodyBackgroundColor - : documentBackgroundColor; - } - } - catch (e) { } - return _this; - } - return IFrameElementContainer; - }(ElementContainer)); - - var LIST_OWNERS = ['OL', 'UL', 'MENU']; - var parseNodeTree = function (context, node, parent, root) { - for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) { - nextNode = childNode.nextSibling; - if (isTextNode(childNode) && childNode.data.trim().length > 0) { - parent.textNodes.push(new TextContainer(context, childNode, parent.styles)); - } - else if (isElementNode(childNode)) { - if (isSlotElement(childNode) && childNode.assignedNodes) { - childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); }); - } - else { - var container = createContainer(context, childNode); - if (container.styles.isVisible()) { - if (createsRealStackingContext(childNode, container, root)) { - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - } - else if (createsStackingContext(container.styles)) { - container.flags |= 2 /* CREATES_STACKING_CONTEXT */; - } - if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) { - container.flags |= 8 /* IS_LIST_OWNER */; - } - parent.elements.push(container); - childNode.slot; - if (childNode.shadowRoot) { - parseNodeTree(context, childNode.shadowRoot, container, root); - } - else if (!isTextareaElement(childNode) && - !isSVGElement(childNode) && - !isSelectElement(childNode)) { - parseNodeTree(context, childNode, container, root); - } - } - } - } - } - }; - var createContainer = function (context, element) { - if (isImageElement(element)) { - return new ImageElementContainer(context, element); - } - if (isCanvasElement(element)) { - return new CanvasElementContainer(context, element); - } - if (isSVGElement(element)) { - return new SVGElementContainer(context, element); - } - if (isLIElement(element)) { - return new LIElementContainer(context, element); - } - if (isOLElement(element)) { - return new OLElementContainer(context, element); - } - if (isInputElement(element)) { - return new InputElementContainer(context, element); - } - if (isSelectElement(element)) { - return new SelectElementContainer(context, element); - } - if (isTextareaElement(element)) { - return new TextareaElementContainer(context, element); - } - if (isIFrameElement(element)) { - return new IFrameElementContainer(context, element); - } - return new ElementContainer(context, element); - }; - var parseTree = function (context, element) { - var container = createContainer(context, element); - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - parseNodeTree(context, element, container, container); - return container; - }; - var createsRealStackingContext = function (node, container, root) { - return (container.styles.isPositionedWithZIndex() || - container.styles.opacity < 1 || - container.styles.isTransformed() || - (isBodyElement(node) && root.styles.isTransparent())); - }; - var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); }; - var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; }; - var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; }; - var isHTMLElementNode = function (node) { - return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node); - }; - var isSVGElementNode = function (element) { - return typeof element.className === 'object'; - }; - var isLIElement = function 
(node) { return node.tagName === 'LI'; }; - var isOLElement = function (node) { return node.tagName === 'OL'; }; - var isInputElement = function (node) { return node.tagName === 'INPUT'; }; - var isHTMLElement = function (node) { return node.tagName === 'HTML'; }; - var isSVGElement = function (node) { return node.tagName === 'svg'; }; - var isBodyElement = function (node) { return node.tagName === 'BODY'; }; - var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; }; - var isVideoElement = function (node) { return node.tagName === 'VIDEO'; }; - var isImageElement = function (node) { return node.tagName === 'IMG'; }; - var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; }; - var isStyleElement = function (node) { return node.tagName === 'STYLE'; }; - var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; }; - var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; }; - var isSelectElement = function (node) { return node.tagName === 'SELECT'; }; - var isSlotElement = function (node) { return node.tagName === 'SLOT'; }; - // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name - var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; }; - - var CounterState = /** @class */ (function () { - function CounterState() { - this.counters = {}; - } - CounterState.prototype.getCounterValue = function (name) { - var counter = this.counters[name]; - if (counter && counter.length) { - return counter[counter.length - 1]; - } - return 1; - }; - CounterState.prototype.getCounterValues = function (name) { - var counter = this.counters[name]; - return counter ? counter : []; - }; - CounterState.prototype.pop = function (counters) { - var _this = this; - counters.forEach(function (counter) { return _this.counters[counter].pop(); }); - }; - CounterState.prototype.parse = function (style) { - var _this = this; - var counterIncrement = style.counterIncrement; - var counterReset = style.counterReset; - var canReset = true; - if (counterIncrement !== null) { - counterIncrement.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - if (counter && entry.increment !== 0) { - canReset = false; - if (!counter.length) { - counter.push(1); - } - counter[Math.max(0, counter.length - 1)] += entry.increment; - } - }); - } - var counterNames = []; - if (canReset) { - counterReset.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - counterNames.push(entry.counter); - if (!counter) { - counter = _this.counters[entry.counter] = []; - } - counter.push(entry.reset); - }); - } - return counterNames; - }; - return CounterState; - }()); - var ROMAN_UPPER = { - integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1], - values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I'] - }; - var ARMENIAN = { - integers: [ - 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70, - 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'Ք', - 'Փ', - 'Ւ', - 'Ց', - 'Ր', - 'Տ', - 'Վ', - 'Ս', - 'Ռ', - 'Ջ', - 'Պ', - 'Չ', - 'Ո', - 'Շ', - 'Ն', - 'Յ', - 'Մ', - 'Ճ', - 'Ղ', - 'Ձ', - 'Հ', - 'Կ', - 'Ծ', - 'Խ', - 'Լ', - 'Ի', - 'Ժ', - 'Թ', - 'Ը', - 'Է', - 'Զ', - 'Ե', - 'Դ', - 'Գ', - 'Բ', - 'Ա' - ] - }; - var HEBREW = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20, - 19, 18, 17, 16, 15, 10, 9, 8, 7, 
6, 5, 4, 3, 2, 1 - ], - values: [ - 'י׳', - 'ט׳', - 'ח׳', - 'ז׳', - 'ו׳', - 'ה׳', - 'ד׳', - 'ג׳', - 'ב׳', - 'א׳', - 'ת', - 'ש', - 'ר', - 'ק', - 'צ', - 'פ', - 'ע', - 'ס', - 'נ', - 'מ', - 'ל', - 'כ', - 'יט', - 'יח', - 'יז', - 'טז', - 'טו', - 'י', - 'ט', - 'ח', - 'ז', - 'ו', - 'ה', - 'ד', - 'ג', - 'ב', - 'א' - ] - }; - var GEORGIAN = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, - 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'ჵ', - 'ჰ', - 'ჯ', - 'ჴ', - 'ხ', - 'ჭ', - 'წ', - 'ძ', - 'ც', - 'ჩ', - 'შ', - 'ყ', - 'ღ', - 'ქ', - 'ფ', - 'ჳ', - 'ტ', - 'ს', - 'რ', - 'ჟ', - 'პ', - 'ო', - 'ჲ', - 'ნ', - 'მ', - 'ლ', - 'კ', - 'ი', - 'თ', - 'ჱ', - 'ზ', - 'ვ', - 'ე', - 'დ', - 'გ', - 'ბ', - 'ა' - ] - }; - var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) { - if (value < min || value > max) { - return createCounterText(value, fallback, suffix.length > 0); - } - return (symbols.integers.reduce(function (string, integer, index) { - while (value >= integer) { - value -= integer; - string += symbols.values[index]; - } - return string; - }, '') + suffix); - }; - var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) { - var string = ''; - do { - if (!isNumeric) { - value--; - } - string = resolver(value) + string; - value /= codePointRangeLength; - } while (value * codePointRangeLength >= codePointRangeLength); - return string; - }; - var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) { - var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1; - return ((value < 0 ? '-' : '') + - (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) { - return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart); - }) + - suffix)); - }; - var createCounterStyleFromSymbols = function (value, symbols, suffix) { - if (suffix === void 0) { suffix = '. '; } - var codePointRangeLength = symbols.length; - return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix); - }; - var CJK_ZEROS = 1 << 0; - var CJK_TEN_COEFFICIENTS = 1 << 1; - var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2; - var CJK_HUNDRED_COEFFICIENTS = 1 << 3; - var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) { - if (value < -9999 || value > 9999) { - return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0); - } - var tmp = Math.abs(value); - var string = suffix; - if (tmp === 0) { - return numbers[0] + string; - } - for (var digit = 0; tmp > 0 && digit <= 4; digit++) { - var coefficient = tmp % 10; - if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') { - string = numbers[coefficient] + string; - } - else if (coefficient > 1 || - (coefficient === 1 && digit === 0) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) || - (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) { - string = numbers[coefficient] + (digit > 0 ? 
multipliers[digit - 1] : '') + string; - } - else if (coefficient === 1 && digit > 0) { - string = multipliers[digit - 1] + string; - } - tmp = Math.floor(tmp / 10); - } - return (value < 0 ? negativeSign : '') + string; - }; - var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬'; - var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬'; - var JAPANESE_NEGATIVE = 'マイナス'; - var KOREAN_NEGATIVE = '마이너스'; - var createCounterText = function (value, type, appendSuffix) { - var defaultSuffix = appendSuffix ? '. ' : ''; - var cjkSuffix = appendSuffix ? '、' : ''; - var koreanSuffix = appendSuffix ? ', ' : ''; - var spaceSuffix = appendSuffix ? ' ' : ''; - switch (type) { - case 0 /* DISC */: - return '•' + spaceSuffix; - case 1 /* CIRCLE */: - return '◦' + spaceSuffix; - case 2 /* SQUARE */: - return '◾' + spaceSuffix; - case 5 /* DECIMAL_LEADING_ZERO */: - var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - return string.length < 4 ? "0" + string : string; - case 4 /* CJK_DECIMAL */: - return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix); - case 6 /* LOWER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 7 /* UPPER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix); - case 8 /* LOWER_GREEK */: - return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix); - case 9 /* LOWER_ALPHA */: - return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix); - case 10 /* UPPER_ALPHA */: - return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix); - case 11 /* ARABIC_INDIC */: - return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix); - case 12 /* ARMENIAN */: - case 49 /* UPPER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix); - case 35 /* LOWER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 13 /* BENGALI */: - return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix); - case 14 /* CAMBODIAN */: - case 30 /* KHMER */: - return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix); - case 15 /* CJK_EARTHLY_BRANCH */: - return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix); - case 16 /* CJK_HEAVENLY_STEM */: - return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix); - case 17 /* CJK_IDEOGRAPHIC */: - case 48 /* TRAD_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 47 /* TRAD_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 42 /* SIMP_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 41 /* SIMP_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 26 /* JAPANESE_INFORMAL */: - return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0); - case 25 /* JAPANESE_FORMAL */: - return createCJKCounter(value, 
'零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 31 /* KOREAN_HANGUL_FORMAL */: - return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 33 /* KOREAN_HANJA_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0); - case 32 /* KOREAN_HANJA_FORMAL */: - return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 18 /* DEVANAGARI */: - return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix); - case 20 /* GEORGIAN */: - return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix); - case 21 /* GUJARATI */: - return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix); - case 22 /* GURMUKHI */: - return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix); - case 22 /* HEBREW */: - return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix); - case 23 /* HIRAGANA */: - return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん'); - case 24 /* HIRAGANA_IROHA */: - return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす'); - case 27 /* KANNADA */: - return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix); - case 28 /* KATAKANA */: - return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix); - case 29 /* KATAKANA_IROHA */: - return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix); - case 34 /* LAO */: - return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix); - case 37 /* MONGOLIAN */: - return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix); - case 38 /* MYANMAR */: - return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix); - case 39 /* ORIYA */: - return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix); - case 40 /* PERSIAN */: - return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix); - case 43 /* TAMIL */: - return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix); - case 44 /* TELUGU */: - return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix); - case 45 /* THAI */: - return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix); - case 46 /* TIBETAN */: - return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix); - case 3 /* DECIMAL */: - default: - return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - } - }; - - var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore'; - var DocumentCloner = /** @class */ (function () { - function DocumentCloner(context, element, options) { - this.context = context; - this.options = options; - this.scrolledElements = []; - this.referenceElement = element; - this.counters = new CounterState(); - this.quoteDepth = 0; - if (!element.ownerDocument) { - throw new Error('Cloned element does not have an owner document'); - } - this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false); - } - DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) { - var _this = this; - var iframe = createIFrameContainer(ownerDocument, 
windowSize); - if (!iframe.contentWindow) { - return Promise.reject("Unable to find iframe window"); - } - var scrollX = ownerDocument.defaultView.pageXOffset; - var scrollY = ownerDocument.defaultView.pageYOffset; - var cloneWindow = iframe.contentWindow; - var documentClone = cloneWindow.document; - /* Chrome doesn't detect relative background-images assigned in inline - - - -
    -

    Daniel Chih

    - -
    -
    1- How did you hear about SharpestMinds? What motivated you to do a mentorship with SM?
    - Found via a Google search - it was the first result. Was looking to start mentoring to help and guide the next generation of people and make them comfortable with the career transition. Also a great way to reinforce learning by teaching and mentoring.
    Has tried an ISA before but didn't have a good experience, so not very keen on trying that - but open to working with PAYG.

    2- What has your career journey in Data Engineering been like?
    - Pursued Mechanical Engineering and worked as a design engineer for the first one and a half years.
    - Moved on to an application project management role for 4 years, which also involved sales. Got introduced to data and cloud in this role and helped the company save $$ along the way.
    - Did a D.E. bootcamp and landed a job as a data engineering consultant.
    - Currently working as a senior data engineer at Nasdaq - leading projects and managing cloud services. Always wanted to work in capital markets and financial institutions.

    3- Previous mentorship experience?
    - Has been mentoring with the bootcamp where he did his data engineering course.

    4- What mistakes do beginners make, or what challenges do they face, when breaking into the Data Engineering field?
    - Having the right mindset and motivation. Having a good support system that has a positive impact on what they want to achieve and how to go about it. Understanding that not everyone learns the same way, and not comparing themselves to other people and their journeys. Having a focused goal, and a good mentor, can be helpful. D.E. is a broad and massive field, and it is easy to get overwhelmed by the amount of information available online. SQL is a good starting point for learning a language, but understanding how to work with data is just as important.

    5- Questions about SM?
    - What are the next steps and what does the process look like?
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/awacke1/AIArtReviewStreamlit/app.py b/spaces/awacke1/AIArtReviewStreamlit/app.py deleted file mode 100644 index c5596cc1419e6d76b9792a698f82611e6f0fcf49..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AIArtReviewStreamlit/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import streamlit as st -import gradio as gr -import IPython -import streamlit as st -import streamlit.components.v1 as components -from IPython.display import IFrame - -src='' # URL parameter to change the iframe url -def SetIframeURL(option_selected): - if (option_selected=='Collager'): - src='https://www.artbreeder.com/' - if (option_selected=='Midjourney'): - src='https://www.midjourney.com/' - if (option_selected=='DreamStudio'): - src='https://beta.dreamstudio.ai/' - if (option_selected=='NightCafe'): - src='https://creator.nightcafe.studio/' - if (option_selected=='RunwayML'): - src='https://app.runwayml.com/' - if (option_selected=='ArtFromTextandImages'): - src='https://huggingface.co/spaces/awacke1/Art-from-Text-and-Images' - if (option_selected=='Boomy'): - src='https://boomy.com/' - - width = st.sidebar.slider("Width", 200, 1500, 800, 100) - height = st.sidebar.slider("Height", 200, 1500, 900, 100) - st.components.v1.iframe(src, width, height, scrolling=True) - -try: - options = ['Midjourney', 'RunwayML', 'ArtFromTextandImages', 'Boomy', 'Collager', 'DreamStudio', 'NightCafe' ] - query_params = st.experimental_get_query_params() - query_option = query_params['option'][0] #throws an exception when visiting http://host:port - option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option)) - if option_selected: - st.experimental_set_query_params(option=option_selected) - SetIframeURL(option_selected) -except: - options = ['Midjourney', 'RunwayML', 'ArtFromTextandImages', 'Boomy', 'Collager', 'DreamStudio', 'NightCafe' ] - st.experimental_set_query_params(option=options[1]) # defaults to 1 - query_params = st.experimental_get_query_params() - query_option = query_params['option'][0] - option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option)) - if option_selected: - st.experimental_set_query_params(option=option_selected) - SetIframeURL(option_selected) \ No newline at end of file diff --git a/spaces/awacke1/DatasetAnalyzer/app.py b/spaces/awacke1/DatasetAnalyzer/app.py deleted file mode 100644 index fe5983a53b9f2dc3b19dd4eefebeb46c8c5d1527..0000000000000000000000000000000000000000 --- a/spaces/awacke1/DatasetAnalyzer/app.py +++ /dev/null @@ -1,99 +0,0 @@ -from typing import List, Dict -import httpx -import gradio as gr -import pandas as pd - -async def get_splits(dataset_name: str) -> Dict[str, List[Dict]]: - URL = f"https://datasets-server.huggingface.co/splits?dataset={dataset_name}" - async with httpx.AsyncClient() as session: - response = await session.get(URL) - return response.json() - -async def get_valid_datasets() -> Dict[str, List[str]]: - URL = f"https://datasets-server.huggingface.co/valid" - async with httpx.AsyncClient() as session: - response = await session.get(URL) - datasets = response.json()["valid"] - return gr.Dropdown.update(choices=datasets, value="awacke1/ChatbotMemory.csv") - # The one to watch: https://huggingface.co/rungalileo - # rungalileo/medical_transcription_40 - -async def get_first_rows(dataset: str, config: str, split: str) -> Dict[str, Dict[str, List[Dict]]]: - URL = 
f"https://datasets-server.huggingface.co/first-rows?dataset={dataset}&config={config}&split={split}" - async with httpx.AsyncClient() as session: - response = await session.get(URL) - print(URL) - gr.Markdown(URL) - return response.json() - -def get_df_from_rows(api_output): - dfFromSort = pd.DataFrame([row["row"] for row in api_output["rows"]]) - try: - dfFromSort.sort_values(by=1, axis=1, ascending=True, inplace=False, kind='mergesort', na_position='last', ignore_index=False, key=None) - except: - print("Exception sorting due to keyerror?") - return dfFromSort - -async def update_configs(dataset_name: str): - splits = await get_splits(dataset_name) - all_configs = sorted(set([s["config"] for s in splits["splits"]])) - return (gr.Dropdown.update(choices=all_configs, value=all_configs[0]), - splits) - -async def update_splits(config_name: str, state: gr.State): - splits_for_config = sorted(set([s["split"] for s in state["splits"] if s["config"] == config_name])) - dataset_name = state["splits"][0]["dataset"] - dataset = await update_dataset(splits_for_config[0], config_name, dataset_name) - return (gr.Dropdown.update(choices=splits_for_config, value=splits_for_config[0]), dataset) - -async def update_dataset(split_name: str, config_name: str, dataset_name: str): - rows = await get_first_rows(dataset_name, config_name, split_name) - df = get_df_from_rows(rows) - return df - -# Guido von Roissum: https://www.youtube.com/watch?v=-DVyjdw4t9I -async def update_URL(dataset: str, config: str, split: str) -> str: - URL = f"https://datasets-server.huggingface.co/first-rows?dataset={dataset}&config={config}&split={split}" - URL = f"https://huggingface.co/datasets/{split}" - return (URL) - -async def openurl(URL: str) -> str: - html = f"{URL}" - return (html) - -with gr.Blocks() as demo: - gr.Markdown("

    🥫Datasetter📊 Datasets Analyzer and Transformer

    ") - gr.Markdown("""
    Curated Datasets: Kaggle. NLM UMLS. LOINC. ICD10 Diagnosis. ICD11. Papers,Code,Datasets for SOTA in Medicine. Mental. Behavior. CMS Downloads. CMS CPT and HCPCS Procedures and Services """) - - splits_data = gr.State() - - with gr.Row(): - dataset_name = gr.Dropdown(label="Dataset", interactive=True) - config = gr.Dropdown(label="Subset", interactive=True) - split = gr.Dropdown(label="Split", interactive=True) - - with gr.Row(): - #filterleft = gr.Textbox(label="First Column Filter",placeholder="Filter Column 1") - URLcenter = gr.Textbox(label="Dataset URL", placeholder="URL") - btn = gr.Button("Use Dataset") - #URLoutput = gr.Textbox(label="Output",placeholder="URL Output") - URLoutput = gr.HTML(label="Output",placeholder="URL Output") - - with gr.Row(): - dataset = gr.DataFrame(wrap=True, interactive=True) - - demo.load(get_valid_datasets, inputs=None, outputs=[dataset_name]) - - dataset_name.change(update_configs, inputs=[dataset_name], outputs=[config, splits_data]) - config.change(update_splits, inputs=[config, splits_data], outputs=[split, dataset]) - split.change(update_dataset, inputs=[split, config, dataset_name], outputs=[dataset]) - - dataset_name.change(update_URL, inputs=[split, config, dataset_name], outputs=[URLcenter]) - - btn.click(openurl, [URLcenter], URLoutput) - -demo.launch(debug=True) - -# original: https://huggingface.co/spaces/freddyaboulton/dataset-viewer -- Freddy thanks! Your examples are the best. -# playlist on Gradio and Mermaid: https://www.youtube.com/watch?v=o7kCD4aWMR4&list=PLHgX2IExbFosW7hWNryq8hs2bt2aj91R- -# Link to Mermaid model and code: [![](https://mermaid.ink/img/pako:eNp1U8mO2zAM_RXCZ-eQpZccCmSZTIpOMQESIAdnDrRMx0JkydXSNDOYfy_lpUgD1AfBfnx8fCTlj0SYgpJ5UipzFRVaD4flSQM_YjwafcVJ9-FCfrbYVGA0ZQeLUkt9futiOM72pEh4QFijR9iTf2tzsx3Z0ti6hxslvb_Lm0TSNPvBDhQsg1TFXXAag7NBef_9hdDqFA6knbEbdgvGwu7mjRXVkDOLOV-yNXmytdQEsoROvTfi4EhK9XTSxUNz_mo4uVHm1lPyce-uR1k_n2RHymHRNPAvNXaTT7NVZYwjeDECVbS4UiYUAyc2lc-yFoPXxkujHaAl2G54PCjIpfBssZAGtsZ5KlLYkjWXkMLiuOfjPVhiymr3_x4qS7wicneTFuMW6Gdxlb6Cb7oJvt1LbEpMso08sza8MnqskA9jL27Ij72Jafb0G-tGkQNTdgKOy_XcFP5GDxFbWsJLV3FQid2LWfZsfpHVqAXBCBYa1e2dAHUBu5Ar6dgby0ghPWxQWk2Oh_L0M0h_S2Ep0YHUrXFHXD_msefo5XEkfFWBK8atdkA7mgfoalpATJI0qfnWoCz4b_iI0VPiK6rplMz5taASg_Kn5KQ_mYrBm_1Ni2TubaA0CU2BntYSeQl1Mi9ROfr8A8FBGds?type=png)](https://mermaid.live/edit#pako:eNp1U8mO2zAM_RXCZ-eQpZccCmSZTIpOMQESIAdnDrRMx0JkydXSNDOYfy_lpUgD1AfBfnx8fCTlj0SYgpJ5UipzFRVaD4flSQM_YjwafcVJ9-FCfrbYVGA0ZQeLUkt9futiOM72pEh4QFijR9iTf2tzsx3Z0ti6hxslvb_Lm0TSNPvBDhQsg1TFXXAag7NBef_9hdDqFA6knbEbdgvGwu7mjRXVkDOLOV-yNXmytdQEsoROvTfi4EhK9XTSxUNz_mo4uVHm1lPyce-uR1k_n2RHymHRNPAvNXaTT7NVZYwjeDECVbS4UiYUAyc2lc-yFoPXxkujHaAl2G54PCjIpfBssZAGtsZ5KlLYkjWXkMLiuOfjPVhiymr3_x4qS7wicneTFuMW6Gdxlb6Cb7oJvt1LbEpMso08sza8MnqskA9jL27Ij72Jafb0G-tGkQNTdgKOy_XcFP5GDxFbWsJLV3FQid2LWfZsfpHVqAXBCBYa1e2dAHUBu5Ar6dgby0ghPWxQWk2Oh_L0M0h_S2Ep0YHUrXFHXD_msefo5XEkfFWBK8atdkA7mgfoalpATJI0qfnWoCz4b_iI0VPiK6rplMz5taASg_Kn5KQ_mYrBm_1Ni2TubaA0CU2BntYSeQl1Mi9ROfr8A8FBGds) diff --git a/spaces/awacke1/Gradio-Gallery-Iceland/README.md b/spaces/awacke1/Gradio-Gallery-Iceland/README.md deleted file mode 100644 index 5fedd10e15b2ebb9a0572726e1555457efa703ab..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Gradio-Gallery-Iceland/README.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: 👁🥽UI Gallery of Icon Sets for AI Animated User Interfaces 📱👁 Gradio -emoji: 👁🥽📱👁 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: 
awacke1/Gradio-Gallery-Health-Medical-Icon-Sets ---- -# Integration in Health Care -1. Interoperability -2. Data Standardization -3. Predictive Analytics -4. Clinical Decision Support -5. Data Quality - -# Prior Authorization Medical Necessity Requirements by Policy -1. Policy Understanding -2. Claims Processing -3. Policy Compliance -4. Fraud Detection -5. Policy Optimization - -# CCD Summarization -1. Data Extraction -2. Data Standardization -3. Summarization -4. Data Visualization -5. Longitudinal Analysis - -# 🩺 AI Applications in Healthcare - -## 🏥 Integration in Health Care - -AI can be used in multiple ways in healthcare integration, especially with HL7v2, v3, v4 for ADT, SIU, ORM, CCDA, and FHIR. - -### 🔁 Interoperability -- 🧩 Understanding and mapping different versions of HL7 messages. -- 🔄 Seamless data exchange between disparate systems. - -### 📊 Data Standardization -- 🔄 Transforming data in different standards to a common format. -- ✔️ Enabling more effective data usage across different healthcare systems. - -### 📈 Predictive Analytics -- 🔮 Predicting patient outcomes based on data from different health care standards. -- 🎯 Enabling better patient care. - -### 🩺 Clinical Decision Support -- 🧠 Providing clinical decision support by analyzing data from different healthcare standards. - -### ✅ Data Quality -- 🔍 Detecting and correcting errors in different healthcare standards. -- ⬆️ Improving the quality of healthcare data. - -## 📄 Prior Authorization Medical Necessity Requirements by Policy - -AI can help in several ways in the area of Prior Authorization Medical Necessity Requirements by policy. - -### 📚 Policy Understanding -- 📖 Understanding the nuances of different policies. -- 💡 Determining the medical necessity requirements. - -### 📝 Claims Processing -- 💼 Processing claims more efficiently. -- 📄 Understanding the medical necessity requirements. - -### ✅ Policy Compliance -- ☑️ Ensuring all medical procedures comply with the necessary policies. - -### ⚠️ Fraud Detection -- 🕵️ Detecting any fraudulent activities. -- 🔍 Comparing the claims with the medical necessity requirements. - -### 🔧 Policy Optimization -- 📈 Suggesting improvements to policies based on analysis of past claims and medical necessity requirements. - -## 📋 CCD Summarization - -AI can play a crucial role in CCD summarization, creating longitudinal and easy-to-understand clinical summaries. - -### 📑 Data Extraction -- 🔎 Extracting relevant information from CCD for summarization. - -### 📊 Data Standardization -- 🔄 Standardizing the extracted information for easier understanding. - -### 🖊️ Summarization -- 📝 Summarizing the CCD in a way that's easy to understand for both clinicians and patients. - -### 📊 Data Visualization -- 🎨 Creating visual summaries of CCD for easier comprehension. - -### 🗓️ Longitudinal Analysis -- 📈 Creating longitudinal summaries of patient health data. -- 🕰️ Monitoring patient progress over time. - -# 🧓 Medicare and Medicaid Innovations -- 📚 Understanding different requirements and regulations. -- 💼 Processing claims more efficiently. -- ⚠️ Detecting fraud. -- 🩺 Providing decision support to clinicians. 
- diff --git a/spaces/awacke1/Northern.Lights.Map.Streamlit.Folium/app.py b/spaces/awacke1/Northern.Lights.Map.Streamlit.Folium/app.py deleted file mode 100644 index e9ee579383b7a8c04752b5c4a3905e34761f9a28..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Northern.Lights.Map.Streamlit.Folium/app.py +++ /dev/null @@ -1,87 +0,0 @@ -import streamlit as st -import altair as alt -from vega_datasets import data -import pandas as pd -import pydeck as pdk - -# Define the data source for the map -iceland_geojson = "https://raw.githubusercontent.com/deldersveld/topojson/master/countries/iceland/iceland-counties.json" - -# Define the mythological cities with their respective latitude and longitude -cities = { - "Asgard": [64.7938, -17.3413], - "Helheim": [63.8278, -21.1865], - "Jotunheim": [64.8441, -19.1669], - "Midgard": [63.9863, -22.6210], - "Muspellheim": [65.3201, -16.4235], - "Nidavellir": [64.9011, -18.0580], - "Svartalfheim": [64.0114, -21.4504], - "Valhalla": [63.7267, -19.6133], - "Vanaheim": [64.7381, -17.4497], - "Yggdrasil": [64.8999, -19.0044] -} - -# Define the colors to use for each city marker -colors = { - "Asgard": [255, 0, 0], - "Helheim": [255, 255, 0], - "Jotunheim": [0, 0, 255], - "Midgard": [0, 255, 0], - "Muspellheim": [255, 153, 0], - "Nidavellir": [153, 0, 255], - "Svartalfheim": [0, 255, 255], - "Valhalla": [255, 0, 255], - "Vanaheim": [153, 255, 0], - "Yggdrasil": [255, 255, 255] -} - -# Define the Streamlit app layout -st.set_page_config(layout="wide") -st.title("Mythological Cities in Iceland") -st.sidebar.title("Select a City to View the Northern Lights From") -selected_city = st.sidebar.selectbox("", sorted(cities.keys())) - -# Load the Icelandic county boundaries and prepare the data for the 3D map -iceland_data = alt.topo_feature(iceland_geojson, "iceland-counties") -iceland_df = pd.DataFrame(iceland_data["features"]) - -# Create the 3D map with Deck.gl and Altair -view_state = pdk.ViewState(latitude=64.9, longitude=-18.5, zoom=5, pitch=40) -layers = [ - pdk.Layer( - "PolygonLayer", - data=iceland_data, - get_polygon="coordinates", - filled=True, - extruded=True, - get_elevation="properties.avg_elevation", - elevation_scale=1000, - get_fill_color=[200, 200, 200, 200] - ), - pdk.Layer( - "ScatterplotLayer", - data=pd.DataFrame({"latitude": [cities[selected_city][0]], "longitude": [cities[selected_city][1]]}), - get_position="[longitude, latitude]", - get_color=colors[selected_city], - get_radius=20000 - ) -] - -r = pdk.Deck(layers=layers, initial_view_state=view_state) -altair_chart = alt.Chart(iceland_df).mark_geoshape( -stroke="black", -strokeWidth=0.5 - ).encode( - color=alt.Color("properties.avg_elevation:Q", scale=alt.Scale(scheme="viridis")), - tooltip=[alt.Tooltip("properties.name", title="County"), alt.Tooltip("properties.avg_elevation:Q", title="Elevation (m)")] - ).transform_lookup( - lookup="id", - from_=alt.LookupData(iceland_data, "id", ["properties.avg_elevation", "properties.name"]) - ).properties( - width=900, - height=600 -).interactive() - -# Display the 3D map and the Altair chart -st.pydeck_chart(r) -st.altair_chart(altair_chart) diff --git a/spaces/awacke1/Streamlit-Google-Maps-Massachusetts/README.md b/spaces/awacke1/Streamlit-Google-Maps-Massachusetts/README.md deleted file mode 100644 index aaac1c3ffb1ad59325902226a8cbd9e4aa6da6bf..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit-Google-Maps-Massachusetts/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🏥 Massachusetts Medical Centers 🌊 -emoji: 🏥🌊 
-colorFrom: green -colorTo: indigo -sdk: streamlit -sdk_version: 1.28.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/StreamlitSuperPowerCheatSheet/README.md b/spaces/awacke1/StreamlitSuperPowerCheatSheet/README.md deleted file mode 100644 index 7788467f50d77e5a241eea53530360b3fc3f5687..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitSuperPowerCheatSheet/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: StreamlitSuperPowerCheatSheet -emoji: 📈 -colorFrom: yellow -colorTo: green -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- diff --git a/spaces/awinml/api_vicuna-openblas/README.md b/spaces/awinml/api_vicuna-openblas/README.md deleted file mode 100644 index 493ad7c25a9ecbb73a2a61d645a81126cd379c60..0000000000000000000000000000000000000000 --- a/spaces/awinml/api_vicuna-openblas/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Api Vicuna -emoji: 👩‍💻 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -python_version: 3.9.13 -app_file: app.py -pinned: false -license: mit -duplicated_from: awinml/api_vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g-GGML ---- diff --git a/spaces/badayvedat/AudioSep/models/CLAP/training/data.py b/spaces/badayvedat/AudioSep/models/CLAP/training/data.py deleted file mode 100644 index c1f1b50166afcaa698690860f6d1b51b6f267b13..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/training/data.py +++ /dev/null @@ -1,975 +0,0 @@ -import ast -import json -import logging -import math -import os -import random -import h5py -from dataclasses import dataclass -from models.CLAP.training.params import parse_args -import braceexpand -import numpy as np -import pandas as pd -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.datasets as datasets -import torchvision.transforms -import webdataset as wds -from PIL import Image -from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler -from torch.utils.data.distributed import DistributedSampler -from functools import partial -import soundfile as sf -import io -from pathlib import Path -import wget - -from models.CLAP.open_clip.utils import get_tar_path_from_dataset_name, dataset_split -from models.CLAP.open_clip.utils import load_p, load_class_label -import tempfile -import copy - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - -try: - import torchaudio -except ImportError: - torchaudio = None - -from models.CLAP.open_clip import tokenize - - -def tokenizer(text): - return tokenize(text).squeeze(0) - - -from transformers import RobertaTokenizer - -tokenize = RobertaTokenizer.from_pretrained("roberta-base") - - -def tokenizer(text): - result = tokenize( - text, - padding="max_length", - truncation=True, - max_length=77, - return_tensors="pt", - ) - return {k: v.squeeze(0) for k, v in result.items()} - - -# initizlied the audioset map -_AUDIOSET_MAP_PATH = os.path.join(Path(__file__).parent, "audioset_textmap.npy") -_AUDIOSET_MAP = np.load(_AUDIOSET_MAP_PATH, allow_pickle=True) - - -def int16_to_float32(x): - return (x / 32767.0).astype(np.float32) - - -def float32_to_int16(x): - x = np.clip(x, a_min=-1.0, a_max=1.0) - return (x * 32767.0).astype(np.int16) - - -# For Toy Dataset -class ToyDataset(Dataset): - def __init__(self, index_path, ipc, config, eval_mode=False): - """Toy Dataset for testing the audioset input 
with text labels - Parameters - ---------- - index_path: str - the link to the h5 file of each audio - idc: str - the link to the npy file, the number of samples in each class - config: dict - the audio cfg file - eval_model (bool): to indicate if the dataset is a testing dataset - """ - self.audio_cfg = config["audio_cfg"] - self.text_cfg = config["text_cfg"] - self.fp = h5py.File(index_path, "r") - self.ipc = np.load(ipc, allow_pickle=True) - self.total_size = len(self.fp["audio_name"]) - self.classes_num = self.audio_cfg["class_num"] - self.eval_mode = eval_mode - - if not eval_mode: - self.generate_queue() - else: - self.queue = [] - for i in range(self.total_size): - target = self.fp["target"][i] - if np.sum(target) > 0: - self.queue.append(i) - self.total_size = len(self.queue) - logging.info("total dataset size: %d" % (self.total_size)) - logging.info("class num: %d" % (self.classes_num)) - - def time_shifting(self, x): - frame_num = len(x) - shift_len = random.randint(0, frame_num - 1) - new_sample = np.concatenate([x[shift_len:], x[:shift_len]], axis=0) - return new_sample - - def generate_queue(self): - self.queue = [] - while len(self.queue) < self.total_size: - class_set = [*range(self.classes_num)] - random.shuffle(class_set) - self.queue += [ - self.ipc[d][random.randint(0, len(self.ipc[d]) - 1)] for d in class_set - ] - self.queue = self.queue[: self.total_size] - - logging.info("queue regenerated:%s" % (self.queue[-5:])) - - def crop_wav(self, x): - crop_size = self.audio_cfg["crop_size"] - crop_pos = random.randint(0, len(x) - crop_size - 1) - return x[crop_pos : crop_pos + crop_size] - - def prompt_text(self, target): - events = _AUDIOSET_MAP[np.where(target > 0)] - event_text = "The sounds of " + ", ".join(events[:-1]) + " and " + events[-1] - text = tokenize(event_text)[0] - return text - - def __getitem__(self, index): - """Load waveform, text, and target of an audio clip - - Parameters - ---------- - index: int - the index number - Return - ------ - output: dict { - "hdf5_path": str, - "index_in_hdf5": int, - "audio_name": str, - "waveform": list (audio_length,), - "target": list (class_num, ), - "text": torch.tensor (context_length,) - } - the output dictionary - """ - s_index = self.queue[index] - - audio_name = self.fp["audio_name"][s_index].decode() - # Hardcode here CHANGE - hdf5_path = ( - self.fp["hdf5_path"][s_index] - .decode() - .replace( - "../workspace", - "/home/la/kechen/Research/ke_zsasp/workspace", - ) - ) - r_idx = self.fp["index_in_hdf5"][s_index] - target = self.fp["target"][s_index].astype(np.float32) - text = self.prompt_text(target) - with h5py.File(hdf5_path, "r") as f: - waveform = int16_to_float32(f["waveform"][r_idx])[ - : self.audio_cfg["clip_samples"] - ] - assert ( - len(waveform) == self.audio_cfg["clip_samples"] - ), "The sample length is not match" - # Time shift - # if (self.config.enable_time_shift) and (not self.eval_mode): - # waveform = self.time_shifting(waveform) - # # Label Enhance - # if (self.config.crop_size is not None) and (not self.eval_mode): - # waveform = self.crop_wav(waveform) - # # the label enhance rate is fixed 0.5 - # if (self.config.enable_label_enhance) and (not self.eval_mode) and random.random() < 0.5: - # kidx = np.where(target)[0] - # for k in kidx: - # for add_key in self.class_map[k][1]: - # target[add_key] = 1.0 - # if len(self.class_map[k][2]) > 0: - # add_key = random.choice(self.class_map[k][2]) - # target[add_key] = 1.0 - - # missing the text input - mel_spec = get_mel(torch.from_numpy(waveform), 
self.audio_cfg)[None, :, :] - mel_spec = ( - torch.cat( - [mel_spec, mel_spec.clone(), mel_spec.clone(), mel_spec.clone()], dim=0 - ) - .cpu() - .numpy() - ) - longer = random.choice([True, False]) - if longer == False: - mel_spec[1:, :, :] = 0.0 - data_dict = { - "hdf5_path": hdf5_path, - "index_in_hdf5": r_idx, - "audio_name": audio_name, - "waveform": waveform, - "class_label": target, - "text": text, - "longer": longer, - "mel_fusion": mel_spec, - } - return data_dict - - def __len__(self): - return self.total_size - - -class CsvDataset(Dataset): - def __init__(self, input_filename, transforms, img_key, caption_key, sep="\t"): - logging.debug(f"Loading csv data from {input_filename}.") - df = pd.read_csv(input_filename, sep=sep) - - self.images = df[img_key].tolist() - self.captions = df[caption_key].tolist() - self.transforms = transforms - logging.debug("Done loading data.") - - def __len__(self): - return len(self.captions) - - def __getitem__(self, idx): - images = self.transforms(Image.open(str(self.images[idx]))) - texts = tokenize([str(self.captions[idx])])[0] - return images, texts - - -@dataclass -class DataInfo: - dataloader: DataLoader - sampler: DistributedSampler - - -def preprocess_txt(text): - return tokenize([str(text)])[0] - - -def get_dataset_size(shards, sizefilepath_=None, is_local=True): - if isinstance(shards, list): - size_list = [] - for s in shards: - size_list.append( - get_dataset_size(s, sizefilepath_=sizefilepath_, is_local=is_local)[0] - ) - else: - if not is_local: - for n in dataset_split.keys(): - if n in shards.split("/"): - break - for s in dataset_split[n]: - if s in shards.split("/"): - break - sizefilepath_ = f"./json_files/{n}/{s}/sizes.json" - shards_list = list(braceexpand.braceexpand(shards)) - dir_path = os.path.dirname(shards) - if sizefilepath_ is not None: - sizes = json.load(open(sizefilepath_, "r")) - total_size = sum( - [ - int(sizes[os.path.basename(shard.replace(".tar -", ".tar"))]) - for shard in shards_list - ] - ) - else: - sizes_filename = os.path.join(dir_path, "sizes.json") - len_filename = os.path.join(dir_path, "__len__") - if os.path.exists(sizes_filename): - sizes = json.load(open(sizes_filename, "r")) - total_size = sum( - [int(sizes[os.path.basename(shard)]) for shard in shards_list] - ) - elif os.path.exists(len_filename): - # FIXME this used to be eval(open(...)) but that seemed rather unsafe - total_size = ast.literal_eval(open(len_filename, "r").read()) - else: - raise Exception( - "Cannot find sizes file for dataset. Please specify the path to the file." 
- ) - # total_size = None # num samples undefined - # some common dataset sizes (at time of authors last download) - # cc3m-train: 2905954 - # cc12m: 10968539 - # LAION-400m: 407332084 - num_shards = len(shards_list) - if isinstance(shards, list): - return sum(size_list), len(shards) - else: - return total_size, num_shards - - -def get_imagenet(args, preprocess_fns, split): - assert split in ["train", "val", "v2"] - is_train = split == "train" - preprocess_train, preprocess_val = preprocess_fns - - if split == "v2": - from imagenetv2_pytorch import ImageNetV2Dataset - - dataset = ImageNetV2Dataset(location=args.imagenet_v2, transform=preprocess_val) - else: - if is_train: - data_path = args.imagenet_train - preprocess_fn = preprocess_train - else: - data_path = args.imagenet_val - preprocess_fn = preprocess_val - assert data_path - - dataset = datasets.ImageFolder(data_path, transform=preprocess_fn) - - if is_train: - idxs = np.zeros(len(dataset.targets)) - target_array = np.array(dataset.targets) - k = 50 - for c in range(1000): - m = target_array == c - n = len(idxs[m]) - arr = np.zeros(n) - arr[:k] = 1 - np.random.shuffle(arr) - idxs[m] = arr - - idxs = idxs.astype("int") - sampler = SubsetRandomSampler(np.where(idxs)[0]) - else: - sampler = None - - dataloader = torch.utils.data.DataLoader( - dataset, - batch_size=args.batch_size, - num_workers=args.workers, - sampler=sampler, - ) - - return DataInfo(dataloader, sampler) - - -def count_samples(dataloader): - os.environ["WDS_EPOCH"] = "0" - n_elements, n_batches = 0, 0 - for images, texts in dataloader: - n_batches += 1 - n_elements += len(images) - assert len(images) == len(texts) - return n_elements, n_batches - - -def filter_no_caption(sample): - return "txt" in sample - - -def log_and_continue(exn): - """Call in an exception handler to ignore any exception, isssue a warning, and continue.""" - logging.warning(f"Handling webdataset error ({repr(exn)}). Ignoring.") - return True - - -_SHARD_SHUFFLE_SIZE = 2000 -_SHARD_SHUFFLE_INITIAL = 500 -_SAMPLE_SHUFFLE_SIZE = 5000 -_SAMPLE_SHUFFLE_INITIAL = 1000 - - -def sample_prop(sizefile, inputs, proportion, is_local=True): - """ - Sample a proportion of the data. 
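- sizefile: path (or URL when is_local is False) to a sizes.json mapping each tar shard name to its sample count; inputs: the list of shard paths; proportion: the fraction of shards to keep.
- Returns (total samples across the sampled shards, number of sampled shards, the sampled shard paths, the per-shard size dict).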
- """ - file_path_dict = { - os.path.split(inputs[i])[1]: os.path.split(inputs[i])[0] - for i in range(len(inputs)) - } - sampled_filepath_dict = {} - sampled_size_dict = {} - if not is_local: - if os.path.exists("sizes.json"): - os.remove("sizes.json") - wget.download(sizefile, "sizes.json") - sizefile = "sizes.json" - with open(sizefile, "r", encoding="UTF-8") as f: - load_dict = json.load(f) - L = int(len(file_path_dict) * proportion) - subkeys = random.sample(file_path_dict.keys(), L) - for k in subkeys: - sampled_size_dict[k] = load_dict[k] - sampled_filepath_dict[k] = file_path_dict[k] - return ( - sum(sampled_size_dict.values()), - L, - [os.path.join(v, k) for k, v in sampled_filepath_dict.items()], - sampled_size_dict, - ) - - -def get_mel(audio_data, audio_cfg): - # mel shape: (n_mels, T) - mel = torchaudio.transforms.MelSpectrogram( - sample_rate=audio_cfg["sample_rate"], - n_fft=audio_cfg["window_size"], - win_length=audio_cfg["window_size"], - hop_length=audio_cfg["hop_size"], - center=True, - pad_mode="reflect", - power=2.0, - norm=None, - onesided=True, - n_mels=64, - f_min=audio_cfg["fmin"], - f_max=audio_cfg["fmax"], - ).to(audio_data.device) - mel = mel(audio_data) - # Align to librosa: - # librosa_melspec = librosa.feature.melspectrogram( - # waveform, - # sr=audio_cfg['sample_rate'], - # n_fft=audio_cfg['window_size'], - # hop_length=audio_cfg['hop_size'], - # win_length=audio_cfg['window_size'], - # center=True, - # pad_mode="reflect", - # power=2.0, - # n_mels=64, - # norm=None, - # htk=True, - # f_min=audio_cfg['fmin'], - # f_max=audio_cfg['fmax'] - # ) - # we use log mel spectrogram as input - mel = torchaudio.transforms.AmplitudeToDB(top_db=None)(mel) - return mel.T # (T, n_mels) - - -def get_audio_features( - sample, audio_data, max_len, data_truncating, data_filling, audio_cfg -): - """ - Calculate and add audio features to sample. - Sample: a dict containing all the data of current sample. - audio_data: a tensor of shape (T) containing audio data. - max_len: the maximum length of audio data. - data_truncating: the method of truncating data. - data_filling: the method of filling data. - audio_cfg: a dict containing audio configuration. Comes from model_cfg['audio_cfg']. - """ - with torch.no_grad(): - if len(audio_data) > max_len: - if data_truncating == "rand_trunc": - longer = torch.tensor([True]) - elif data_truncating == "fusion": - # fusion - mel = get_mel(audio_data, audio_cfg) - # split to three parts - chunk_frames = ( - max_len // audio_cfg["hop_size"] + 1 - ) # the +1 related to how the spectrogram is computed - total_frames = mel.shape[0] - if chunk_frames == total_frames: - # there is a corner case where the audio length is - # larger than max_len but smaller than max_len+hop_size. - # In this case, we just use the whole audio. 
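- # Stacking four identical copies keeps mel_fusion at the same (4, T, 64) shape the true fusion branch produces, so downstream code sees a consistent input; longer=False records that no random chunking was applied.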
- mel_fusion = torch.stack([mel, mel, mel, mel], dim=0) - sample["mel_fusion"] = mel_fusion - longer = torch.tensor([False]) - else: - ranges = np.array_split( - list(range(0, total_frames - chunk_frames + 1)), 3 - ) - # print('total_frames-chunk_frames:', total_frames-chunk_frames, - # 'len(audio_data):', len(audio_data), - # 'chunk_frames:', chunk_frames, - # 'total_frames:', total_frames) - if len(ranges[1]) == 0: - # if the audio is too short, we just use the first chunk - ranges[1] = [0] - if len(ranges[2]) == 0: - # if the audio is too short, we just use the first chunk - ranges[2] = [0] - # randomly choose index for each part - idx_front = np.random.choice(ranges[0]) - idx_middle = np.random.choice(ranges[1]) - idx_back = np.random.choice(ranges[2]) - # select mel - mel_chunk_front = mel[idx_front : idx_front + chunk_frames, :] - mel_chunk_middle = mel[idx_middle : idx_middle + chunk_frames, :] - mel_chunk_back = mel[idx_back : idx_back + chunk_frames, :] - - # shrink the mel - mel_shrink = torchvision.transforms.Resize(size=[chunk_frames, 64])( - mel[None] - )[0] - # logging.info(f"mel_shrink.shape: {mel_shrink.shape}") - - # stack - mel_fusion = torch.stack( - [mel_chunk_front, mel_chunk_middle, mel_chunk_back, mel_shrink], - dim=0, - ) - sample["mel_fusion"] = mel_fusion - longer = torch.tensor([True]) - else: - raise NotImplementedError( - f"data_truncating {data_truncating} not implemented" - ) - # random crop to max_len (for compatibility) - overflow = len(audio_data) - max_len - idx = np.random.randint(0, overflow + 1) - audio_data = audio_data[idx : idx + max_len] - - else: # padding if too short - if len(audio_data) < max_len: # do nothing if equal - if data_filling == "repeatpad": - n_repeat = int(max_len / len(audio_data)) - audio_data = audio_data.repeat(n_repeat) - # audio_data = audio_data.unsqueeze(0).unsqueeze(0).unsqueeze(0) - # audio_data = F.interpolate(audio_data,size=max_len,mode="bicubic")[0,0,0] - audio_data = F.pad( - audio_data, - (0, max_len - len(audio_data)), - mode="constant", - value=0, - ) - elif data_filling == "pad": - audio_data = F.pad( - audio_data, - (0, max_len - len(audio_data)), - mode="constant", - value=0, - ) - elif data_filling == "repeat": - n_repeat = int(max_len / len(audio_data)) - audio_data = audio_data.repeat(n_repeat + 1)[:max_len] - else: - raise NotImplementedError( - f"data_filling {data_filling} not implemented" - ) - if data_truncating == "fusion": - mel = get_mel(audio_data, audio_cfg) - mel_fusion = torch.stack([mel, mel, mel, mel], dim=0) - sample["mel_fusion"] = mel_fusion - longer = torch.tensor([False]) - - sample["longer"] = longer - sample["waveform"] = audio_data - - return sample - - -def preprocess( - sample, - audio_ext, - text_ext, - max_len, - audio_cfg, - class_index_dict=None, - data_filling="pad", - data_truncating="rand_trunc", - text_augment_selection=None, -): - """ - Preprocess a single sample for wdsdataloader. 
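- Decodes the flac bytes with soundfile, converts to float32, builds the audio features via get_audio_features, picks a caption (optionally from the T5-augmented variants chosen by text_augment_selection), tokenizes it, and, when class_index_dict is given, builds a multi-hot class_label from the sample's "tag" list.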
- """ - audio_data, orig_sr = sf.read(io.BytesIO(sample[audio_ext])) - audio_data = int16_to_float32(float32_to_int16(audio_data)) - audio_data = torch.tensor(audio_data).float() - - # TODO: (yusong) to be include in the future - # # if torchaudio not installed, use soundfile to load audio - # if torchaudio is None: - # audio_data, orig_sr = sf.read(io.BytesIO(sample[audio_ext])) - # audio_data = torch.tensor(audio_data).float() - # else: - # # https://github.com/webdataset/webdataset/blob/main/webdataset/autodecode.py - # with tempfile.TemporaryDirectory() as dirname: - # os.makedirs(dirname, exist_ok=True) - # fname = os.path.join(dirname, f"file.flac") - # with open(fname, "wb") as stream: - # stream.write(sample[audio_ext]) - # audio_data, orig_sr = torchaudio.load(fname) - # audio_data = audio_data[0, :].float() - - sample = get_audio_features( - sample, audio_data, max_len, data_truncating, data_filling, audio_cfg - ) - del sample[audio_ext] - - try: - json_dict_raw = json.loads(sample[text_ext].decode("utf-8")) - except: - print("sample[__url__]:", sample["__url__"]) - - # For selecting augmented text from dataset - if text_augment_selection is None or text_augment_selection == "none": - texts = json_dict_raw["text"] - elif text_augment_selection == "all": - if "text_augment_all" in json_dict_raw.keys(): - texts = json_dict_raw["text_augment_all"] - else: - texts = json_dict_raw["text"] - elif text_augment_selection == "augment_only": - if "text_augment_all" in json_dict_raw.keys(): - if json_dict_raw["text_augment_t5"] is None: - texts = json_dict_raw["text"] - else: - texts = json_dict_raw["text_augment_t5"] - else: - texts = json_dict_raw["text"] - else: - raise NotImplementedError( - f"text_augment_selection {text_augment_selection} not implemented" - ) - sample["full_text"] = texts - - if isinstance(texts, list) and isinstance(texts[0], str) and len(texts) > 1: - texts = random.choice(texts) - sample["raw_text"] = texts - sample["text"] = tokenizer(texts) # text shape: [num_token] - if class_index_dict is not None: - # https://stackoverflow.com/questions/48004243/how-to-share-large-read-only-dictionary-list-across-processes-in-multiprocessing - # https://stackoverflow.com/questions/45693949/storing-strings-in-a-multiprocessing-sharedctypes-array - # key, val = class_index_dict - # key = key[:].split('\n') - # _dict = {k: v for k, v in zip(key, val)} - sample["class_label"] = np.zeros(len(class_index_dict.keys())) - for x in json_dict_raw["tag"]: - sample["class_label"][class_index_dict[x]] = 1 - sample["class_label"] = torch.tensor(sample["class_label"]).float() - del sample[text_ext] - sample["audio_name"] = sample["__key__"].split("/")[-1] + "." + audio_ext - sample["text_name"] = sample["__key__"].split("/")[-1] + "." + text_ext - sample["audio_orig_sr"] = orig_sr - return sample - - -def collate_fn(batch): - """ - Collate function for wdsdataloader. - batch: a list of dict, each dict is a sample - """ - # concatenate values in each dictionary. if it is a tensor, concatenate. if it is a list, extend. 
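- # For example, per-sample "waveform" tensors of shape (T,) become a (batch, T) tensor, tokenizer outputs (dicts) are vstacked per sub-key, numpy arrays are stacked and converted to tensors, and plain strings such as "raw_text" are gathered into lists.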
- batch_dict = {} - for k in batch[0].keys(): - if isinstance(batch[0][k], dict): # dealwith bert tokenizer output - batch_dict[k] = {} - for kk in batch[0][k].keys(): - tmp = [] - for i in range(len(batch)): - tmp.append(batch[i][k][kk]) - batch_dict[k][kk] = torch.vstack(tmp) - elif isinstance(batch[0][k], torch.Tensor): - batch_dict[k] = torch.stack([sample[k] for sample in batch]) - elif isinstance(batch[0][k], np.ndarray): - batch_dict[k] = torch.tensor(np.stack([sample[k] for sample in batch])) - else: - batch_dict[k] = [sample[k] for sample in batch] - return batch_dict - - -def get_wds_dataset( - args, - model_cfg, - is_train, - audio_ext="flac", - text_ext="json", - max_len=480000, - proportion=1.0, - sizefilepath_=None, - is_local=None, -): - """ - Get a dataset for wdsdataloader. - """ - if is_local is None and (not args.remotedata is None): - is_local = not args.remotedata - - input_shards = args.train_data if is_train else args.val_data - assert input_shards is not None - - if not sizefilepath_ is None: - sizefilepath = sizefilepath_ - else: - sizefilepath = os.path.join(os.path.dirname(input_shards[0]), "sizes.json") - - if proportion != 1.0: - num_samples, num_shards, input_shards, _ = sample_prop( - sizefilepath, input_shards, proportion, is_local=is_local - ) - else: - num_samples, num_shards = get_dataset_size( - input_shards, sizefilepath_=sizefilepath_, is_local=is_local - ) - - if not num_samples: - if is_train: - num_samples = args.train_num_samples - if not num_samples: - raise RuntimeError( - "Currently, number of dataset samples must be specified for training dataset. " - "Please specify via `--train-num-samples` if no dataset length info present." - ) - else: - num_samples = ( - args.val_num_samples or 0 - ) # eval will just exhaust the iterator if not specified - - pipeline = [wds.SimpleShardList(input_shards)] - # at this point we have an iterator over all the shards - # TODO: (yusong): add a if statement of distributed. If not, we don't need to split_by_node - if is_train or args.parallel_eval: - pipeline.extend( - [ - wds.detshuffle( - bufsize=_SHARD_SHUFFLE_SIZE, - initial=_SHARD_SHUFFLE_INITIAL, - seed=args.seed, - ), - wds.split_by_node, - wds.split_by_worker, - # at this point, we have an iterator over the shards assigned to each worker at each node - wds.tarfile_to_samples(handler=log_and_continue), - wds.shuffle( - bufsize=_SAMPLE_SHUFFLE_SIZE, - initial=_SAMPLE_SHUFFLE_INITIAL, - rng=random.Random(args.seed), - ), - # wds.repeatedly, # FIXME determine if this is beneficial - ] - ) - else: - pipeline.extend( - [ - wds.split_by_worker, - # at this point, we have an iterator over the shards assigned to each worker - wds.tarfile_to_samples(handler=log_and_continue), - ] - ) - pipeline.append( - wds.map( - partial( - preprocess, - audio_ext=audio_ext, - text_ext=text_ext, - max_len=max_len, - audio_cfg=model_cfg["audio_cfg"], - class_index_dict=copy.deepcopy(args.class_index_dict), - data_filling=args.data_filling, - data_truncating=args.data_truncating, - text_augment_selection=args.text_augment_selection, - ) - ), - ) - - pipeline.append( - wds.batched( - args.batch_size, - partial=not (is_train or args.parallel_eval), - collation_fn=collate_fn, - ) - ) - - dataset = wds.DataPipeline(*pipeline) - if is_train or args.parallel_eval: - # (yusong): Currently parallel evaluation will be not precise as we are repeat the last few samples. - # (yusong): See comments below. 
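- # e.g. with num_samples=10000, batch_size=32, world_size=8 and 4 workers: global_batch_size=256, num_batches=ceil(10000/256)=40, num_worker_batches=10, so num_samples is rounded up to 40*256=10240 and roughly 240 samples repeat per epoch.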
- # roll over and repeat a few samples to get same number of full batches on each node - global_batch_size = args.batch_size * args.world_size - num_batches = math.ceil(num_samples / global_batch_size) - num_workers = max(1, args.workers) - num_worker_batches = math.ceil( - num_batches / num_workers - ) # per dataloader worker - num_batches = num_worker_batches * num_workers - num_samples = num_batches * global_batch_size - dataset = dataset.with_epoch( - num_worker_batches - ) # each worker is iterating over this - else: - # last batches are partial, eval is done on single (master) node - num_batches = math.ceil(num_samples / args.batch_size) - - kwargs = {} - if args.horovod: # multi-node training on summit - kwargs["multiprocessing_context"] = "forkserver" - - dataloader = wds.WebLoader( - dataset, batch_size=None, shuffle=False, num_workers=args.workers, **kwargs - ) - - # FIXME not clear which approach is better, with_epoch before vs after dataloader? - # hoping to resolve via https://github.com/webdataset/webdataset/issues/169 - # if is_train: - # # roll over and repeat a few samples to get same number of full batches on each node - # global_batch_size = args.batch_size * args.world_size - # num_batches = math.ceil(num_samples / global_batch_size) - # num_workers = max(1, args.workers) - # num_batches = math.ceil(num_batches / num_workers) * num_workers - # num_samples = num_batches * global_batch_size - # dataloader = dataloader.with_epoch(num_batches) - # else: - # # last batches are partial, eval is done on single (master) node - # num_batches = math.ceil(num_samples / args.batch_size) - - # add meta-data to dataloader instance for convenience - dataloader.num_batches = num_batches - dataloader.num_samples = num_samples - - return DataInfo(dataloader, None) - - -def wds_batch_list2dict( - batch, - keys=[ - "__url__", - "__key__", - "waveform", - "text", - "raw_text", - "audio_name", - "text_name", - "audio_orig_sr", - ], -): - """ - Return a dictionary of the batch, with keys as the names of the fields. 
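- The i-th positional element of `batch` is stored under keys[i], so `batch` must be ordered to match `keys`.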
- """ - assert len(keys) == len( - batch - ), "batch must have same number of keys as keys argument" - return {keys[i]: batch[i] for i in range(len(batch))} - - -def get_csv_dataset(args, preprocess_fn, is_train): - input_filename = args.train_data if is_train else args.val_data - assert input_filename - dataset = CsvDataset( - input_filename, - preprocess_fn, - img_key=args.csv_img_key, - caption_key=args.csv_caption_key, - sep=args.csv_separator, - ) - num_samples = len(dataset) - sampler = DistributedSampler(dataset) if args.distributed and is_train else None - shuffle = is_train and sampler is None - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - shuffle=shuffle, - num_workers=args.workers, - pin_memory=True, - sampler=sampler, - drop_last=is_train, - ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - - -def get_toy_dataset(args, model_cfg, is_train): - index_path = args.train_data if is_train else args.val_data - ipc_path = args.train_ipc if is_train else args.val_ipc - assert index_path and ipc_path - eval_mode = not is_train - dataset = ToyDataset(index_path, ipc_path, model_cfg, eval_mode=eval_mode) - - num_samples = len(dataset) - sampler = ( - DistributedSampler(dataset, shuffle=False) - if args.distributed and is_train - else None - ) - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - shuffle=False, - num_workers=args.workers, - sampler=sampler, - drop_last=is_train, - ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - - -def get_dataset_fn(data_path, dataset_type): - if dataset_type == "webdataset": - return get_wds_dataset - elif dataset_type == "csv": - return get_csv_dataset - elif dataset_type == "auto": - ext = data_path.split(".")[-1] - if ext in ["csv", "tsv"]: - return get_csv_dataset - elif ext in ["tar"]: - return get_wds_dataset - else: - raise ValueError( - f"Tried to figure out dataset type, but failed for extention {ext}." 
- ) - elif dataset_type == "toy": - return get_toy_dataset - else: - raise ValueError(f"Unsupported dataset type: {dataset_type}") - - -def get_data(args, model_cfg): - data = {} - - args.class_index_dict = load_class_label(args.class_label_path) - - if args.datasetinfos is None: - args.datasetinfos = ["train", "unbalanced_train", "balanced_train"] - if args.dataset_type == "webdataset": - args.train_data = get_tar_path_from_dataset_name( - args.datasetnames, - args.datasetinfos, - islocal=not args.remotedata, - proportion=args.dataset_proportion, - dataset_path=args.datasetpath, - full_dataset=args.full_train_dataset, - ) - - if args.full_train_dataset is None: - args.full_train_dataset = [] - if args.exclude_eval_dataset is None: - args.exclude_eval_dataset = [] - excluded_eval_datasets = args.full_train_dataset + args.exclude_eval_dataset - - val_dataset_names = ( - [n for n in args.datasetnames if n not in excluded_eval_datasets] - if excluded_eval_datasets - else args.datasetnames - ) - args.val_dataset_names = val_dataset_names - args.val_data = get_tar_path_from_dataset_name( - val_dataset_names, - ["valid", "test", "eval"], - islocal=not args.remotedata, - proportion=1, - dataset_path=args.datasetpath, - full_dataset=None, - ) - - if args.train_data: - data["train"] = get_dataset_fn(args.train_data, args.dataset_type)( - args, model_cfg, is_train=True - ) - - if args.val_data: - data["val"] = get_dataset_fn(args.val_data, args.dataset_type)( - args, model_cfg, is_train=False - ) - - return data diff --git a/spaces/badmonk/model/app.py b/spaces/badmonk/model/app.py deleted file mode 100644 index e227346cd69b333a86bc351311dfaed0a63d39dd..0000000000000000000000000000000000000000 --- a/spaces/badmonk/model/app.py +++ /dev/null @@ -1,141 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "stablediffusionapi/cyberrealistic-v32", - "stablediffusionapi/majicmixrealistic", - "stablediffusionapi/reliberate", - "stablediffusionapi/realistic-vision-v40", - "stablediffusionapi/uber-realistic-merge", - "stablediffusionapi/epicrealism", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with 
gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/bdp-AI/03-ImageSearchSimilar/app.py b/spaces/bdp-AI/03-ImageSearchSimilar/app.py deleted file mode 100644 index be5848cc9752fe7732264cb90aaa408db8e310be..0000000000000000000000000000000000000000 --- a/spaces/bdp-AI/03-ImageSearchSimilar/app.py +++ /dev/null @@ -1,185 +0,0 @@ -from html import escape -import re -import streamlit as st -import pandas as pd, numpy as np -from transformers import CLIPProcessor, CLIPModel -from st_clickable_images import clickable_images - -@st.cache( - show_spinner=False, - hash_funcs={ - CLIPModel: lambda _: None, - CLIPProcessor: lambda _: None, - dict: lambda _: None, - }, -) -def load(): - model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14") - processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") - df = {0: pd.read_csv("data.csv"), 1: pd.read_csv("data2.csv")} - embeddings = {0: np.load("embeddings.npy"), 1: np.load("embeddings2.npy")} - for k in [0, 1]: - embeddings[k] = embeddings[k] / np.linalg.norm( - embeddings[k], axis=1, keepdims=True - ) - return model, processor, df, embeddings - - -model, processor, df, embeddings = load() -source = {0: "\nSource: Unsplash", 1: "\nSource: The Movie Database (TMDB)"} - - -def compute_text_embeddings(list_of_strings): - inputs = processor(text=list_of_strings, return_tensors="pt", padding=True) - result = model.get_text_features(**inputs).detach().numpy() - return result / np.linalg.norm(result, axis=1, keepdims=True) - - -def image_search(query, corpus, n_results=24): - positive_embeddings = None - - def concatenate_embeddings(e1, e2): - if e1 is None: - return e2 - else: - return np.concatenate((e1, e2), axis=0) - - 
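- # Query grammar: the text before "EXCLUDING" holds positive queries separated by ";".
- # A positive query may start with "[Unsplash:idx]" or "[Movies:idx]" to reuse a stored image embedding (plus optional trailing text); otherwise it is encoded with CLIP.
- # Anything after "EXCLUDING" is treated as negative queries whose similarity is subtracted from the score.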
splitted_query = query.split("EXCLUDING ") - dot_product = 0 - k = 0 if corpus == "Unsplash" else 1 - if len(splitted_query[0]) > 0: - positive_queries = splitted_query[0].split(";") - for positive_query in positive_queries: - match = re.match(r"\[(Movies|Unsplash):(\d{1,5})\](.*)", positive_query) - if match: - corpus2, idx, remainder = match.groups() - idx, remainder = int(idx), remainder.strip() - k2 = 0 if corpus2 == "Unsplash" else 1 - positive_embeddings = concatenate_embeddings( - positive_embeddings, embeddings[k2][idx : idx + 1, :] - ) - if len(remainder) > 0: - positive_embeddings = concatenate_embeddings( - positive_embeddings, compute_text_embeddings([remainder]) - ) - else: - positive_embeddings = concatenate_embeddings( - positive_embeddings, compute_text_embeddings([positive_query]) - ) - dot_product = embeddings[k] @ positive_embeddings.T - dot_product = dot_product - np.median(dot_product, axis=0) - dot_product = dot_product / np.max(dot_product, axis=0, keepdims=True) - dot_product = np.min(dot_product, axis=1) - - if len(splitted_query) > 1: - negative_queries = (" ".join(splitted_query[1:])).split(";") - negative_embeddings = compute_text_embeddings(negative_queries) - dot_product2 = embeddings[k] @ negative_embeddings.T - dot_product2 = dot_product2 - np.median(dot_product2, axis=0) - dot_product2 = dot_product2 / np.max(dot_product2, axis=0, keepdims=True) - dot_product -= np.max(np.maximum(dot_product2, 0), axis=1) - - results = np.argsort(dot_product)[-1 : -n_results - 1 : -1] - return [ - ( - df[k].iloc[i]["path"], - df[k].iloc[i]["tooltip"] + source[k], - i, - ) - for i in results - ] - - -description = """ -# Semantic image search -**Enter your query and hit enter** -""" - -howto = """ -- Click image to find similar images -- Use "**;**" to combine multiple queries) -- Use "**EXCLUDING**", to exclude a query -""" - - -def main(): - st.markdown( - """ - """, - unsafe_allow_html=True, - ) - st.sidebar.markdown(description) - with st.sidebar.expander("Advanced use"): - st.markdown(howto) - - - st.sidebar.markdown(f"Unsplash has categories that match: backgrounds, photos, nature, iphone, etc") - st.sidebar.markdown(f"Unsplash images contain animals, apps, events, feelings, food, travel, nature, people, religion, sports, things, stock") - st.sidebar.markdown(f"Unsplash things include flag, tree, clock, money, tattoo, arrow, book, car, fireworks, ghost, health, kiss, dance, balloon, crown, eye, house, music, airplane, lighthouse, typewriter, toys") - st.sidebar.markdown(f"unsplash feelings include funny, heart, love, cool, congratulations, love, scary, cute, friendship, inspirational, hug, sad, cursed, beautiful, crazy, respect, transformation, peaceful, happy") - st.sidebar.markdown(f"unsplash people contain baby, life, women, family, girls, pregnancy, society, old people, musician, attractive, bohemian") - st.sidebar.markdown(f"imagenet queries include: photo of, photo of many, sculpture of, rendering of, graffiti of, tattoo of, embroidered, drawing of, plastic, black and white, painting, video game, doodle, origami, sketch, etc") - - - _, c, _ = st.columns((1, 3, 1)) - if "query" in st.session_state: - query = c.text_input("", value=st.session_state["query"]) - else: - - query = c.text_input("", value="lighthouse") - corpus = st.radio("", ["Unsplash"]) - #corpus = st.radio("", ["Unsplash", "Movies"]) - if len(query) > 0: - results = image_search(query, corpus) - clicked = clickable_images( - [result[0] for result in results], - titles=[result[1] for result in 
results], - div_style={ - "display": "flex", - "justify-content": "center", - "flex-wrap": "wrap", - }, - img_style={"margin": "2px", "height": "200px"}, - ) - if clicked >= 0: - change_query = False - if "last_clicked" not in st.session_state: - change_query = True - else: - if clicked != st.session_state["last_clicked"]: - change_query = True - if change_query: - st.session_state["query"] = f"[{corpus}:{results[clicked][2]}]" - st.experimental_rerun() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/benjaminzuckermanbasisscottsdale/Cardiovascular_Disease_Prediction_Service/README.md b/spaces/benjaminzuckermanbasisscottsdale/Cardiovascular_Disease_Prediction_Service/README.md deleted file mode 100644 index b816d386c67b51b85a3263f3735d50ba3070f31b..0000000000000000000000000000000000000000 --- a/spaces/benjaminzuckermanbasisscottsdale/Cardiovascular_Disease_Prediction_Service/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cardiovascular Disease Prediction Service -emoji: 🏆 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Adobe Indesign Cs5.5 Amtlib.dll How to Backup and Restore Your DLL Files Safely.md b/spaces/bioriAsaeru/text-to-voice/Adobe Indesign Cs5.5 Amtlib.dll How to Backup and Restore Your DLL Files Safely.md deleted file mode 100644 index 9b1fbf1615f5eba204b1ad4d914cde89295cdc8d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Adobe Indesign Cs5.5 Amtlib.dll How to Backup and Restore Your DLL Files Safely.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe Indesign Cs5.5 Amtlib.dll


    DOWNLOAD »»» https://urloso.com/2uyRm2



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/CS 1.6 PsychoTraining 39s Edition V3.0 (v43 RevEmu) Maisanta Rivales Sim.md b/spaces/bioriAsaeru/text-to-voice/CS 1.6 PsychoTraining 39s Edition V3.0 (v43 RevEmu) Maisanta Rivales Sim.md deleted file mode 100644 index 8c071bb4ca6d2285707a406e46d842bf37abe4da..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CS 1.6 PsychoTraining 39s Edition V3.0 (v43 RevEmu) Maisanta Rivales Sim.md +++ /dev/null @@ -1,6 +0,0 @@ -

    CS 1.6 PsychoTraining 39;s Edition V3.0 (v43, RevEmu) maisanta rivales sim


    DOWNLOAD –––––>>> https://urloso.com/2uyQjw



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_swin_l_in21k_50ep.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_swin_l_in21k_50ep.py deleted file mode 100644 index 9e22e3b28777003776774f61273c04bbb2abea1e..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_swin_l_in21k_50ep.py +++ /dev/null @@ -1,12 +0,0 @@ -from .cascade_mask_rcnn_swin_b_in21k_50ep import ( - dataloader, - lr_multiplier, - model, - train, - optimizer, -) - -model.backbone.bottom_up.embed_dim = 192 -model.backbone.bottom_up.num_heads = [6, 12, 24, 48] - -train.init_checkpoint = "detectron2://ImageNetPretrained/swin/swin_large_patch4_window7_224_22k.pth" diff --git a/spaces/cakiki/facets-dive/README.md b/spaces/cakiki/facets-dive/README.md deleted file mode 100644 index 6d63680af29bfc7b77459760e5bd7ea086609160..0000000000000000000000000000000000000000 --- a/spaces/cakiki/facets-dive/README.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: Facets Dive -emoji: 📊 -colorFrom: gray -colorTo: red -sdk: static ---- diff --git a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/cpp/cppipc/waiter.h b/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/cpp/cppipc/waiter.h deleted file mode 100644 index ee45fe3517be95ac1688a3e3540189edeb0d860c..0000000000000000000000000000000000000000 --- a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/cpp/cppipc/waiter.h +++ /dev/null @@ -1,83 +0,0 @@ -#pragma once - -#include -#include -#include -#include - -#include "libipc/def.h" -#include "libipc/mutex.h" -#include "libipc/condition.h" -#include "libipc/platform/detail.h" - -namespace ipc { -namespace detail { - -class waiter { - ipc::sync::condition cond_; - ipc::sync::mutex lock_; - std::atomic quit_ {false}; - -public: - static void init(); - - waiter() = default; - waiter(char const *name) { - open(name); - } - - ~waiter() { - close(); - } - - bool valid() const noexcept { - return cond_.valid() && lock_.valid(); - } - - bool open(char const *name) noexcept { - quit_.store(false, std::memory_order_relaxed); - if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) { - return false; - } - if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) { - cond_.close(); - return false; - } - return valid(); - } - - void close() noexcept { - cond_.close(); - lock_.close(); - } - - template - bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept { - IPC_UNUSED_ std::lock_guard guard {lock_}; - while ([this, &pred] { - return !quit_.load(std::memory_order_relaxed) - && std::forward(pred)(); - }()) { - if (!cond_.wait(lock_, tm)) return false; - } - return true; - } - - bool notify() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.notify(lock_); - } - - bool broadcast() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.broadcast(lock_); - } - - bool quit_waiting() { - quit_.store(true, std::memory_order_release); - return broadcast(); - } -}; - -} // namespace detail -} // namespace ipc diff --git a/spaces/caoyiming/vits-uma-genshin-honkai/modules.py b/spaces/caoyiming/vits-uma-genshin-honkai/modules.py deleted file mode 100644 index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000 --- a/spaces/caoyiming/vits-uma-genshin-honkai/modules.py +++ /dev/null @@ -1,388 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import 
functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - 
self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, 
channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - 
self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/captainChan/CaptainChan/modules/__init__.py b/spaces/captainChan/CaptainChan/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/carlosaguayo/cats_vs_dogs/app.py b/spaces/carlosaguayo/cats_vs_dogs/app.py deleted file mode 100644 index 0975ade9b4f253c9645d2c86f7481ec2e6e9643b..0000000000000000000000000000000000000000 --- a/spaces/carlosaguayo/cats_vs_dogs/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as gr -import cv2 -from huggingface_hub import from_pretrained_keras -from skimage import io - -ROWS, COLS = 150, 150 - -model = from_pretrained_keras("carlosaguayo/cats_vs_dogs") - -def process_image(img): - img = cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) - img = img / 255.0 - img = img.reshape(1,ROWS,COLS,3) - - prediction = model.predict(img)[0][0] - if prediction >= 0.5: - message = 'I am {:.2%} sure this is a Cat'.format(prediction) - else: - message = 'I am {:.2%} sure this is a Dog'.format(1-prediction) - return message - -title = "Interactive demo: Classify cat vs dog" -description = "Simple Cat vs Dog classification" -article = "" -# examples =[["image_0.png"], ["image_1.png"], ["image_2.png"]] - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(), - outputs=gr.outputs.Textbox(), - title=title, - description=description) - # article=article, - # examples=examples) -iface.launch() \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco.py deleted file mode 100644 index 703c4385c7ddc7eb0759c98d102ab2384d6a9e3e..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco.py +++ /dev/null @@ -1,48 +0,0 @@ -from omegaconf import OmegaConf - -import detectron2.data.transforms as T -from detectron2.config import LazyCall as L -from detectron2.data import ( - DatasetMapper, - build_detection_test_loader, - build_detection_train_loader, - get_detection_dataset_dicts, -) -from detectron2.evaluation import COCOEvaluator - -dataloader = OmegaConf.create() - -dataloader.train = L(build_detection_train_loader)( - dataset=L(get_detection_dataset_dicts)(names="coco_2017_train"), - mapper=L(DatasetMapper)( - is_train=True, - augmentations=[ - 
L(T.ResizeShortestEdge)( - short_edge_length=(640, 672, 704, 736, 768, 800), - sample_style="choice", - max_size=1333, - ), - L(T.RandomFlip)(horizontal=True), - ], - image_format="BGR", - use_instance_mask=True, - ), - total_batch_size=16, - num_workers=4, -) - -dataloader.test = L(build_detection_test_loader)( - dataset=L(get_detection_dataset_dicts)(names="coco_2017_val", filter_empty=False), - mapper=L(DatasetMapper)( - is_train=False, - augmentations=[ - L(T.ResizeShortestEdge)(short_edge_length=800, max_size=1333), - ], - image_format="${...train.mapper.image_format}", - ), - num_workers=4, -) - -dataloader.evaluator = L(COCOEvaluator)( - dataset_name="${..test.dataset.names}", -) diff --git a/spaces/ccds/vits_onnx/export/vits/models.py b/spaces/ccds/vits_onnx/export/vits/models.py deleted file mode 100644 index 7757ad7afecddf87b29e120ba3784e9bed42d713..0000000000000000000000000000000000000000 --- a/spaces/ccds/vits_onnx/export/vits/models.py +++ /dev/null @@ -1,672 +0,0 @@ -import math - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -import monotonic_align - -import commons -import modules -import attentions -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, - in_channels, - filter_channels, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, - kernel_size, - n_layers=3, - p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, - kernel_size, - n_layers=3, - p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, - x, - x_mask, - w=None, - g=None, - reverse=False, - noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to( - device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask 
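-            # Variational dequantization of the integer durations: u in (0, 1) comes from the
-            # posterior flows, so z0 = w - u below has continuous support for the flow likelihood.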
- z0 = (w - u) * x_mask - logdet_tot_q += torch.sum( - (F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum( - -0.5 * (math.log(2 * math.pi) + - (e_q**2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + - (z**2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to( - device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, - in_channels, - filter_channels, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, - filter_channels, - kernel_size, - padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, - filter_channels, - kernel_size, - padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, n_vocab, out_channels, hidden_channels, filter_channels, - n_heads, n_layers, kernel_size, p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder(hidden_channels, filter_channels, - n_heads, n_layers, kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), - 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = 
n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), - 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, - upsample_initial_channel, - 7, - 1, - padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d(upsample_initial_channel // (2**i), - upsample_initial_channel // (2**(i + 1)), - k, - u, - padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2**(i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, - 
period, - kernel_size=5, - stride=3, - use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm is False else spectral_norm - self.convs = nn.ModuleList([ - norm_f( - Conv2d(1, - 32, (kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(32, - 128, (kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(128, - 512, (kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(512, - 1024, (kernel_size, 1), (stride, 1), - padding=(get_padding(kernel_size, 1), 0))), - norm_f( - Conv2d(1024, - 1024, (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm is False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) - for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - 
self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - if self.n_speakers != 0: - message = "gin_channels must be none zero for multiple speakers" - assert gin_channels != 0, message - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, inter_channels, hidden_channels, - filter_channels, n_heads, n_layers, - kernel_size, p_dropout) - self.dec = Generator(inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, - hidden_channels, - 5, - 1, - 4, - gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, - 192, - 3, - 0.5, - 4, - gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, - 256, - 3, - 0.5, - gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], - keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul( - -0.5 * (z_p**2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul( - z_p.transpose(1, 2), - (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p**2) * s_p_sq_r, [1], - keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze( - y_mask, -1) - attn = monotonic_align.maximum_path( - neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum( - (logw - logw_)**2, [1, 2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, - 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), - logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, - logs_p, m_q, - logs_q) - - def infer(self, - x, - x_lengths, - sid=None, - noise_scale=1, - length_scale=1, - noise_scale_w=1., - max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if 
self.use_sdp: - logw = self.dp(x, - x_mask, - g=g, - reverse=True, - noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), - 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose( - 1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose( - 1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def export_forward(self, x, x_lengths, scales, sid): - # shape of scales: Bx3, make triton happy - audio, *_ = self.infer(x, - x_lengths, - sid, - noise_scale=scales[0][0], - length_scale=scales[0][1], - noise_scale_w=scales[0][2]) - return audio - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) diff --git a/spaces/chansung/zero2story/modules/llms/chatgpt_service.py b/spaces/chansung/zero2story/modules/llms/chatgpt_service.py deleted file mode 100644 index 91c89e48a87d3796a2325c92e7b24d8e9cf3ab7b..0000000000000000000000000000000000000000 --- a/spaces/chansung/zero2story/modules/llms/chatgpt_service.py +++ /dev/null @@ -1,110 +0,0 @@ -import os -import threading -import toml -from pathlib import Path - -from pingpong import PingPong -from pingpong.pingpong import PPManager -from pingpong.pingpong import PromptFmt -from pingpong.pingpong import UIFmt -from pingpong.gradio import GradioChatUIFmt - -from modules.llms import ( - LLMFactory, - PromptFmt, PromptManager, PPManager, UIPPManager, LLMService -) - -class ChatGPTFactory(LLMFactory): - def __init__(self): - pass - - def create_prompt_format(self): - return ChatGPTChatPromptFmt() - - def create_prompt_manager(self, prompts_path: str=None): - return ChatGPTPromptManager((prompts_path or Path('.') / 'prompts' / 'chatgpt_prompts.toml')) - - def create_pp_manager(self): - return ChatGPTChatPPManager() - - def create_ui_pp_manager(self): - return GradioChatGPTChatPPManager() - - def create_llm_service(self): - return ChatGPTService() - - -class ChatGPTChatPromptFmt(PromptFmt): - @classmethod - def ctx(cls, context): - pass - - @classmethod - def prompt(cls, pingpong, truncate_size): - pass - - -class ChatGPTPromptManager(PromptManager): - _instance = None - _lock = threading.Lock() - _prompts = None - - def __new__(cls, prompts_path): - if cls._instance is None: - with cls._lock: - if not cls._instance: - cls._instance = super(ChatGPTPromptManager, cls).__new__(cls) - cls._instance.load_prompts(prompts_path) - return cls._instance - - def load_prompts(self, prompts_path): - self._prompts_path = prompts_path - self.reload_prompts() - - def reload_prompts(self): - assert self.prompts_path, "Prompt path is 
missing." - self._prompts = toml.load(self.prompts_path) - - @property - def prompts_path(self): - return self._prompts_path - - @prompts_path.setter - def prompts_path(self, prompts_path): - self._prompts_path = prompts_path - self.reload_prompts() - - @property - def prompts(self): - if self._prompts is None: - self.load_prompts() - return self._prompts - - -class ChatGPTChatPPManager(PPManager): - def build_prompts(self, from_idx: int=0, to_idx: int=-1, fmt: PromptFmt=None, truncate_size: int=None): - pass - - -class GradioChatGPTChatPPManager(UIPPManager, ChatGPTChatPPManager): - def build_uis(self, from_idx: int=0, to_idx: int=-1, fmt: UIFmt=GradioChatUIFmt): - pass - -class ChatGPTService(LLMService): - def make_params(self, mode="chat", - temperature=None, - candidate_count=None, - top_k=None, - top_p=None, - max_output_tokens=None, - use_filter=True): - pass - - async def gen_text( - self, - prompt, - mode="chat", #chat or text - parameters=None, - use_filter=True - ): - pass \ No newline at end of file diff --git a/spaces/charles0519/ChuanhuChatGPT/README.md b/spaces/charles0519/ChuanhuChatGPT/README.md deleted file mode 100644 index feb19352c11d33b74cd0462f8699d4967aa9d53b..0000000000000000000000000000000000000000 --- a/spaces/charles0519/ChuanhuChatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐠 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: JohnSmith9982/ChuanhuChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/TensorRT/python/README.md b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/TensorRT/python/README.md deleted file mode 100644 index 236eeb1265344b68e24616293c96fffee9a17262..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/TensorRT/python/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# YOLOX-TensorRT in Python - -This tutorial includes a Python demo for TensorRT. - -## Install TensorRT Toolkit - -Please follow the [TensorRT Installation Guide](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html) and [torch2trt gitrepo](https://github.com/NVIDIA-AI-IOT/torch2trt) to install TensorRT and torch2trt. - -## Convert model - -YOLOX models can be easily conveted to TensorRT models using torch2trt - - If you want to convert our model, use the flag -n to specify a model name: - ```shell - python tools/trt.py -n -c - ``` - For example: - ```shell - python tools/trt.py -n yolox-s -c your_ckpt.pth - ``` - can be: yolox-nano, yolox-tiny. yolox-s, yolox-m, yolox-l, yolox-x. - - If you want to convert your customized model, use the flag -f to specify you exp file: - ```shell - python tools/trt.py -f -c - ``` - For example: - ```shell - python tools/trt.py -f /path/to/your/yolox/exps/yolox_s.py -c your_ckpt.pth - ``` - *yolox_s.py* can be any exp file modified by you. - -The converted model and the serialized engine file (for C++ demo) will be saved on your experiment output dir. - -## Demo - -The TensorRT python demo is merged on our pytorch demo file, so you can run the pytorch demo command with ```--trt```. 
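If you only want to sanity-check the converted model from Python first, a minimal sketch is shown below (it assumes torch2trt is installed and that the conversion step saved its weights as `model_trt.pth` under your experiment output dir; the exact path and input resolution are assumptions, adjust them to your setup). The regular demo commands follow right after.

```python
# Minimal sanity check for a torch2trt-converted YOLOX model.
# The checkpoint path and input shape are assumptions; adjust them to your experiment.
import torch
from torch2trt import TRTModule

model_trt = TRTModule()
model_trt.load_state_dict(torch.load("YOLOX_outputs/yolox_s/model_trt.pth"))

dummy = torch.ones(1, 3, 640, 640).cuda()  # one 640x640 image in NCHW layout
with torch.no_grad():
    out = model_trt(dummy)
print(out.shape if torch.is_tensor(out) else [o.shape for o in out])
```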
- -```shell -python tools/demo.py image -n yolox-s --trt --save_result -``` -or -```shell -python tools/demo.py image -f exps/default/yolox_s.py --trt --save_result -``` - diff --git a/spaces/chendl/compositional_test/transformers/README_zh-hans.md b/spaces/chendl/compositional_test/transformers/README_zh-hans.md deleted file mode 100644 index 59ef12bbf2c11f13ab6ef38a9a83062611de1065..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/README_zh-hans.md +++ /dev/null @@ -1,471 +0,0 @@ - - - - -

-<!-- transformers logo and badges: Build, GitHub, Documentation, GitHub release, Contributor Covenant, DOI -->
-
-English | 简体中文 | 繁體中文 | 한국어 | Español | 日本語 | हिन्दी
-
-为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理

    - -🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨是让最先进的 NLP 技术人人易用。 - -🤗 Transformers 提供了便于快速下载和使用的API,让你可以把预训练模型用在给定文本、在你的数据集上微调然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。 - -🤗 Transformers 支持三个最热门的深度学习库: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型然后用另一个加载和推理。 - -## 在线演示 - -你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。 - -这里是一些例子: -- [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) -- [用 Electra 做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) -- [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) -- [用 RoBERTa 做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) -- [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) -- [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) -- [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) - -**[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。 - -## 如果你在寻找由抱抱脸团队提供的定制化支持服务 - - - HuggingFace Expert Acceleration Program -
    - -## 快速上手 - -我们为快速使用模型提供了 `pipeline` (流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子: - -```python ->>> from transformers import pipeline - -# 使用情绪分析流水线 ->>> classifier = pipeline('sentiment-analysis') ->>> classifier('We are very happy to introduce pipeline to the transformers repository.') -[{'label': 'POSITIVE', 'score': 0.9996980428695679}] -``` - -第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有 99 的置信度。 - -许多的 NLP 任务都有开箱即用的预训练流水线。比如说,我们可以轻松的从给定文本中抽取问题答案: - -``` python ->>> from transformers import pipeline - -# 使用问答流水线 ->>> question_answerer = pipeline('question-answering') ->>> question_answerer({ -... 'question': 'What is the name of the repository ?', -... 'context': 'Pipeline has been included in the huggingface/transformers repository' -... }) -{'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} - -``` - -除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线API支持的任务。 - -要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例: -```python ->>> from transformers import AutoTokenizer, AutoModel - ->>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ->>> model = AutoModel.from_pretrained("bert-base-uncased") - ->>> inputs = tokenizer("Hello world!", return_tensors="pt") ->>> outputs = model(**inputs) -``` -这里是等效的 TensorFlow 代码: -```python ->>> from transformers import AutoTokenizer, TFAutoModel - ->>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ->>> model = TFAutoModel.from_pretrained("bert-base-uncased") - ->>> inputs = tokenizer("Hello world!", return_tensors="tf") ->>> outputs = model(**inputs) -``` - -词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。 - -模型本身是一个常规的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以常规方式使用。 [这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer` 训练器)API 来在一个新的数据集上快速微调。 - -## 为什么要用 transformers? - -1. 便于使用的先进模型: - - NLU 和 NLG 上表现优越 - - 对教学和实践友好且低门槛 - - 高级抽象,只需了解三个类 - - 对所有模型统一的API - -1. 更低计算开销,更少的碳排放: - - 研究人员可以分享已训练的模型而非每次从头开始训练 - - 工程师可以减少计算用时和生产环境开销 - - 数十种模型架构、两千多个预训练模型、100多种语言支持 - -1. 对于模型生命周期的每一个部分都面面俱到: - - 训练先进的模型,只需 3 行代码 - - 模型在不同深度学习框架间任意转移,随你心意 - - 为训练、评估和生产选择最适合的框架,衔接无缝 - -1. 为你的需求轻松定制专属模型和用例: - - 我们为每种模型架构提供了多个用例来复现原论文结果 - - 模型内部结构保持透明一致 - - 模型文件可单独使用,方便魔改和快速实验 - -## 什么情况下我不该用 transformers? 
- -- 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。 -- `Trainer` API 并非兼容任何模型,只为本库之模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。 -- 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/main/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。 - -## 安装 - -### 使用 pip - -这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。 - -你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。 - -首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。 - -然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。 - -当这些后端之一安装成功后, 🤗 Transformers 可依此安装: - -```bash -pip install transformers -``` - -如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。 - -### 使用 conda - -自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。 - -🤗 Transformers 可以通过 conda 依此安装: - -```shell script -conda install -c huggingface transformers -``` - -要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。 - -## 模型架构 - -🤗 Transformers 支持的[**所有的模型检查点**](https://huggingface.co/models)由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。 - -目前的检查点数量: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) - -🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)): - -1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。 -1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (来自 Google Research) 伴随论文 [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) 由 Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig 发布。 -1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (来自 BAAI) 伴随论文 [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) 由 Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell 发布。 -1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (来自 MIT) 伴随论文 [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) 由 Yuan Gong, Yu-An Chung, James Glass 发布。 -1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。 -1. 
**[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。 -1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。 -1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。 -1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。 -1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 -1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。 -1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 -1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 -1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (来自 Microsoft Research AI4Science) 伴随论文 [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) 由 Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu 发布。 -1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (来自 Google AI) 伴随论文 [Big Transfer (BiT) 由 Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby 发布。 -1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 -1. 
**[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 -1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (来自 Salesforce) 伴随论文 [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) 由 Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi 发布。 -1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (来自 Salesforce) 伴随论文 [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) 由 Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi 发布。 -1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). -1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。 -1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. -1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。 -1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。 -1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。 -1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (来自 OFA-Sys) 伴随论文 [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) 由 An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou 发布。 -1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (来自 LAION-AI) 伴随论文 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation]https://arxiv.org/abs/2211.06687) 由 Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov 发布。 -1. 
**[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。 -1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (来自 University of Göttingen) 伴随论文 [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) 由 Timo Lüddecke and Alexander Ecker 发布。 -1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。 -1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (来自 Microsoft Research Asia) 伴随论文 [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 由 Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang 发布。 -1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。 -1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。 -1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. -1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。 -1. **[CPM-Ant](https://huggingface.co/docs/transformers/main/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/). -1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。 -1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。 -1. 
**[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。 -1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 -1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 -1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。 -1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (来自 SenseTime Research) 伴随论文 [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) 由 Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai 发布。 -1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。 -1. **[DePlot](https://huggingface.co/docs/transformers/main/model_doc/deplot)** (来自 Google AI) 伴随论文 [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) 由 Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun 发布。 -1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (来自 The University of Texas at Austin) 伴随论文 [NMS Strikes Back](https://arxiv.org/abs/2212.06137) 由 Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl 发布。 -1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。 -1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。 -1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (来自 SHI Labs) 伴随论文 [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) 由 Ali Hassani and Humphrey Shi 发布。 -1. 
**[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。 -1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。 -1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (来自 NAVER) 伴随论文 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 由 Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 发布。 -1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。 -1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。 -1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (来自 Snap Research) 伴随论文 [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) 由 Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren 发布。 -1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le. -1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。 -1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 -1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (来自 Baidu) 伴随论文 [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu 发布。 -1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (来自 Baidu) 伴随论文 [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) 由 Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang 发布。 -1. 
**[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. -1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei -1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei -1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。 -1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。 -1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。 -1. 
**[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。 -1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (来自 Microsoft Research) 伴随论文 [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) 由 Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang 发布。 -1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。 -1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。 -1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy 发布。 -1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach -1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (来自 ABEJA) 由 Shinya Otani, Takayoshi Makabe, Anuj Arora, Kyo Hattori。 -1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。 -1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 伴随论文 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。 -1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. -1. 
**[GPTBigCode](https://huggingface.co/docs/transformers/main/model_doc/gpt_bigcode)** (来自 BigCode) 伴随论文 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) 由 Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra 发布。 -1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by 坂本俊之(tanreinama). -1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu. -1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。 -1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。 -1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。 -1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。 -1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. -1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. -1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。 -1. 
**[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。 -1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。 -1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。 -1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 -1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。 -1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (来自 South China University of Technology) 伴随论文 [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) 由 Jiapeng Wang, Lianwen Jin, Kai Ding 发布。 -1. **[LLaMA](https://huggingface.co/docs/transformers/main/model_doc/llama)** (来自 The FAIR team of Meta AI) 伴随论文 [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) 由 Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample 发布。 -1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 -1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) released 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。 -1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。 -1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。 -1. 
**[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。 -1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。 -1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。 -1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (来自 Microsoft Research Asia) 伴随论文 [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) 由 Junlong Li, Yiheng Xu, Lei Cui, Furu Wei 发布。 -1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (来自 FAIR and UIUC) 伴随论文 [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) 由 Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar 发布。 -1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov -1. **[MatCha](https://huggingface.co/docs/transformers/main/model_doc/matcha)** (来自 Google AI) 伴随论文 [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) 由 Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos 发布。 -1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。 -1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。 -1. **[MEGA](https://huggingface.co/docs/transformers/main/model_doc/mega)** (来自 Facebook) 伴随论文 [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) 由 Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer 发布。 -1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 -1. 
**[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 -1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (来自 Alibaba Research) 伴随论文 [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) 由 Peng Wang, Cheng Da, and Cong Yao 发布。 -1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。 -1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。 -1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (来自 Google Inc.) 伴随论文 [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) 由 Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam 发布。 -1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (来自 Google Inc.) 伴随论文 [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) 由 Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen 发布。 -1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。 -1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。 -1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。 -1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。 -1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (来自 SHI Labs) 伴随论文 [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) 由 Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi 发布。 -1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。 -1. 
**[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。 -1. **[NLLB-MOE](https://huggingface.co/docs/transformers/main/model_doc/nllb-moe)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。 -1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。 -1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (来自 SHI Labs) 伴随论文 [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) 由 Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi 发布。 -1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。 -1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。 -1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。 -1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (来自 Google) 伴随论文 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 由 Jason Phang, Yao Zhao, Peter J. Liu 发布。 -1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。 -1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。 -1. **[Pix2Struct](https://huggingface.co/docs/transformers/main/model_doc/pix2struct)** (来自 Google) 伴随论文 [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) 由 Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova 发布。 -1. 
**[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。 -1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。 -1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 -1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。 -1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。 -1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。 -1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。 -1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. -1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。 -1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. -1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。 -1. 
**[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (来自 Facebook) 伴随论文 [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) 由 Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli 发布。 -1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (来自 WeChatAI), 伴随论文 [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) 由 HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou 发布。 -1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。 -1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。 -1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 -1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 -1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (来自 Microsoft Research) 伴随论文 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) 由 Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei 发布。 -1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。 -1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。 -1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。 -1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。 -1. 
**[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。 -1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (来自 Microsoft) 伴随论文 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 由 Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 发布。 -1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (来自 University of Würzburg) 伴随论文 [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) 由 Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte 发布。 -1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. -1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 -1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 -1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (来自 Microsoft Research) 伴随论文 [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) 由 Brandon Smock, Rohith Pesala, Robin Abraham 发布。 -1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。 -1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。 -1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). -1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. -1. 
**[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine -1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。 -1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。 -1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (来自 UNC Chapel Hill) 伴随论文 [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) 由 Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal 发布。 -1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler -1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。 -1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。 -1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (来自 Peking University) 伴随论文 [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) 由 Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun 发布。 -1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。 -1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (来自 Multimedia Computing Group, Nanjing University) 伴随论文 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 由 Zhan Tong, Yibing Song, Jue Wang, Limin Wang 发布。 -1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。 -1. 
**[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。 -1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。 -1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。 -1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。 -1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (来自 Meta AI) 伴随论文 [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas 发布. -1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。 -1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。 -1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。 -1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. -1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (来自 OpenAI) 伴随论文 [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) 由 Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever 发布。 -1. 
**[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (来自 Microsoft Research) 伴随论文 [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) 由 Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling 发布。 -1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (来自 Meta AI) 伴随论文 [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) 由 Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe 发布。 -1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. -1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。 -1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 -1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。 -1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。 -1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (来自 Meta AI) 伴随论文 [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) 由 Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa 发布。 -1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。 -1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。 -1. 
**[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。 -1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。 -1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。 -1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到他们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。 - -要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。 - -这些实现均已于多个数据集测试(请参看用例脚本)并应于原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。 - - -## 了解更多 - -| 章节 | 描述 | -|-|-| -| [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 | -| [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 | -| [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 | -| [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 | -| [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 | -| [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 和社区上传和分享你微调的模型 | -| [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers | - -## 引用 - -我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用: -```bibtex -@inproceedings{wolf-etal-2020-transformers, - title = "Transformers: State-of-the-Art Natural Language Processing", - author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. 
Rush", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", - month = oct, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", - pages = "38--45" -} -``` diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/old_test_seq2seq_examples_multi_gpu.py b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/old_test_seq2seq_examples_multi_gpu.py deleted file mode 100644 index 6625f061b5660793a5a054acd4eab518622bf5f8..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/old_test_seq2seq_examples_multi_gpu.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# as due to their complexity multi-gpu tests could impact other tests, and to aid debug we have those in a separate module. - -import os -import sys - -from transformers.testing_utils import TestCasePlus, execute_subprocess_async, get_gpu_count, require_torch_gpu, slow - -from .utils import load_json - - -class TestSummarizationDistillerMultiGPU(TestCasePlus): - @classmethod - def setUpClass(cls): - return cls - - @slow - @require_torch_gpu - def test_distributed_eval(self): - output_dir = self.get_auto_remove_tmp_dir() - args = f""" - --model_name Helsinki-NLP/opus-mt-en-ro - --save_dir {output_dir} - --data_dir {self.test_file_dir_str}/test_data/wmt_en_ro - --num_beams 2 - --task translation - """.split() - - # we want this test to run even if there is only one GPU, but if there are more we use them all - n_gpu = get_gpu_count() - distributed_args = f""" - -m torch.distributed.launch - --nproc_per_node={n_gpu} - {self.test_file_dir}/run_distributed_eval.py - """.split() - cmd = [sys.executable] + distributed_args + args - execute_subprocess_async(cmd, env=self.get_env()) - - metrics_save_path = os.path.join(output_dir, "test_bleu.json") - metrics = load_json(metrics_save_path) - # print(metrics) - self.assertGreaterEqual(metrics["bleu"], 25) diff --git a/spaces/chrisjay/mnist-adversarial/README.md b/spaces/chrisjay/mnist-adversarial/README.md deleted file mode 100644 index 628bf43cd85e7a559ae3b33a5d04d88a29b9608c..0000000000000000000000000000000000000000 --- a/spaces/chrisjay/mnist-adversarial/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MNIST Adversarial -emoji: 🔢 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/cihyFjudo/fairness-paper-search/Gemini Pattern Editor Keygen BETTER.md b/spaces/cihyFjudo/fairness-paper-search/Gemini Pattern Editor Keygen BETTER.md deleted file mode 100644 index 46912c6485a29db558a456db3c46a93aef07eff8..0000000000000000000000000000000000000000 --- 
a/spaces/cihyFjudo/fairness-paper-search/Gemini Pattern Editor Keygen BETTER.md +++ /dev/null @@ -1,43 +0,0 @@ -## Gemini Pattern Editor Keygen - - - -**Click Here → [https://walllowcopo.blogspot.com/?download=2twr35](https://walllowcopo.blogspot.com/?download=2twr35)** - - - -# How to Download and Install Gemini Pattern Editor v.X9 with Keygen - - - -Gemini Pattern Editor v.X9 is a software program that allows you to design, digitize, verify, and print your patterns for sewn goods. It is developed by Gemini Cad Systems, a leading company in the field of apparel design and production. Gemini Pattern Editor v.X9 offers quick and accurate pattern design, using basic design tools and advanced geometrical procedures. You can also check the integrity of your finished design and export it to various formats. - - - -If you are looking for a way to download and install Gemini Pattern Editor v.X9 with keygen, you have come to the right place. In this article, we will show you how to get this software for free and use it without any limitations. Follow these steps carefully and enjoy your new pattern editor. - - - -1. Go to [this link](https://gemini-pattern-editor-v-x9.software.informer.com/9.0/) [^1^] and click on the "Download" button. You will be redirected to a page where you can choose a mirror site to download the software from. Choose the one that is closest to your location and click on it. - -2. Save the file to your computer and run it. You will see a setup wizard that will guide you through the installation process. Follow the instructions on the screen and accept the terms and conditions. Choose the destination folder where you want to install the software and click on "Next". - -3. Wait for the installation to complete and click on "Finish". You have successfully installed Gemini Pattern Editor v.X9 on your computer. - -4. Now you need to activate the software with a keygen. A keygen is a program that generates a serial number or a license key for a software product. You can download a keygen for Gemini Pattern Editor v.X9 from [this link](https://pastebin.com/qEi8qCyy) [^3^]. Save the file to your computer and run it. - -5. You will see a window that asks you to enter your name and email address. Enter any name and email address that you want and click on "Generate". You will see a serial number or a license key that you can use to activate Gemini Pattern Editor v.X9. - -6. Open Gemini Pattern Editor v.X9 on your computer and go to "Help" > "About". You will see a window that shows your software version and registration status. Click on "Register" and enter the serial number or license key that you generated with the keygen. Click on "OK". - -7. You have successfully activated Gemini Pattern Editor v.X9 with keygen. You can now use all the features of this software without any limitations. - - - -Congratulations! You have learned how to download and install Gemini Pattern Editor v.X9 with keygen. This software will help you create amazing patterns for your sewn goods projects. If you have any questions or problems, feel free to contact Gemini Cad Systems at [this link](https://www.geminicad.com/contact/). They will be happy to assist you. - - - -If you liked this article, please share it with your friends and colleagues who might be interested in Gemini Pattern Editor v.X9. Also, don't forget to check out our other articles on SEO optimization and HTML formatting for more tips and tricks on how to write engaging and effective web content. 
- - 1b8d091108 \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Vimeo - Man Records His Journey Across America on a Bike.md b/spaces/cihyFjudo/fairness-paper-search/Vimeo - Man Records His Journey Across America on a Bike.md deleted file mode 100644 index d71ba6061d09bf0d2dc0a28be4f3fa7abf65a5c8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Vimeo - Man Records His Journey Across America on a Bike.md +++ /dev/null @@ -1,5 +0,0 @@ - -

A film that explores Australian masculinity, featuring a cover by Kirin J Callinan of the 1980s hit 'You Think You're A Man' by Divine. Premiered on Clash magazine - clashmusic.com/videos/kirin-j-callinan-kim-gehrig-explore-australian-masculinity (uploaded by Elise Butt, April 26, 2018).

    -

Vimeo - Man Records His Journey Across America on a Bike


    Download Zip ··· https://tinurli.com/2uwjQ8



    -
    -
    \ No newline at end of file diff --git a/spaces/codertoro/gpt-academic/colorful.py b/spaces/codertoro/gpt-academic/colorful.py deleted file mode 100644 index d90972bb30a8f8fb932abbc34232e474df4d5205..0000000000000000000000000000000000000000 --- a/spaces/codertoro/gpt-academic/colorful.py +++ /dev/null @@ -1,91 +0,0 @@ -import platform -from sys import stdout - -if platform.system()=="Linux": - pass -else: - from colorama import init - init() - -# Do you like the elegance of Chinese characters? -def print红(*kw,**kargs): - print("\033[0;31m",*kw,"\033[0m",**kargs) -def print绿(*kw,**kargs): - print("\033[0;32m",*kw,"\033[0m",**kargs) -def print黄(*kw,**kargs): - print("\033[0;33m",*kw,"\033[0m",**kargs) -def print蓝(*kw,**kargs): - print("\033[0;34m",*kw,"\033[0m",**kargs) -def print紫(*kw,**kargs): - print("\033[0;35m",*kw,"\033[0m",**kargs) -def print靛(*kw,**kargs): - print("\033[0;36m",*kw,"\033[0m",**kargs) - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - - - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - -print_red = print红 -print_green = print绿 -print_yellow = print黄 -print_blue = print蓝 -print_purple = print紫 -print_indigo = print靛 - -print_bold_red = print亮红 -print_bold_green = print亮绿 -print_bold_yellow = print亮黄 -print_bold_blue = print亮蓝 -print_bold_purple = print亮紫 -print_bold_indigo = print亮靛 - -if not stdout.isatty(): - # redirection, avoid a fucked up log file - print红 = print - print绿 = print - print黄 = print - print蓝 = print - print紫 = print - print靛 = print - print亮红 = print - print亮绿 = print - print亮黄 = print - print亮蓝 = print - print亮紫 = print - print亮靛 = print - print_red = print - print_green = print - print_yellow = print - print_blue = print - print_purple = print - print_indigo = print - print_bold_red = print - print_bold_green = print - print_bold_yellow = print - print_bold_blue = print - print_bold_purple = print - print_bold_indigo = print \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/synth_filter_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/synth_filter_init_arm.c deleted file mode 100644 index 858c117d39b2009f0ed3ae107a5da5db09e8b825..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/synth_filter_init_arm.c +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Copyright (c) 2010 Mans Rullgard - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config.h" - -#include "libavutil/arm/cpu.h" -#include "libavutil/attributes.h" -#include "libavutil/internal.h" -#include "libavcodec/fft.h" -#include "libavcodec/synth_filter.h" - -void ff_synth_filter_float_vfp(AVTXContext *imdct, - float *synth_buf_ptr, int *synth_buf_offset, - float synth_buf2[32], const float window[512], - float out[32], float in[32], - float scale, av_tx_fn imdct_fn); - -void ff_synth_filter_float_neon(AVTXContext *imdct, - float *synth_buf_ptr, int *synth_buf_offset, - float synth_buf2[32], const float window[512], - float out[32], float in[32], - float scale, av_tx_fn imdct_fn); - -av_cold void ff_synth_filter_init_arm(SynthFilterContext *s) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_vfp_vm(cpu_flags)) - s->synth_filter_float = ff_synth_filter_float_vfp; - if (have_neon(cpu_flags)) - s->synth_filter_float = ff_synth_filter_float_neon; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1dec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1dec.h deleted file mode 100644 index cef899f81f7d01dd80b064769d33a565a47db6c6..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1dec.h +++ /dev/null @@ -1,97 +0,0 @@ -/* - * AV1 video decoder - * * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_AV1DEC_H -#define AVCODEC_AV1DEC_H - -#include - -#include "libavutil/fifo.h" -#include "libavutil/buffer.h" -#include "libavutil/frame.h" -#include "libavutil/pixfmt.h" -#include "avcodec.h" -#include "cbs.h" -#include "cbs_av1.h" - -typedef struct AV1Frame { - AVFrame *f; - - AVBufferRef *hwaccel_priv_buf; - void *hwaccel_picture_private; - - AVBufferRef *header_ref; - AV1RawFrameHeader *raw_frame_header; - - int temporal_id; - int spatial_id; - - uint8_t gm_invalid[AV1_NUM_REF_FRAMES]; - uint8_t gm_type[AV1_NUM_REF_FRAMES]; - int32_t gm_params[AV1_NUM_REF_FRAMES][6]; - - uint8_t skip_mode_frame_idx[2]; - - AV1RawFilmGrainParams film_grain; - - uint8_t coded_lossless; -} AV1Frame; - -typedef struct TileGroupInfo { - uint32_t tile_offset; - uint32_t tile_size; - uint16_t tile_row; - uint16_t tile_column; -} TileGroupInfo; - -typedef struct AV1DecContext { - const AVClass *class; - AVCodecContext *avctx; - - enum AVPixelFormat pix_fmt; - CodedBitstreamContext *cbc; - CodedBitstreamFragment current_obu; - - AVBufferRef *seq_ref; - AV1RawSequenceHeader *raw_seq; - AVBufferRef *header_ref; - AV1RawFrameHeader *raw_frame_header; - TileGroupInfo *tile_group_info; - - AVBufferRef *cll_ref; - AV1RawMetadataHDRCLL *cll; - AVBufferRef *mdcv_ref; - AV1RawMetadataHDRMDCV *mdcv; - AVFifo *itut_t35_fifo; - - uint16_t tile_num; - uint16_t tg_start; - uint16_t tg_end; - - int operating_point_idc; - - AV1Frame ref[AV1_NUM_REF_FRAMES]; - AV1Frame cur_frame; - - // AVOptions - int operating_point; -} AV1DecContext; - -#endif /* AVCODEC_AV1DEC_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bethsoftvideo.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bethsoftvideo.h deleted file mode 100644 index d5b5d0a5258ffe182480804f3d83a5e1c9407f13..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bethsoftvideo.h +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Bethesda VID video decoder - * Copyright (C) 2007 Nicholas Tung - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_BETHSOFTVIDEO_H -#define AVCODEC_BETHSOFTVIDEO_H - -enum BethsoftVidBlockType -{ - PALETTE_BLOCK = 0x02, - FIRST_AUDIO_BLOCK = 0x7c, - AUDIO_BLOCK = 0x7d, - VIDEO_I_FRAME = 0x03, - VIDEO_P_FRAME = 0x01, - VIDEO_YOFF_P_FRAME = 0x04, - EOF_BLOCK = 0x14, -}; - -#endif /* AVCODEC_BETHSOFTVIDEO_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Messenger Emoji Update 2022 APK A Review of the New Emojis and How to Use Them.md b/spaces/congsaPfin/Manga-OCR/logs/Messenger Emoji Update 2022 APK A Review of the New Emojis and How to Use Them.md deleted file mode 100644 index 3a71dcf7566f7c0d4b2eee7db2ed92c6725eff1e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Messenger Emoji Update 2022 APK A Review of the New Emojis and How to Use Them.md +++ /dev/null @@ -1,140 +0,0 @@ - -

    Messenger Emoji Update 2022 APK: Everything You Need to Know

    -

    If you are a frequent user of Facebook Messenger, you might have noticed some changes in your emoji keyboard recently. That's because Facebook has released a new emoji update for its messenger app, which includes over 200 brand new emojis, such as the Face with Spiral Eyes, the Heart on Fire, and the Woman with Beard. In this article, we will explain what is messenger emoji update 2022 apk, why you should care about it, and how you can download, install, and use it on your device.

    -

    What is messenger emoji update 2022 apk?

    -

    Messenger emoji update 2022 apk is a file that contains the latest version of Facebook's emoji designs for its messenger app. APK stands for Android Package Kit, which is a format used to distribute and install applications on Android devices. By downloading and installing this file, you can get access to all the new emojis that Facebook has added to its messenger app, as well as some improvements and bug fixes.
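Since an APK is ultimately just a ZIP archive with a fixed layout, you can sanity-check a downloaded file before sideloading it. The snippet below is only a minimal illustration in Python (the file name is a placeholder for whatever you downloaded), not something shipped with the update:

```python
import zipfile

apk_path = "messenger_emoji_update_2022.apk"  # placeholder name for the downloaded file

with zipfile.ZipFile(apk_path) as apk:
    # testzip() returns the name of the first corrupt member, or None if the archive is intact
    print("first corrupt entry:", apk.testzip())
    # every valid APK contains an AndroidManifest.xml describing the app
    print("has AndroidManifest.xml:", "AndroidManifest.xml" in apk.namelist())
```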

    -

    messenger emoji update 2022 apk


    Download File ✫✫✫ https://urlca.com/2uOcBY



    -

    Why is it important to know about it?

    -

    Emojis are more than just cute icons that you can use to express your emotions or spice up your conversations. They are also a way of communicating across cultures, languages, and platforms. Emojis can help you convey your tone, intention, and personality in a way that words alone cannot. They can also make your messages more engaging, fun, and memorable.

    -

    However, not all emojis are created equal. Different platforms and devices may have different designs and interpretations of the same emoji. For example, an emoji that looks happy on one platform may look sad or angry on another. This can lead to confusion, misunderstanding, or even offense among users. That's why it's important to keep your emoji keyboard updated with the latest standards and trends.

    -

    By downloading and installing messenger emoji update 2022 apk, you can ensure that you are using the most current and consistent emoji designs that Facebook has to offer. You can also enjoy some of the benefits and features that come with the update, such as:

    -
      -
    • More diversity and inclusivity: The update includes more gender-neutral options, skin tone variations, and representation of different cultures and identities.
    • -
    • More creativity and customization: The update allows you to create your own emojis by combining different elements, such as facial expressions, accessories, and backgrounds.
    • -
    • More compatibility and accessibility: The update supports more devices and platforms, as well as more languages and scripts.
    • -
    -

    How to download and install the update

    -

    If you want to get messenger emoji update 2022 apk on your device, here are the steps you need to follow:

    -
      -
    1. Go to this link on your browser and download the file.
    2. -
    3. Once the download is complete, open the file manager app on your device and locate the file.
    4. -
    5. Tap on the file and select "Install". You may need to enable "Unknown sources" in your settings if you haven't done so before.
    6. -
    7. Wait for the installation process to finish. You may need to grant some permissions or accept some terms and conditions.
    8. -
    9. Open the messenger app and enjoy your new emojis!
    10. -
    -

Tips and warnings:
- Be careful when downloading and installing files from unknown sources, as they may contain malware or viruses that can harm your device or compromise your privacy.

    -

    - Make sure you have enough storage space and battery life on your device before downloading and installing the update.

    -

    - If you encounter any problems or errors during the download or installation process, try restarting your device or clearing your cache and data.

    -


    -

    How to use the new emojis in messenger

    -

    Now that you have messenger emoji update 2022 apk on your device, you can start using the new emojis in your chats and conversations. Here are some tips on how to do that:

    -
      -
    • To access the emoji keyboard, tap on the smiley icon next to the text box in the messenger app.
    • -
    • To browse through the different categories of emojis, swipe left or right on the keyboard or tap on the icons at the bottom of the screen.
    • -
    • To see more options for each emoji, such as different skin tones, genders, or expressions, tap and hold on the emoji and select the one you want.
    • -
    • To create your own custom emojis, tap on the plus icon at the top right corner of the keyboard and follow the instructions. You can choose from different face shapes, eyes, mouths, hair styles, accessories, and backgrounds. You can also edit, delete, or rename your custom emojis.
    • -
    • To use your custom emojis in your chats, tap on the star icon at the bottom left corner of the keyboard and select the one you want.
    • -
    • To send an emoji, simply tap on it and it will appear in the text box. You can also add text, stickers, gifs, or images to your message.
    • -
    • To react to a message with an emoji, tap and hold on the message and select the emoji you want. You can also swipe up on a message to see more reaction options.
    • -
    -

    How to troubleshoot common issues with the update

    -

    While messenger emoji update 2022 apk is designed to enhance your messaging experience, it may also cause some issues or glitches that can affect your app's performance or functionality. Here are some of the common problems that users may encounter with the update and how to fix them:

    - - - - - - - -
    ProblemSolution
    The update does not show up on my device or I cannot download it.Make sure you have a stable internet connection and enough storage space on your device. Check if your device is compatible with the update and meets the minimum requirements. Try using a different browser or source to download the file. If none of these work, contact Facebook support for assistance.
    The update causes my app to crash or freeze.Try restarting your device or force closing and reopening the app. Clear your cache and data or uninstall and reinstall the app. Check if there are any updates available for your device's software or for the messenger app itself. If none of these work, contact Facebook support for assistance.
    The update changes my default emoji keyboard or I cannot see the new emojis.Make sure you have enabled messenger as your default emoji keyboard in your settings. Check if you have selected the correct language and region for your keyboard. Try switching between different keyboards or restarting your app. If none of these work, contact Facebook support for assistance.
    The update affects my battery life or data usage.Make sure you have optimized your device's battery settings and turned off any unnecessary features or apps that may drain your battery. Check if you have enabled data saver mode or limited your data usage for messenger in your settings. Try using Wi-Fi instead of mobile data when possible. If none of these work, contact Facebook support for assistance.
    The update does not work well with other apps or platforms.Make sure you have updated all your other apps and platforms that may use emojis to their latest versions. Check if there are any compatibility issues or conflicts between messenger and other apps or platforms. Try using different apps or platforms to test if the problem persists. If none of these work, contact Facebook support for assistance.
    -

    Conclusion

    -

    Messenger emoji update 2022 apk is a great way to spice up your chats and conversations with over 200 new emojis that are more diverse, creative, and compatible than ever before. By downloading and installing this file, you can enjoy all the benefits and features that come with this update, such as more gender-neutral options, skin tone variations, representation of different cultures and identities, customization and creation of your own emojis, support for more devices and platforms, as well as improvements and bug fixes.

    -

If you want to learn more about messenger emoji update 2022 apk, you can visit this website for more information and resources. You can also check out this blog post for some tips and tricks on how to make the most of your new emojis. We hope you enjoy using the new emojis and have fun chatting with your friends and family on messenger!

    FAQs

    -

    Here are some of the frequently asked questions that users may have about messenger emoji update 2022 apk:

    -
      -
    1. What are some of the most popular new emojis in the update?
    2. -

      Some of the most popular new emojis in the update are the Face with Spiral Eyes, which can be used to express confusion, dizziness, or hypnosis; the Heart on Fire, which can be used to express passion, love, or excitement; and the Woman with Beard, which can be used to represent gender diversity, self-expression, or confidence.

      -
    3. How can I switch back to the old emojis if I don't like the new ones?
    4. -

      If you prefer the old emojis over the new ones, you can switch back to them by following these steps:

      -
        -
      • Go to your device's settings and select "Apps" or "Applications".
      • -
      • Find and tap on "Messenger" and select "Storage".
      • -
      • Tap on "Clear data" or "Clear cache".
      • -
      • Restart your device and open the messenger app.
      • -
      • You should see the old emojis on your keyboard.
      • -
      -

      Note that this may delete some of your app's data or settings, so make sure you back up anything important before doing this.

      -
    5. How can I share my custom emojis with other users?
    6. -

      If you want to share your custom emojis with other users, you can do so by following these steps:

      -
        -
      • Create your custom emoji using the plus icon on the emoji keyboard.
      • -
      • Tap on the star icon at the bottom left corner of the keyboard and select your custom emoji.
      • -
      • Tap on the share icon at the top right corner of the screen and choose how you want to share it. You can send it as a message, a sticker, or a link.
      • -
      • The recipient will be able to see and use your custom emoji if they have messenger emoji update 2022 apk installed on their device.
      • -
      -
    7. How can I access more emojis from other sources or platforms?
    8. -

      If you want to access more emojis from other sources or platforms, you can do so by following these steps:

      -
        -
      • Go to this website and browse through thousands of emojis from different categories and providers.
      • -
      • Select the emoji you want and copy it to your clipboard.
      • -
      • Paste it in your message box in the messenger app and send it.
      • -
      • The recipient will be able to see and use the emoji if they have messenger emoji update 2022 apk installed on their device.
      • -
      -
    9. How can I keep my messenger app updated with the latest emoji changes?
    10. -

      If you want to keep your messenger app updated with the latest emoji changes, you can do so by following these steps:

      -
        -
      • Go to your device's settings and select "Apps" or "Applications".
      • -
      • Find and tap on "Messenger" and select "Update".
      • -
      • Wait for the update process to finish. You may need to grant some permissions or accept some terms and conditions.
      • -
      • Open the messenger app and enjoy the latest emoji changes!
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Mobile Legends Bang Bang A Free-to-Play MOBA with a Large Cast of Characters and a Stream Library - Download for Android from Softonic.md b/spaces/congsaPfin/Manga-OCR/logs/Mobile Legends Bang Bang A Free-to-Play MOBA with a Large Cast of Characters and a Stream Library - Download for Android from Softonic.md deleted file mode 100644 index bf1c9352e148b2c85a50885037e36d94e65e0f84..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Mobile Legends Bang Bang A Free-to-Play MOBA with a Large Cast of Characters and a Stream Library - Download for Android from Softonic.md +++ /dev/null @@ -1,202 +0,0 @@ - - - -

      How to Download MLBB Softonic: A Guide for MOBA Fans


      Now that you have successfully downloaded and installed MLBB Softonic on PC, you are ready to enjoy the game. But before you jump into the action, here are some tips and tricks that will help you play better and have more fun.


      The last thing to do when playing MLBB Softonic on PC is to have fun and enjoy the game. MLBB Softonic is a great way to experience the thrill and excitement of MOBA games on your PC. You can play with your friends, meet new people, and compete with players from all over the world. You can also explore the different modes, heroes, skins, and events that MLBB Softonic has to offer. You can also join the MLBB community and share your feedback, suggestions, and opinions with other players and developers.


      If you are a fan of multiplayer online battle arena (MOBA) games, you might have heard of Mobile Legends Bang Bang (MLBB), a popular free Android game that lets you fight against real players in 5v5 matches. But did you know that you can also play MLBB on your PC using an emulator called Gameloop? In this article, we will show you how to download and install MLBB Softonic, a version of the game that is available on Softonic.com, a website that offers safe and reliable software downloads. We will also give you some tips and tricks on how to play MLBB Softonic on PC like a pro.

      What is MLBB Softonic?


      MLBB Softonic is a free-to-play MOBA game that features classic combat against real opponents with different skill sets. You need to choose from a range of heroes with unique abilities and roles, such as tanks, marksmen, assassins, mages, and supports. You need to fight over three lanes to take down the enemy's tower and defend your own. You can also jungle, push, teamfight, and roam around the map to gain advantages. You can also enjoy live streaming and watching other players' matches in MLBB Softonic.

      -

      download mlbb softonic


      Download Filehttps://urlca.com/2uOd8a



      Why Play MLBB Softonic?

      Advantages of Playing on PC

      Playing MLBB Softonic on PC has many advantages over playing on your mobile device. Here are some of them:

      -
        -
      • You can use your keyboard and mouse to control your hero, which gives you more accuracy and speed than using your fingers on a touchscreen.
      • -
      • You can enjoy a larger and clearer screen, which helps you see the details and movements of the game better.
      • -
      • You can avoid battery drain, overheating, and lag issues that might affect your mobile device during long gaming sessions.
      • -
      • You can access more features and settings that are not available on the mobile version, such as customizing your graphics, sound, and controls.
      • -

      Requirements to Play on PC

      To play MLBB Softonic on PC, you need to have a compatible device that meets the following system requirements:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Minimum RequirementsRecommended Requirements
      CPU: Dual-core 1.8GHz or higherCPU: Quad-core 2.0GHz or higher
      RAM: 3GB or higherRAM: 4GB or higher
      OS: Windows 7 or higherOS: Windows 10 or higher
      Graphics: Intel HD Graphics 3000 or higherGraphics: NVIDIA GeForce GTX 660 or higher
      Storage: 4GB or higherStorage: 8GB or higher
      Internet: Broadband connection with low latencyInternet: Broadband connection with low latency
      -

      If your device meets these requirements, you are ready to download and install MLBB Softonic on PC.

      How to Download and Install MLBB Softonic on PC?

      Step 1: Download Gameloop Emulator

      The first step to play MLBB Softonic on PC is to download Gameloop, an Android emulator that supports the game. Gameloop is a software that allows you to run Android apps and games on your PC. You can download Gameloop from its official website: https://gameloop.fun/. After downloading the installer, run it and follow the instructions to install and launch Gameloop on your PC.

      Step 2: Search for MLBB Softonic in Gameloop

      The next step is to search for MLBB Softonic in Gameloop. To do this, open Gameloop and click on the search icon on the top right corner. Type "MLBB Softonic" in the search box and press enter. You should see the game icon in the search result. Here is a screenshot of what it looks like:

      -MLBB Softonic in Gameloop search result

      Step 3: Download and Install MLBB Softonic in Gameloop

      The third step is to download and install MLBB Softonic in Gameloop. To do this, click on the game icon and then click on the download button. Wait for the game to be downloaded and installed in Gameloop. You can see the progress of the installation in the bottom left corner. Here is a screenshot of what it looks like:

      -MLBB Softonic installation progress in Gameloop

      Step 4: Launch and Play MLBB Softonic in Gameloop

      The final step is to launch and play MLBB Softonic in Gameloop. To do this, click on the play button and wait for the game to load. You should see the game interface with a login screen. You can use your Facebook or Google account to log in, or create a new account with your email or phone number. Here is a screenshot of what it looks like:

      -


      -MLBB Softonic login screen in Gameloop

      Tips and Tricks for Playing MLBB Softonic on PC

      Customize Your Controls and Settings

      One of the first things you should do when playing MLBB Softonic on PC is to customize your controls and settings. You can access the settings menu by clicking on the gear icon on the top right corner of the game interface. Here, you can adjust your controls, graphics, sound, and other preferences. You can also change the language of the game if you want. Here is a screenshot of the settings menu:

      -MLBB Softonic settings menu in Gameloop

      Choose Your Hero Wisely

      Another important thing to do when playing MLBB Softonic on PC is to choose your hero wisely. You can select from a range of heroes with unique abilities and roles, such as tanks, marksmen, assassins, mages, and supports. You should choose a hero that suits your playstyle and role in the team. For example, if you like to deal damage from a distance, you might want to pick a marksman or a mage. If you like to protect your allies and initiate fights, you might want to pick a tank or a support. Here are some examples of different hero types and their abilities:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Hero TypeExampleAbility
      TankGrockCan create walls and deal massive damage with his charge.
      MarksmanLaylaCan shoot enemies from a long range and increase her damage with her passive.
      AssassinLancelotCan dash and evade attacks and deal burst damage with his skills.
      MageAliceCan teleport and heal herself with her blood orbs and stun enemies with her ultimate.
      SupportRafaelaCan heal and speed up allies and slow down and damage enemies with her skills.
      -

      You can try out different heroes and see which ones you like best. You can also learn more about their skills, stats, and builds by clicking on the hero icon on the top left corner of the game interface.

      Communicate and Coordinate with Your Teammates

      A third thing to do when playing MLBB Softonic on PC is to communicate and coordinate with your teammates. You can use the chat and voice functions in the game to communicate with your teammates. You can also use the quick chat and signal buttons to send messages and alerts to your team. Here are some tips on how to cooperate and strategize with your team:

      -
        -
      • Use the chat and voice functions to greet your teammates, discuss your plans, and give feedback.
      • -
      • Use the quick chat and signal buttons to inform your teammates of your actions, such as going to a lane, jungling, ganking, retreating, or requesting backup.
      • -
      • Use the map to see the positions of your allies and enemies, and ping the locations of interest, such as objectives, enemies, or dangers.
      • -
      • Follow your role and lane assignment, and help your teammates when they need it.
      • -
      • Don't flame or blame your teammates, and be respectful and positive.
      • -

      Learn from Other Players and Streamers

      A fourth thing to do when playing MLBB Softonic on PC is to learn from other players and streamers. You can watch live streams and replays of other players in the game to learn from their skills and strategies. You can also follow some popular streamers and videos on platforms like YouTube, Twitch, or Facebook. Here are some links to some of them:

      -

      Conclusion


      In this article, we have shown you how to download and install MLBB Softonic on PC using Gameloop emulator. We have also given you some tips and tricks on how to play MLBB Softonic on PC like a pro. We hope that this article has been helpful and informative for you. If you are a fan of MOBA games, you should definitely try out MLBB Softonic on PC and see for yourself why it is one of the most popular games in the genre. You will not regret it!

      -

      So what are you waiting for? Download MLBB Softonic on PC today and join the millions of players who are enjoying the game. Have fun and good luck!

      FAQs


      Here are some frequently asked questions and answers about MLBB Softonic on PC:

      -
        -
      1. Is MLBB Softonic safe to download and play?
        Yes, MLBB Softonic is safe to download and play. Softonic.com is a reputable website that offers safe and reliable software downloads. Gameloop is also a trusted Android emulator that supports MLBB Softonic. You can download and play MLBB Softonic on PC without any worries.
      2. -
      3. Is MLBB Softonic free to play?
        Yes, MLBB Softonic is free to play. You can download and play the game without paying anything. However, you can also purchase in-game items and currency with real money if you want to enhance your gaming experience.
      4. -
      5. Can I play MLBB Softonic with my friends?
        Yes, you can play MLBB Softonic with your friends. You can invite your friends to join your team or match with them in the game. You can also chat and voice with them in the game.
      6. -
      7. Can I play MLBB Softonic offline?
        No, you cannot play MLBB Softonic offline. You need to have an internet connection to play the game. You also need to log in with your account to access the game.
      8. -
      9. Can I transfer my progress from mobile to PC?
        Yes, you can transfer your progress from mobile to PC. You just need to log in with the same account that you use on your mobile device. You can then continue playing where you left off.
      10. -

      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Adguard License Key __FULL__ Keygen.md b/spaces/contluForse/HuggingGPT/assets/Adguard License Key __FULL__ Keygen.md deleted file mode 100644 index 6c3f277e52e71e57d097ccd43ac371abf2504f68..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Adguard License Key __FULL__ Keygen.md +++ /dev/null @@ -1,14 +0,0 @@ -

      Adguard License Key Keygen


      Download Zip 🌟 https://ssurll.com/2uzxEm



      -
      -July 19, 2021 — AdGuard Premium Version Crack + Patch Free Download: AdGuard 7.4.3238 Premium License Key Crack gives you lifetime access to ... AdGuard Premium Crack & Keygen Free Download -... Download Adguard Premium 7.4.3238 (Premium + key) Download for free ... -Adguard Premium 7.4.3238 is an anti-banner program that will protect your computer from ... -Download Adguard Premium 7.4.3238 + Adguard 7.3.2398 ... -Jul 19 2018 -Adguard Premium 7.4.3238 + Crack (Premium + key) -AdGuard is the best protection for... -Adguard Premium 7.4.3238 + Crack (Premium + key). -Adguard 7.4.3238 Premium + Crack - the best Internet filter to protect against ... 8a78ff9644
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Ama Ata Aidoo The Message PDF Download A Comparison with Other Works by the Author.md b/spaces/contluForse/HuggingGPT/assets/Ama Ata Aidoo The Message PDF Download A Comparison with Other Works by the Author.md deleted file mode 100644 index 00a0b4637b445d5e952da1ea95b13ee9e7963800..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Ama Ata Aidoo The Message PDF Download A Comparison with Other Works by the Author.md +++ /dev/null @@ -1,5 +0,0 @@ -
      -

      Ama Ata Aidoo's message in this story isn't exclusive to Ghana. The same thing happens in governments all over the world, including the United States. Leaders are frequently replaced with the hope of returning the country to a better version of its most moral and stable form to find that each new version is as bad or worse than the one before. Furthermore, it's a cautionary tale about sacrificing ethics in the name of capitalism.

      -

      ama ata aidoo the message pdf download


      Downloadhttps://ssurll.com/2uzw7v



      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/AutoDesk Civil 3D 2014 X64 (64bit) (Product Key And Xforce Keygen).md b/spaces/contluForse/HuggingGPT/assets/AutoDesk Civil 3D 2014 X64 (64bit) (Product Key And Xforce Keygen).md deleted file mode 100644 index 0e45fde78a380d07e79d2a349b0811ac0fd5e979..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/AutoDesk Civil 3D 2014 X64 (64bit) (Product Key And Xforce Keygen).md +++ /dev/null @@ -1,9 +0,0 @@ - -

      Revolutionary software! Make My Crack is a combination of a decompiler and a cracker. It can crack any cracked software and uncrack any activated software. AutoCAD Civil 3D 2016 x64 (32/64 Bit) (Product Key And Xforce. Avira AntiVir Free Edition Crack is a powerful antivirus software from Avira, which offer a fast, secure and reliable virus & Anti-spyware protection.

      -

      AutoCAD Home. AutoCAD 2019 (32/64 Bit) Installation (Product Key And Xforce. Autocad, civil, architecture, land, surveying, cad. Autocad 2021. http://minicatrina.com/a0973/Autocad-2019-Automation-Now-Get-Download-The-Latest-Draft-2016-All-Version-100-Pass-Exam.

      -

      AutoDesk Civil 3D 2014 x64 (64bit) (Product Key and Xforce Keygen)


      Download 🔗 https://ssurll.com/2uzwcp



      -

      autocad 2013 x64 xforce keygen. /profile/AutoCAD-Civil-3D-2014-X64-64bit-Product-Key-And-Xforce-Keygen-travaure/. /Profile. Autocad 2012 Todos Los Programas de Autodesk (Product key and Xforce.

      -

      Autocad 2013 64bit Serial Number. AutoCAD 2018 Serial Number. AutoCAD 2018 For The New Civil 3D, 2016 / 2017 With Autocad 2018 Product Key XForce Keygen. AutoCAD 2017 Update. When activating any software or program using any activator, xForce, keygen, key generator, or patcher, and you face this error that Make.

      -

      In this article, I will show you how to get free Civil 3D.
      AutoDesk Civil 3D 2014 x64 (64bit) (Product Key and Xforce Keygen)

      This is a typical installation workflow for Autodesk Civil 3D 2021, including major selections/recommendations while installing it first. Downloads for the Xforce keygen Autocad Land Desktop 2009 64 Bit 2549.. Xforce. Autocad Land. Civil 3d Land Desktop Companion 2009 Keygen Autocad Land.

      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Daddy 1 Movie Download LINK Utorrent.md b/spaces/contluForse/HuggingGPT/assets/Daddy 1 Movie Download LINK Utorrent.md deleted file mode 100644 index af0c9bd27b3e02d1bdbed5ebe9822bfd1e671e78..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Daddy 1 Movie Download LINK Utorrent.md +++ /dev/null @@ -1,7 +0,0 @@ -
      -

      At a glance, Project GXS is similar to one of the many fan-created blogs on anime. But clicking onto the index brings up a mammoth list of all the movies listed on the site. Some of the titles offer direct downloads apart from torrenting.

      -

      The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

      -

      Daddy 1 movie download utorrent


      Download File - https://ssurll.com/2uzvXs



      -


      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/EViews Mac-torrent.torrent _BEST_.md b/spaces/contluForse/HuggingGPT/assets/EViews Mac-torrent.torrent _BEST_.md deleted file mode 100644 index 3842c738a7febd64f7563d3eb20f7b3931b07e35..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/EViews Mac-torrent.torrent _BEST_.md +++ /dev/null @@ -1,54 +0,0 @@ -

      EViews mac-torrent.torrent


      DOWNLOADhttps://ssurll.com/2uzy9A



      -
      -Click on the link: - -It is legal for all pupils in the U.S. to download this EViews version for Windows. Students may use it to learn, practice, and study about cells, organic chemistry, or other topics. - -Students may also use the EViews Student Version Lite for macOS. EViews Lite is free! EViews Lite is the student version of the EViews 12 desktop app for the Mac. Students can download EViews Student Version Lite for macOS. - -English, Social Studies, Business, Science, and many other subjects. - -EViews Student Version Lite is free! Students can download EViews Student Version Lite for Windows and Mac. EViews Lite is free! - -The EViews Student Version Lite is the free student version of the EViews desktop app for the Mac. - -You can't buy or sell EViews Lite on AppStore, just use for learning and practice. - -EViews Lite is free! Students can download EViews Lite for Windows and Mac. EViews Lite is free! - -The features you can use with the EViews Lite: - -Construct a vocabulary list - -Create and use abbreviations - -Analyze - -Check by what you can read - -Quick check - -Create a PDF document - -Hinting - -Recording - -Diagrams - -Project - -Folders and portfolios - -EViews Lite for macOS is free! Students can download EViews Student Version Lite for macOS. EViews Lite for macOS is free! - -This is EViews Lite with an EViews for macOS desktop app. - -EViews Lite for macOS is free! Students can download EViews Lite for macOS. EViews Lite for macOS is free! - -The features you can use with the EViews Lite for macOS: - -EViews Lite for Windows is free! Students can download EViews Student Version Lite for Windows 4fefd39f24
      -
      -
      -

      diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roi_align.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roi_align.py deleted file mode 100644 index 0755aefc66e67233ceae0f4b77948301c443e9fb..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roi_align.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import deprecated_api_warning, ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['roi_align_forward', 'roi_align_backward']) - - -class RoIAlignFunction(Function): - - @staticmethod - def symbolic(g, input, rois, output_size, spatial_scale, sampling_ratio, - pool_mode, aligned): - from ..onnx import is_custom_op_loaded - has_custom_op = is_custom_op_loaded() - if has_custom_op: - return g.op( - 'mmcv::MMCVRoiAlign', - input, - rois, - output_height_i=output_size[0], - output_width_i=output_size[1], - spatial_scale_f=spatial_scale, - sampling_ratio_i=sampling_ratio, - mode_s=pool_mode, - aligned_i=aligned) - else: - from torch.onnx.symbolic_opset9 import sub, squeeze - from torch.onnx.symbolic_helper import _slice_helper - from torch.onnx import TensorProtoDataType - # batch_indices = rois[:, 0].long() - batch_indices = _slice_helper( - g, rois, axes=[1], starts=[0], ends=[1]) - batch_indices = squeeze(g, batch_indices, 1) - batch_indices = g.op( - 'Cast', batch_indices, to_i=TensorProtoDataType.INT64) - # rois = rois[:, 1:] - rois = _slice_helper(g, rois, axes=[1], starts=[1], ends=[5]) - if aligned: - # rois -= 0.5/spatial_scale - aligned_offset = g.op( - 'Constant', - value_t=torch.tensor([0.5 / spatial_scale], - dtype=torch.float32)) - rois = sub(g, rois, aligned_offset) - # roi align - return g.op( - 'RoiAlign', - input, - rois, - batch_indices, - output_height_i=output_size[0], - output_width_i=output_size[1], - spatial_scale_f=spatial_scale, - sampling_ratio_i=max(0, sampling_ratio), - mode_s=pool_mode) - - @staticmethod - def forward(ctx, - input, - rois, - output_size, - spatial_scale=1.0, - sampling_ratio=0, - pool_mode='avg', - aligned=True): - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.sampling_ratio = sampling_ratio - assert pool_mode in ('max', 'avg') - ctx.pool_mode = 0 if pool_mode == 'max' else 1 - ctx.aligned = aligned - ctx.input_shape = input.size() - - assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!' 
- - output_shape = (rois.size(0), input.size(1), ctx.output_size[0], - ctx.output_size[1]) - output = input.new_zeros(output_shape) - if ctx.pool_mode == 0: - argmax_y = input.new_zeros(output_shape) - argmax_x = input.new_zeros(output_shape) - else: - argmax_y = input.new_zeros(0) - argmax_x = input.new_zeros(0) - - ext_module.roi_align_forward( - input, - rois, - output, - argmax_y, - argmax_x, - aligned_height=ctx.output_size[0], - aligned_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - pool_mode=ctx.pool_mode, - aligned=ctx.aligned) - - ctx.save_for_backward(rois, argmax_y, argmax_x) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - rois, argmax_y, argmax_x = ctx.saved_tensors - grad_input = grad_output.new_zeros(ctx.input_shape) - # complex head architecture may cause grad_output uncontiguous. - grad_output = grad_output.contiguous() - ext_module.roi_align_backward( - grad_output, - rois, - argmax_y, - argmax_x, - grad_input, - aligned_height=ctx.output_size[0], - aligned_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - pool_mode=ctx.pool_mode, - aligned=ctx.aligned) - return grad_input, None, None, None, None, None, None - - -roi_align = RoIAlignFunction.apply - - -class RoIAlign(nn.Module): - """RoI align pooling layer. - - Args: - output_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sampling_ratio (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - pool_mode (str, 'avg' or 'max'): pooling mode in each bin. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - use_torchvision (bool): whether to use roi_align from torchvision. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - @deprecated_api_warning( - { - 'out_size': 'output_size', - 'sample_num': 'sampling_ratio' - }, - cls_name='RoIAlign') - def __init__(self, - output_size, - spatial_scale=1.0, - sampling_ratio=0, - pool_mode='avg', - aligned=True, - use_torchvision=False): - super(RoIAlign, self).__init__() - - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - self.sampling_ratio = int(sampling_ratio) - self.pool_mode = pool_mode - self.aligned = aligned - self.use_torchvision = use_torchvision - - def forward(self, input, rois): - """ - Args: - input: NCHW images - rois: Bx5 boxes. 
First column is the index into N.\ - The other 4 columns are xyxy. - """ - if self.use_torchvision: - from torchvision.ops import roi_align as tv_roi_align - if 'aligned' in tv_roi_align.__code__.co_varnames: - return tv_roi_align(input, rois, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.aligned) - else: - if self.aligned: - rois -= rois.new_tensor([0.] + - [0.5 / self.spatial_scale] * 4) - return tv_roi_align(input, rois, self.output_size, - self.spatial_scale, self.sampling_ratio) - else: - return roi_align(input, rois, self.output_size, self.spatial_scale, - self.sampling_ratio, self.pool_mode, self.aligned) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(output_size={self.output_size}, ' - s += f'spatial_scale={self.spatial_scale}, ' - s += f'sampling_ratio={self.sampling_ratio}, ' - s += f'pool_mode={self.pool_mode}, ' - s += f'aligned={self.aligned}, ' - s += f'use_torchvision={self.use_torchvision})' - return s diff --git a/spaces/cowboyonmars/nerijs-pixel-art-xl/app.py b/spaces/cowboyonmars/nerijs-pixel-art-xl/app.py deleted file mode 100644 index d731683bb04c95ad1721a5b4ca706a4e495a38df..0000000000000000000000000000000000000000 --- a/spaces/cowboyonmars/nerijs-pixel-art-xl/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/nerijs/pixel-art-xl").launch() \ No newline at end of file diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py deleted file mode 100644 index 37ba4c4420789c92dc0e2aaeb3d5b64859ec728c..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py +++ /dev/null @@ -1,45 +0,0 @@ -# # This file contains experimental modules - -import numpy as np -import torch -from torch import nn - -from facelib.detection.yolov5face.models.common import Conv - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class MixConv2d(nn.Module): - # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super().__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1e-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/utils/utils_config.py 
b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/utils/utils_config.py deleted file mode 100644 index 0c02eaf70fc0140aca7925f621c29a496f491cae..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/utils/utils_config.py +++ /dev/null @@ -1,16 +0,0 @@ -import importlib -import os.path as osp - - -def get_config(config_file): - assert config_file.startswith('configs/'), 'config file setting must start with configs/' - temp_config_name = osp.basename(config_file) - temp_module_name = osp.splitext(temp_config_name)[0] - config = importlib.import_module("configs.base") - cfg = config.config - config = importlib.import_module("configs.%s" % temp_module_name) - job_cfg = config.config - cfg.update(job_cfg) - if cfg.output is None: - cfg.output = osp.join('work_dirs', temp_module_name) - return cfg \ No newline at end of file diff --git a/spaces/dalitongxue/dalitongxue/Dockerfile b/spaces/dalitongxue/dalitongxue/Dockerfile deleted file mode 100644 index f9a3b89d0f99ca2cb677d0045539c9022b8a5bc3..0000000000000000000000000000000000000000 --- a/spaces/dalitongxue/dalitongxue/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# Use golang:alpine as the base image for the build stage -FROM golang:alpine AS builder - -# Add git so the project can be cloned from GitHub later -RUN apk --no-cache add git - -# Clone the go-proxy-bingai project from GitHub into /workspace/app -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# Set the working directory to the cloned project directory -WORKDIR /workspace/app - -# Build the Go project; -ldflags="-s -w" reduces the size of the compiled binary -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# Use the lightweight alpine image as the runtime base image -FROM alpine - -# Set the working directory -WORKDIR /workspace/app - -# Copy the compiled binary from the build stage into the runtime image -COPY --from=builder /workspace/app/go-proxy-bingai . - -# Set the environment variable (a random string here) -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQwYtX5rG6bE3fZ4i0" - -# Expose port 8080 -EXPOSE 8080 - -# Command to run when the container starts -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/danterivers/music-generation-samples/tests/modules/test_codebooks_patterns.py b/spaces/danterivers/music-generation-samples/tests/modules/test_codebooks_patterns.py deleted file mode 100644 index b658f4779a369f9ec8dde692a61b7f0fe3485724..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/tests/modules/test_codebooks_patterns.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import pytest -import torch - -from audiocraft.modules.codebooks_patterns import ( - DelayedPatternProvider, - ParallelPatternProvider, - Pattern, - UnrolledPatternProvider, -) - - -class TestParallelPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == s - 1 # account for the 1st empty step - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_max_delay(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == 0 - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestDelayedPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - delays = [ - list(range(n_q)), - [0] + [1] * (n_q - 1), - [0] + [4] * (n_q - 1), - ] - for delay in delays: - provider = DelayedPatternProvider(n_q, delay) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + max(delay) + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = DelayedPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == max(0, s - code.q - 1) - - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - @pytest.mark.parametrize("delay", [[0, 1, 2, 3], [0, 1, 1, 1], [0, 3, 3, 3], [0, 3]]) - def test_pattern_max_delay(self, timesteps: int, delay: list): - provider = DelayedPatternProvider(len(delay), delay) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max(delay) - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestUnrolledPatternProvider: - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_get_pattern(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert len(pattern.layout) == provider.num_virtual_steps(timesteps) + max_delay - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_pattern_max_delay(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = 
provider.get_pattern(timesteps) - assert pattern.max_delay == max_delay - - -class TestPattern: - - def ref_build_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to build the sequence from the pattern without using fancy scatter.""" - bs, n_q, T = z.shape - z = z.cpu().numpy() - assert n_q == pattern.n_q - assert T <= pattern.timesteps - inp = torch.full((bs, n_q, len(pattern.layout)), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < T: - inp[:, q, s] = z[:, q, t] - return torch.from_numpy(inp) - - def ref_revert_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to revert the sequence from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, n_q, S = z.shape - assert pattern.n_q == n_q - inp = torch.full((bs, pattern.n_q, pattern.timesteps), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < pattern.timesteps: - inp[:, q, t] = z[:, q, s] - return torch.from_numpy(inp) - - def ref_revert_pattern_logits(self, z: torch.Tensor, pattern: Pattern, special_token: float): - """Reference method to revert the logits from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, card, n_q, S = z.shape - assert pattern.n_q == n_q - ref_layout = pattern.layout - inp = torch.full((bs, card, pattern.n_q, pattern.timesteps), special_token, dtype=torch.float).numpy() - inp[:] = special_token - for s, v in enumerate(ref_layout[1:]): - if s < S: - for (t, q) in v: - if t < pattern.timesteps: - inp[:, :, q, t] = z[:, :, q, s] - return torch.from_numpy(inp) - - def _get_pattern_providers(self, n_q: int): - pattern_provider_1 = ParallelPatternProvider(n_q) - pattern_provider_2 = DelayedPatternProvider(n_q, list(range(n_q))) - pattern_provider_3 = DelayedPatternProvider(n_q, [0] + [1] * (n_q - 1)) - pattern_provider_4 = UnrolledPatternProvider( - n_q, flattening=list(range(n_q)), delays=[0] * n_q - ) - pattern_provider_5 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] * n_q - ) - pattern_provider_6 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] + [5] * (n_q - 1) - ) - return [ - pattern_provider_1, - pattern_provider_2, - pattern_provider_3, - pattern_provider_4, - pattern_provider_5, - pattern_provider_6, - ] - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_build_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # we can correctly build the sequence from the pattern - z = torch.randint(0, card, (bs, n_q, timesteps)) - ref_res = self.ref_build_pattern_sequence(z, pattern, special_token) - res, indexes, mask = pattern.build_pattern_sequence(z, special_token) - assert (res == ref_res).float().mean() == 1.0 - - # expected assertion fails on the number of timesteps - invalid_timesteps = [timesteps + 1] - if pattern.num_sequence_steps != pattern.timesteps: - invalid_timesteps.append(pattern.num_sequence_steps) - for i_timesteps in invalid_timesteps: - z2 = torch.randint(0, card, (bs, n_q, i_timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z2, special_token) - - # 
expected assertion fails on the number of codebooks - invalid_qs = [0, n_q - 1, n_q + 1] - for i_q in invalid_qs: - z3 = torch.randint(0, card, (bs, i_q, timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z3, special_token) - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_revert_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - ref_out = self.ref_revert_pattern_sequence(s, pattern, special_token) - # ensure our reference script retrieve the original sequence - assert z.shape == ref_out.shape - assert (z == ref_out).float().mean() == 1.0 - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_sequence(s, special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - @pytest.mark.parametrize("card", [1, 2, 256, 1024]) - def test_revert_pattern_logits(self, n_q: int, timesteps: int, card: int): - bs = 2 - special_token = card - logits_special_token = float('nan') - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - logits = torch.randn((bs, card, n_q, s.shape[-1])) - ref_out = self.ref_revert_pattern_logits(logits, pattern, logits_special_token) - # ensure our reference script retrieve the original sequence - assert ref_out.shape == torch.Size([bs, card, n_q, timesteps]) - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_logits(logits, logits_special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 diff --git a/spaces/davertor/colorizing_images/deoldify/dataset.py b/spaces/davertor/colorizing_images/deoldify/dataset.py deleted file mode 100644 index 316d4342f86d065cd3897f8fc0b678e94fd895a3..0000000000000000000000000000000000000000 --- a/spaces/davertor/colorizing_images/deoldify/dataset.py +++ /dev/null @@ -1,48 +0,0 @@ -import fastai -from fastai import * -from fastai.core import * -from fastai.vision.transform import get_transforms -from fastai.vision.data import ImageImageList, ImageDataBunch, imagenet_stats -from .augs import noisify - - -def get_colorize_data( - sz: int, - bs: int, - crappy_path: Path, - good_path: Path, - random_seed: int = None, - keep_pct: float = 1.0, - num_workers: int = 8, - stats: tuple = imagenet_stats, - xtra_tfms=[], -) -> ImageDataBunch: - - src = ( - ImageImageList.from_folder(crappy_path, convert_mode='RGB') - .use_partial_data(sample_pct=keep_pct, seed=random_seed) - .split_by_rand_pct(0.1, seed=random_seed) - ) - - data = ( - src.label_from_func(lambda x: good_path / x.relative_to(crappy_path)) - .transform( - get_transforms( - max_zoom=1.2, max_lighting=0.5, max_warp=0.25, xtra_tfms=xtra_tfms - ), - size=sz, - tfm_y=True, - ) - .databunch(bs=bs, num_workers=num_workers, no_check=True) - 
.normalize(stats, do_y=True) - ) - - data.c = 3 - return data - - -def get_dummy_databunch() -> ImageDataBunch: - path = Path('./dummy/') - return get_colorize_data( - sz=1, bs=1, crappy_path=path, good_path=path, keep_pct=0.001 - ) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otBase.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otBase.py deleted file mode 100644 index 9c80400e9420577f0d9d6f747e15b83e49f68e49..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otBase.py +++ /dev/null @@ -1,1458 +0,0 @@ -from fontTools.config import OPTIONS -from fontTools.misc.textTools import Tag, bytesjoin -from .DefaultTable import DefaultTable -from enum import IntEnum -import sys -import array -import struct -import logging -from functools import lru_cache -from typing import Iterator, NamedTuple, Optional, Tuple - -log = logging.getLogger(__name__) - -have_uharfbuzz = False -try: - import uharfbuzz as hb - - # repack method added in uharfbuzz >= 0.23; if uharfbuzz *can* be - # imported but repack method is missing, behave as if uharfbuzz - # is not available (fallback to the slower Python implementation) - have_uharfbuzz = callable(getattr(hb, "repack", None)) -except ImportError: - pass - -USE_HARFBUZZ_REPACKER = OPTIONS[f"{__name__}:USE_HARFBUZZ_REPACKER"] - - -class OverflowErrorRecord(object): - def __init__(self, overflowTuple): - self.tableType = overflowTuple[0] - self.LookupListIndex = overflowTuple[1] - self.SubTableIndex = overflowTuple[2] - self.itemName = overflowTuple[3] - self.itemIndex = overflowTuple[4] - - def __repr__(self): - return str( - ( - self.tableType, - "LookupIndex:", - self.LookupListIndex, - "SubTableIndex:", - self.SubTableIndex, - "ItemName:", - self.itemName, - "ItemIndex:", - self.itemIndex, - ) - ) - - -class OTLOffsetOverflowError(Exception): - def __init__(self, overflowErrorRecord): - self.value = overflowErrorRecord - - def __str__(self): - return repr(self.value) - - -class RepackerState(IntEnum): - # Repacking control flow is implemnted using a state machine. The state machine table: - # - # State | Packing Success | Packing Failed | Exception Raised | - # ------------+-----------------+----------------+------------------+ - # PURE_FT | Return result | PURE_FT | Return failure | - # HB_FT | Return result | HB_FT | FT_FALLBACK | - # FT_FALLBACK | HB_FT | FT_FALLBACK | Return failure | - - # Pack only with fontTools, don't allow sharing between extensions. - PURE_FT = 1 - - # Attempt to pack with harfbuzz (allowing sharing between extensions) - # use fontTools to attempt overflow resolution. - HB_FT = 2 - - # Fallback if HB/FT packing gets stuck. Pack only with fontTools, don't allow sharing between - # extensions. - FT_FALLBACK = 3 - - -class BaseTTXConverter(DefaultTable): - - """Generic base class for TTX table converters. It functions as an - adapter between the TTX (ttLib actually) table model and the model - we use for OpenType tables, which is necessarily subtly different. - """ - - def decompile(self, data, font): - """Create an object from the binary data. Called automatically on access.""" - from . 
import otTables - - reader = OTTableReader(data, tableTag=self.tableTag) - tableClass = getattr(otTables, self.tableTag) - self.table = tableClass() - self.table.decompile(reader, font) - - def compile(self, font): - """Compiles the table into binary. Called automatically on save.""" - - # General outline: - # Create a top-level OTTableWriter for the GPOS/GSUB table. - # Call the compile method for the the table - # for each 'converter' record in the table converter list - # call converter's write method for each item in the value. - # - For simple items, the write method adds a string to the - # writer's self.items list. - # - For Struct/Table/Subtable items, it add first adds new writer to the - # to the writer's self.items, then calls the item's compile method. - # This creates a tree of writers, rooted at the GUSB/GPOS writer, with - # each writer representing a table, and the writer.items list containing - # the child data strings and writers. - # call the getAllData method - # call _doneWriting, which removes duplicates - # call _gatherTables. This traverses the tables, adding unique occurences to a flat list of tables - # Traverse the flat list of tables, calling getDataLength on each to update their position - # Traverse the flat list of tables again, calling getData each get the data in the table, now that - # pos's and offset are known. - - # If a lookup subtable overflows an offset, we have to start all over. - overflowRecord = None - # this is 3-state option: default (None) means automatically use hb.repack or - # silently fall back if it fails; True, use it and raise error if not possible - # or it errors out; False, don't use it, even if you can. - use_hb_repack = font.cfg[USE_HARFBUZZ_REPACKER] - if self.tableTag in ("GSUB", "GPOS"): - if use_hb_repack is False: - log.debug( - "hb.repack disabled, compiling '%s' with pure-python serializer", - self.tableTag, - ) - elif not have_uharfbuzz: - if use_hb_repack is True: - raise ImportError("No module named 'uharfbuzz'") - else: - assert use_hb_repack is None - log.debug( - "uharfbuzz not found, compiling '%s' with pure-python serializer", - self.tableTag, - ) - - if ( - use_hb_repack in (None, True) - and have_uharfbuzz - and self.tableTag in ("GSUB", "GPOS") - ): - state = RepackerState.HB_FT - else: - state = RepackerState.PURE_FT - - hb_first_error_logged = False - lastOverflowRecord = None - while True: - try: - writer = OTTableWriter(tableTag=self.tableTag) - self.table.compile(writer, font) - if state == RepackerState.HB_FT: - return self.tryPackingHarfbuzz(writer, hb_first_error_logged) - elif state == RepackerState.PURE_FT: - return self.tryPackingFontTools(writer) - elif state == RepackerState.FT_FALLBACK: - # Run packing with FontTools only, but don't return the result as it will - # not be optimally packed. Once a successful packing has been found, state is - # changed back to harfbuzz packing to produce the final, optimal, packing. - self.tryPackingFontTools(writer) - log.debug( - "Re-enabling sharing between extensions and switching back to " - "harfbuzz+fontTools packing." - ) - state = RepackerState.HB_FT - - except OTLOffsetOverflowError as e: - hb_first_error_logged = True - ok = self.tryResolveOverflow(font, e, lastOverflowRecord) - lastOverflowRecord = e.value - - if ok: - continue - - if state is RepackerState.HB_FT: - log.debug( - "Harfbuzz packing out of resolutions, disabling sharing between extensions and " - "switching to fontTools only packing." 
- ) - state = RepackerState.FT_FALLBACK - else: - raise - - def tryPackingHarfbuzz(self, writer, hb_first_error_logged): - try: - log.debug("serializing '%s' with hb.repack", self.tableTag) - return writer.getAllDataUsingHarfbuzz(self.tableTag) - except (ValueError, MemoryError, hb.RepackerError) as e: - # Only log hb repacker errors the first time they occur in - # the offset-overflow resolution loop, they are just noisy. - # Maybe we can revisit this if/when uharfbuzz actually gives - # us more info as to why hb.repack failed... - if not hb_first_error_logged: - error_msg = f"{type(e).__name__}" - if str(e) != "": - error_msg += f": {e}" - log.warning( - "hb.repack failed to serialize '%s', attempting fonttools resolutions " - "; the error message was: %s", - self.tableTag, - error_msg, - ) - hb_first_error_logged = True - return writer.getAllData(remove_duplicate=False) - - def tryPackingFontTools(self, writer): - return writer.getAllData() - - def tryResolveOverflow(self, font, e, lastOverflowRecord): - ok = 0 - if lastOverflowRecord == e.value: - # Oh well... - return ok - - overflowRecord = e.value - log.info("Attempting to fix OTLOffsetOverflowError %s", e) - - if overflowRecord.itemName is None: - from .otTables import fixLookupOverFlows - - ok = fixLookupOverFlows(font, overflowRecord) - else: - from .otTables import fixSubTableOverFlows - - ok = fixSubTableOverFlows(font, overflowRecord) - - if ok: - return ok - - # Try upgrading lookup to Extension and hope - # that cross-lookup sharing not happening would - # fix overflow... - from .otTables import fixLookupOverFlows - - return fixLookupOverFlows(font, overflowRecord) - - def toXML(self, writer, font): - self.table.toXML2(writer, font) - - def fromXML(self, name, attrs, content, font): - from . import otTables - - if not hasattr(self, "table"): - tableClass = getattr(otTables, self.tableTag) - self.table = tableClass() - self.table.fromXML(name, attrs, content, font) - self.table.populateDefaults() - - def ensureDecompiled(self, recurse=True): - self.table.ensureDecompiled(recurse=recurse) - - -# https://github.com/fonttools/fonttools/pull/2285#issuecomment-834652928 -assert len(struct.pack("i", 0)) == 4 -assert array.array("i").itemsize == 4, "Oops, file a bug against fonttools." 
- - -class OTTableReader(object): - - """Helper class to retrieve data from an OpenType table.""" - - __slots__ = ("data", "offset", "pos", "localState", "tableTag") - - def __init__(self, data, localState=None, offset=0, tableTag=None): - self.data = data - self.offset = offset - self.pos = offset - self.localState = localState - self.tableTag = tableTag - - def advance(self, count): - self.pos += count - - def seek(self, pos): - self.pos = pos - - def copy(self): - other = self.__class__(self.data, self.localState, self.offset, self.tableTag) - other.pos = self.pos - return other - - def getSubReader(self, offset): - offset = self.offset + offset - return self.__class__(self.data, self.localState, offset, self.tableTag) - - def readValue(self, typecode, staticSize): - pos = self.pos - newpos = pos + staticSize - (value,) = struct.unpack(f">{typecode}", self.data[pos:newpos]) - self.pos = newpos - return value - - def readArray(self, typecode, staticSize, count): - pos = self.pos - newpos = pos + count * staticSize - value = array.array(typecode, self.data[pos:newpos]) - if sys.byteorder != "big": - value.byteswap() - self.pos = newpos - return value.tolist() - - def readInt8(self): - return self.readValue("b", staticSize=1) - - def readInt8Array(self, count): - return self.readArray("b", staticSize=1, count=count) - - def readShort(self): - return self.readValue("h", staticSize=2) - - def readShortArray(self, count): - return self.readArray("h", staticSize=2, count=count) - - def readLong(self): - return self.readValue("i", staticSize=4) - - def readLongArray(self, count): - return self.readArray("i", staticSize=4, count=count) - - def readUInt8(self): - return self.readValue("B", staticSize=1) - - def readUInt8Array(self, count): - return self.readArray("B", staticSize=1, count=count) - - def readUShort(self): - return self.readValue("H", staticSize=2) - - def readUShortArray(self, count): - return self.readArray("H", staticSize=2, count=count) - - def readULong(self): - return self.readValue("I", staticSize=4) - - def readULongArray(self, count): - return self.readArray("I", staticSize=4, count=count) - - def readUInt24(self): - pos = self.pos - newpos = pos + 3 - (value,) = struct.unpack(">l", b"\0" + self.data[pos:newpos]) - self.pos = newpos - return value - - def readUInt24Array(self, count): - return [self.readUInt24() for _ in range(count)] - - def readTag(self): - pos = self.pos - newpos = pos + 4 - value = Tag(self.data[pos:newpos]) - assert len(value) == 4, value - self.pos = newpos - return value - - def readData(self, count): - pos = self.pos - newpos = pos + count - value = self.data[pos:newpos] - self.pos = newpos - return value - - def __setitem__(self, name, value): - state = self.localState.copy() if self.localState else dict() - state[name] = value - self.localState = state - - def __getitem__(self, name): - return self.localState and self.localState[name] - - def __contains__(self, name): - return self.localState and name in self.localState - - -class OTTableWriter(object): - - """Helper class to gather and assemble data for OpenType tables.""" - - def __init__(self, localState=None, tableTag=None, offsetSize=2): - self.items = [] - self.pos = None - self.localState = localState - self.tableTag = tableTag - self.offsetSize = offsetSize - self.parent = None - - # DEPRECATED: 'longOffset' is kept as a property for backward compat with old code. - # You should use 'offsetSize' instead (2, 3 or 4 bytes). 
- @property - def longOffset(self): - return self.offsetSize == 4 - - @longOffset.setter - def longOffset(self, value): - self.offsetSize = 4 if value else 2 - - def __setitem__(self, name, value): - state = self.localState.copy() if self.localState else dict() - state[name] = value - self.localState = state - - def __getitem__(self, name): - return self.localState[name] - - def __delitem__(self, name): - del self.localState[name] - - # assembler interface - - def getDataLength(self): - """Return the length of this table in bytes, without subtables.""" - l = 0 - for item in self.items: - if hasattr(item, "getCountData"): - l += item.size - elif hasattr(item, "getData"): - l += item.offsetSize - else: - l = l + len(item) - return l - - def getData(self): - """Assemble the data for this writer/table, without subtables.""" - items = list(self.items) # make a shallow copy - pos = self.pos - numItems = len(items) - for i in range(numItems): - item = items[i] - - if hasattr(item, "getData"): - if item.offsetSize == 4: - items[i] = packULong(item.pos - pos) - elif item.offsetSize == 2: - try: - items[i] = packUShort(item.pos - pos) - except struct.error: - # provide data to fix overflow problem. - overflowErrorRecord = self.getOverflowErrorRecord(item) - - raise OTLOffsetOverflowError(overflowErrorRecord) - elif item.offsetSize == 3: - items[i] = packUInt24(item.pos - pos) - else: - raise ValueError(item.offsetSize) - - return bytesjoin(items) - - def getDataForHarfbuzz(self): - """Assemble the data for this writer/table with all offset field set to 0""" - items = list(self.items) - packFuncs = {2: packUShort, 3: packUInt24, 4: packULong} - for i, item in enumerate(items): - if hasattr(item, "getData"): - # Offset value is not needed in harfbuzz repacker, so setting offset to 0 to avoid overflow here - if item.offsetSize in packFuncs: - items[i] = packFuncs[item.offsetSize](0) - else: - raise ValueError(item.offsetSize) - - return bytesjoin(items) - - def __hash__(self): - # only works after self._doneWriting() has been called - return hash(self.items) - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.offsetSize == other.offsetSize and self.items == other.items - - def _doneWriting(self, internedTables, shareExtension=False): - # Convert CountData references to data string items - # collapse duplicate table references to a unique entry - # "tables" are OTTableWriter objects. - - # For Extension Lookup types, we can - # eliminate duplicates only within the tree under the Extension Lookup, - # as offsets may exceed 64K even between Extension LookupTable subtables. - isExtension = hasattr(self, "Extension") - - # Certain versions of Uniscribe reject the font if the GSUB/GPOS top-level - # arrays (ScriptList, FeatureList, LookupList) point to the same, possibly - # empty, array. So, we don't share those. - # See: https://github.com/fonttools/fonttools/issues/518 - dontShare = hasattr(self, "DontShare") - - if isExtension and not shareExtension: - internedTables = {} - - items = self.items - for i in range(len(items)): - item = items[i] - if hasattr(item, "getCountData"): - items[i] = item.getCountData() - elif hasattr(item, "getData"): - item._doneWriting(internedTables, shareExtension=shareExtension) - # At this point, all subwriters are hashable based on their items. - # (See hash and comparison magic methods above.) 
So the ``setdefault`` - # call here will return the first writer object we've seen with - # equal content, or store it in the dictionary if it's not been - # seen yet. We therefore replace the subwriter object with an equivalent - # object, which deduplicates the tree. - if not dontShare: - items[i] = item = internedTables.setdefault(item, item) - self.items = tuple(items) - - def _gatherTables(self, tables, extTables, done): - # Convert table references in self.items tree to a flat - # list of tables in depth-first traversal order. - # "tables" are OTTableWriter objects. - # We do the traversal in reverse order at each level, in order to - # resolve duplicate references to be the last reference in the list of tables. - # For extension lookups, duplicate references can be merged only within the - # writer tree under the extension lookup. - - done[id(self)] = True - - numItems = len(self.items) - iRange = list(range(numItems)) - iRange.reverse() - - isExtension = hasattr(self, "Extension") - - selfTables = tables - - if isExtension: - assert ( - extTables is not None - ), "Program or XML editing error. Extension subtables cannot contain extensions subtables" - tables, extTables, done = extTables, None, {} - - # add Coverage table if it is sorted last. - sortCoverageLast = False - if hasattr(self, "sortCoverageLast"): - # Find coverage table - for i in range(numItems): - item = self.items[i] - if getattr(item, "name", None) == "Coverage": - sortCoverageLast = True - break - if id(item) not in done: - item._gatherTables(tables, extTables, done) - else: - # We're a new parent of item - pass - - for i in iRange: - item = self.items[i] - if not hasattr(item, "getData"): - continue - - if ( - sortCoverageLast - and (i == 1) - and getattr(item, "name", None) == "Coverage" - ): - # we've already 'gathered' it above - continue - - if id(item) not in done: - item._gatherTables(tables, extTables, done) - else: - # Item is already written out by other parent - pass - - selfTables.append(self) - - def _gatherGraphForHarfbuzz(self, tables, obj_list, done, objidx, virtual_edges): - real_links = [] - virtual_links = [] - item_idx = objidx - - # Merge virtual_links from parent - for idx in virtual_edges: - virtual_links.append((0, 0, idx)) - - sortCoverageLast = False - coverage_idx = 0 - if hasattr(self, "sortCoverageLast"): - # Find coverage table - for i, item in enumerate(self.items): - if getattr(item, "name", None) == "Coverage": - sortCoverageLast = True - if id(item) not in done: - coverage_idx = item_idx = item._gatherGraphForHarfbuzz( - tables, obj_list, done, item_idx, virtual_edges - ) - else: - coverage_idx = done[id(item)] - virtual_edges.append(coverage_idx) - break - - child_idx = 0 - offset_pos = 0 - for i, item in enumerate(self.items): - if hasattr(item, "getData"): - pos = offset_pos - elif hasattr(item, "getCountData"): - offset_pos += item.size - continue - else: - offset_pos = offset_pos + len(item) - continue - - if id(item) not in done: - child_idx = item_idx = item._gatherGraphForHarfbuzz( - tables, obj_list, done, item_idx, virtual_edges - ) - else: - child_idx = done[id(item)] - - real_edge = (pos, item.offsetSize, child_idx) - real_links.append(real_edge) - offset_pos += item.offsetSize - - tables.append(self) - obj_list.append((real_links, virtual_links)) - item_idx += 1 - done[id(self)] = item_idx - if sortCoverageLast: - virtual_edges.pop() - - return item_idx - - def getAllDataUsingHarfbuzz(self, tableTag): - """The Whole table is represented as a Graph. 
- Assemble graph data and call Harfbuzz repacker to pack the table. - Harfbuzz repacker is faster and retain as much sub-table sharing as possible, see also: - https://github.com/harfbuzz/harfbuzz/blob/main/docs/repacker.md - The input format for hb.repack() method is explained here: - https://github.com/harfbuzz/uharfbuzz/blob/main/src/uharfbuzz/_harfbuzz.pyx#L1149 - """ - internedTables = {} - self._doneWriting(internedTables, shareExtension=True) - tables = [] - obj_list = [] - done = {} - objidx = 0 - virtual_edges = [] - self._gatherGraphForHarfbuzz(tables, obj_list, done, objidx, virtual_edges) - # Gather all data in two passes: the absolute positions of all - # subtable are needed before the actual data can be assembled. - pos = 0 - for table in tables: - table.pos = pos - pos = pos + table.getDataLength() - - data = [] - for table in tables: - tableData = table.getDataForHarfbuzz() - data.append(tableData) - - if hasattr(hb, "repack_with_tag"): - return hb.repack_with_tag(str(tableTag), data, obj_list) - else: - return hb.repack(data, obj_list) - - def getAllData(self, remove_duplicate=True): - """Assemble all data, including all subtables.""" - if remove_duplicate: - internedTables = {} - self._doneWriting(internedTables) - tables = [] - extTables = [] - done = {} - self._gatherTables(tables, extTables, done) - tables.reverse() - extTables.reverse() - # Gather all data in two passes: the absolute positions of all - # subtable are needed before the actual data can be assembled. - pos = 0 - for table in tables: - table.pos = pos - pos = pos + table.getDataLength() - - for table in extTables: - table.pos = pos - pos = pos + table.getDataLength() - - data = [] - for table in tables: - tableData = table.getData() - data.append(tableData) - - for table in extTables: - tableData = table.getData() - data.append(tableData) - - return bytesjoin(data) - - # interface for gathering data, as used by table.compile() - - def getSubWriter(self, offsetSize=2): - subwriter = self.__class__( - self.localState, self.tableTag, offsetSize=offsetSize - ) - subwriter.parent = ( - self # because some subtables have idential values, we discard - ) - # the duplicates under the getAllData method. Hence some - # subtable writers can have more than one parent writer. - # But we just care about first one right now. 
- return subwriter - - def writeValue(self, typecode, value): - self.items.append(struct.pack(f">{typecode}", value)) - - def writeArray(self, typecode, values): - a = array.array(typecode, values) - if sys.byteorder != "big": - a.byteswap() - self.items.append(a.tobytes()) - - def writeInt8(self, value): - assert -128 <= value < 128, value - self.items.append(struct.pack(">b", value)) - - def writeInt8Array(self, values): - self.writeArray("b", values) - - def writeShort(self, value): - assert -32768 <= value < 32768, value - self.items.append(struct.pack(">h", value)) - - def writeShortArray(self, values): - self.writeArray("h", values) - - def writeLong(self, value): - self.items.append(struct.pack(">i", value)) - - def writeLongArray(self, values): - self.writeArray("i", values) - - def writeUInt8(self, value): - assert 0 <= value < 256, value - self.items.append(struct.pack(">B", value)) - - def writeUInt8Array(self, values): - self.writeArray("B", values) - - def writeUShort(self, value): - assert 0 <= value < 0x10000, value - self.items.append(struct.pack(">H", value)) - - def writeUShortArray(self, values): - self.writeArray("H", values) - - def writeULong(self, value): - self.items.append(struct.pack(">I", value)) - - def writeULongArray(self, values): - self.writeArray("I", values) - - def writeUInt24(self, value): - assert 0 <= value < 0x1000000, value - b = struct.pack(">L", value) - self.items.append(b[1:]) - - def writeUInt24Array(self, values): - for value in values: - self.writeUInt24(value) - - def writeTag(self, tag): - tag = Tag(tag).tobytes() - assert len(tag) == 4, tag - self.items.append(tag) - - def writeSubTable(self, subWriter): - self.items.append(subWriter) - - def writeCountReference(self, table, name, size=2, value=None): - ref = CountReference(table, name, size=size, value=value) - self.items.append(ref) - return ref - - def writeStruct(self, format, values): - data = struct.pack(*(format,) + values) - self.items.append(data) - - def writeData(self, data): - self.items.append(data) - - def getOverflowErrorRecord(self, item): - LookupListIndex = SubTableIndex = itemName = itemIndex = None - if self.name == "LookupList": - LookupListIndex = item.repeatIndex - elif self.name == "Lookup": - LookupListIndex = self.repeatIndex - SubTableIndex = item.repeatIndex - else: - itemName = getattr(item, "name", "") - if hasattr(item, "repeatIndex"): - itemIndex = item.repeatIndex - if self.name == "SubTable": - LookupListIndex = self.parent.repeatIndex - SubTableIndex = self.repeatIndex - elif self.name == "ExtSubTable": - LookupListIndex = self.parent.parent.repeatIndex - SubTableIndex = self.parent.repeatIndex - else: # who knows how far below the SubTable level we are! Climb back up to the nearest subtable. 
- itemName = ".".join([self.name, itemName]) - p1 = self.parent - while p1 and p1.name not in ["ExtSubTable", "SubTable"]: - itemName = ".".join([p1.name, itemName]) - p1 = p1.parent - if p1: - if p1.name == "ExtSubTable": - LookupListIndex = p1.parent.parent.repeatIndex - SubTableIndex = p1.parent.repeatIndex - else: - LookupListIndex = p1.parent.repeatIndex - SubTableIndex = p1.repeatIndex - - return OverflowErrorRecord( - (self.tableTag, LookupListIndex, SubTableIndex, itemName, itemIndex) - ) - - -class CountReference(object): - """A reference to a Count value, not a count of references.""" - - def __init__(self, table, name, size=None, value=None): - self.table = table - self.name = name - self.size = size - if value is not None: - self.setValue(value) - - def setValue(self, value): - table = self.table - name = self.name - if table[name] is None: - table[name] = value - else: - assert table[name] == value, (name, table[name], value) - - def getValue(self): - return self.table[self.name] - - def getCountData(self): - v = self.table[self.name] - if v is None: - v = 0 - return {1: packUInt8, 2: packUShort, 4: packULong}[self.size](v) - - -def packUInt8(value): - return struct.pack(">B", value) - - -def packUShort(value): - return struct.pack(">H", value) - - -def packULong(value): - assert 0 <= value < 0x100000000, value - return struct.pack(">I", value) - - -def packUInt24(value): - assert 0 <= value < 0x1000000, value - return struct.pack(">I", value)[1:] - - -class BaseTable(object): - - """Generic base class for all OpenType (sub)tables.""" - - def __getattr__(self, attr): - reader = self.__dict__.get("reader") - if reader: - del self.reader - font = self.font - del self.font - self.decompile(reader, font) - return getattr(self, attr) - - raise AttributeError(attr) - - def ensureDecompiled(self, recurse=False): - reader = self.__dict__.get("reader") - if reader: - del self.reader - font = self.font - del self.font - self.decompile(reader, font) - if recurse: - for subtable in self.iterSubTables(): - subtable.value.ensureDecompiled(recurse) - - def __getstate__(self): - # before copying/pickling 'lazy' objects, make a shallow copy of OTTableReader - # https://github.com/fonttools/fonttools/issues/2965 - if "reader" in self.__dict__: - state = self.__dict__.copy() - state["reader"] = self.__dict__["reader"].copy() - return state - return self.__dict__ - - @classmethod - def getRecordSize(cls, reader): - totalSize = 0 - for conv in cls.converters: - size = conv.getRecordSize(reader) - if size is NotImplemented: - return NotImplemented - countValue = 1 - if conv.repeat: - if conv.repeat in reader: - countValue = reader[conv.repeat] + conv.aux - else: - return NotImplemented - totalSize += size * countValue - return totalSize - - def getConverters(self): - return self.converters - - def getConverterByName(self, name): - return self.convertersByName[name] - - def populateDefaults(self, propagator=None): - for conv in self.getConverters(): - if conv.repeat: - if not hasattr(self, conv.name): - setattr(self, conv.name, []) - countValue = len(getattr(self, conv.name)) - conv.aux - try: - count_conv = self.getConverterByName(conv.repeat) - setattr(self, conv.repeat, countValue) - except KeyError: - # conv.repeat is a propagated count - if propagator and conv.repeat in propagator: - propagator[conv.repeat].setValue(countValue) - else: - if conv.aux and not eval(conv.aux, None, self.__dict__): - continue - if hasattr(self, conv.name): - continue # Warn if it should NOT be present?! 
- if hasattr(conv, "writeNullOffset"): - setattr(self, conv.name, None) # Warn? - # elif not conv.isCount: - # # Warn? - # pass - if hasattr(conv, "DEFAULT"): - # OptionalValue converters (e.g. VarIndex) - setattr(self, conv.name, conv.DEFAULT) - - def decompile(self, reader, font): - self.readFormat(reader) - table = {} - self.__rawTable = table # for debugging - for conv in self.getConverters(): - if conv.name == "SubTable": - conv = conv.getConverter(reader.tableTag, table["LookupType"]) - if conv.name == "ExtSubTable": - conv = conv.getConverter(reader.tableTag, table["ExtensionLookupType"]) - if conv.name == "FeatureParams": - conv = conv.getConverter(reader["FeatureTag"]) - if conv.name == "SubStruct": - conv = conv.getConverter(reader.tableTag, table["MorphType"]) - try: - if conv.repeat: - if isinstance(conv.repeat, int): - countValue = conv.repeat - elif conv.repeat in table: - countValue = table[conv.repeat] - else: - # conv.repeat is a propagated count - countValue = reader[conv.repeat] - countValue += conv.aux - table[conv.name] = conv.readArray(reader, font, table, countValue) - else: - if conv.aux and not eval(conv.aux, None, table): - continue - table[conv.name] = conv.read(reader, font, table) - if conv.isPropagated: - reader[conv.name] = table[conv.name] - except Exception as e: - name = conv.name - e.args = e.args + (name,) - raise - - if hasattr(self, "postRead"): - self.postRead(table, font) - else: - self.__dict__.update(table) - - del self.__rawTable # succeeded, get rid of debugging info - - def compile(self, writer, font): - self.ensureDecompiled() - # TODO Following hack to be removed by rewriting how FormatSwitching tables - # are handled. - # https://github.com/fonttools/fonttools/pull/2238#issuecomment-805192631 - if hasattr(self, "preWrite"): - deleteFormat = not hasattr(self, "Format") - table = self.preWrite(font) - deleteFormat = deleteFormat and hasattr(self, "Format") - else: - deleteFormat = False - table = self.__dict__.copy() - - # some count references may have been initialized in a custom preWrite; we set - # these in the writer's state beforehand (instead of sequentially) so they will - # be propagated to all nested subtables even if the count appears in the current - # table only *after* the offset to the subtable that it is counting. - for conv in self.getConverters(): - if conv.isCount and conv.isPropagated: - value = table.get(conv.name) - if isinstance(value, CountReference): - writer[conv.name] = value - - if hasattr(self, "sortCoverageLast"): - writer.sortCoverageLast = 1 - - if hasattr(self, "DontShare"): - writer.DontShare = True - - if hasattr(self.__class__, "LookupType"): - writer["LookupType"].setValue(self.__class__.LookupType) - - self.writeFormat(writer) - for conv in self.getConverters(): - value = table.get( - conv.name - ) # TODO Handle defaults instead of defaulting to None! - if conv.repeat: - if value is None: - value = [] - countValue = len(value) - conv.aux - if isinstance(conv.repeat, int): - assert len(value) == conv.repeat, "expected %d values, got %d" % ( - conv.repeat, - len(value), - ) - elif conv.repeat in table: - CountReference(table, conv.repeat, value=countValue) - else: - # conv.repeat is a propagated count - writer[conv.repeat].setValue(countValue) - try: - conv.writeArray(writer, font, table, value) - except Exception as e: - e.args = e.args + (conv.name + "[]",) - raise - elif conv.isCount: - # Special-case Count values. - # Assumption: a Count field will *always* precede - # the actual array(s). 
- # We need a default value, as it may be set later by a nested - # table. We will later store it here. - # We add a reference: by the time the data is assembled - # the Count value will be filled in. - # We ignore the current count value since it will be recomputed, - # unless it's a CountReference that was already initialized in a custom preWrite. - if isinstance(value, CountReference): - ref = value - ref.size = conv.staticSize - writer.writeData(ref) - table[conv.name] = ref.getValue() - else: - ref = writer.writeCountReference(table, conv.name, conv.staticSize) - table[conv.name] = None - if conv.isPropagated: - writer[conv.name] = ref - elif conv.isLookupType: - # We make sure that subtables have the same lookup type, - # and that the type is the same as the one set on the - # Lookup object, if any is set. - if conv.name not in table: - table[conv.name] = None - ref = writer.writeCountReference( - table, conv.name, conv.staticSize, table[conv.name] - ) - writer["LookupType"] = ref - else: - if conv.aux and not eval(conv.aux, None, table): - continue - try: - conv.write(writer, font, table, value) - except Exception as e: - name = value.__class__.__name__ if value is not None else conv.name - e.args = e.args + (name,) - raise - if conv.isPropagated: - writer[conv.name] = value - - if deleteFormat: - del self.Format - - def readFormat(self, reader): - pass - - def writeFormat(self, writer): - pass - - def toXML(self, xmlWriter, font, attrs=None, name=None): - tableName = name if name else self.__class__.__name__ - if attrs is None: - attrs = [] - if hasattr(self, "Format"): - attrs = attrs + [("Format", self.Format)] - xmlWriter.begintag(tableName, attrs) - xmlWriter.newline() - self.toXML2(xmlWriter, font) - xmlWriter.endtag(tableName) - xmlWriter.newline() - - def toXML2(self, xmlWriter, font): - # Simpler variant of toXML, *only* for the top level tables (like GPOS, GSUB). - # This is because in TTX our parent writes our main tag, and in otBase.py we - # do it ourselves. I think I'm getting schizophrenic... - for conv in self.getConverters(): - if conv.repeat: - value = getattr(self, conv.name, []) - for i in range(len(value)): - item = value[i] - conv.xmlWrite(xmlWriter, font, item, conv.name, [("index", i)]) - else: - if conv.aux and not eval(conv.aux, None, vars(self)): - continue - value = getattr( - self, conv.name, None - ) # TODO Handle defaults instead of defaulting to None! - conv.xmlWrite(xmlWriter, font, value, conv.name, []) - - def fromXML(self, name, attrs, content, font): - try: - conv = self.getConverterByName(name) - except KeyError: - raise # XXX on KeyError, raise nice error - value = conv.xmlRead(attrs, content, font) - if conv.repeat: - seq = getattr(self, conv.name, None) - if seq is None: - seq = [] - setattr(self, conv.name, seq) - seq.append(value) - else: - setattr(self, conv.name, value) - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - - self.ensureDecompiled() - other.ensureDecompiled() - - return self.__dict__ == other.__dict__ - - class SubTableEntry(NamedTuple): - """See BaseTable.iterSubTables()""" - - name: str - value: "BaseTable" - index: Optional[int] = None # index into given array, None for single values - - def iterSubTables(self) -> Iterator[SubTableEntry]: - """Yield (name, value, index) namedtuples for all subtables of current table. 
- - A sub-table is an instance of BaseTable (or subclass thereof) that is a child - of self, the current parent table. - The tuples also contain the attribute name (str) of the of parent table to get - a subtable, and optionally, for lists of subtables (i.e. attributes associated - with a converter that has a 'repeat'), an index into the list containing the - given subtable value. - This method can be useful to traverse trees of otTables. - """ - for conv in self.getConverters(): - name = conv.name - value = getattr(self, name, None) - if value is None: - continue - if isinstance(value, BaseTable): - yield self.SubTableEntry(name, value) - elif isinstance(value, list): - yield from ( - self.SubTableEntry(name, v, index=i) - for i, v in enumerate(value) - if isinstance(v, BaseTable) - ) - - # instance (not @class)method for consistency with FormatSwitchingBaseTable - def getVariableAttrs(self): - return getVariableAttrs(self.__class__) - - -class FormatSwitchingBaseTable(BaseTable): - - """Minor specialization of BaseTable, for tables that have multiple - formats, eg. CoverageFormat1 vs. CoverageFormat2.""" - - @classmethod - def getRecordSize(cls, reader): - return NotImplemented - - def getConverters(self): - try: - fmt = self.Format - except AttributeError: - # some FormatSwitchingBaseTables (e.g. Coverage) no longer have 'Format' - # attribute after fully decompiled, only gain one in preWrite before being - # recompiled. In the decompiled state, these hand-coded classes defined in - # otTables.py lose their format-specific nature and gain more high-level - # attributes that are not tied to converters. - return [] - return self.converters.get(self.Format, []) - - def getConverterByName(self, name): - return self.convertersByName[self.Format][name] - - def readFormat(self, reader): - self.Format = reader.readUShort() - - def writeFormat(self, writer): - writer.writeUShort(self.Format) - - def toXML(self, xmlWriter, font, attrs=None, name=None): - BaseTable.toXML(self, xmlWriter, font, attrs, name) - - def getVariableAttrs(self): - return getVariableAttrs(self.__class__, self.Format) - - -class UInt8FormatSwitchingBaseTable(FormatSwitchingBaseTable): - def readFormat(self, reader): - self.Format = reader.readUInt8() - - def writeFormat(self, writer): - writer.writeUInt8(self.Format) - - -formatSwitchingBaseTables = { - "uint16": FormatSwitchingBaseTable, - "uint8": UInt8FormatSwitchingBaseTable, -} - - -def getFormatSwitchingBaseTableClass(formatType): - try: - return formatSwitchingBaseTables[formatType] - except KeyError: - raise TypeError(f"Unsupported format type: {formatType!r}") - - -# memoize since these are parsed from otData.py, thus stay constant -@lru_cache() -def getVariableAttrs(cls: BaseTable, fmt: Optional[int] = None) -> Tuple[str]: - """Return sequence of variable table field names (can be empty). - - Attributes are deemed "variable" when their otData.py's description contain - 'VarIndexBase + {offset}', e.g. COLRv1 PaintVar* tables. 
- """ - if not issubclass(cls, BaseTable): - raise TypeError(cls) - if issubclass(cls, FormatSwitchingBaseTable): - if fmt is None: - raise TypeError(f"'fmt' is required for format-switching {cls.__name__}") - converters = cls.convertersByName[fmt] - else: - converters = cls.convertersByName - # assume if no 'VarIndexBase' field is present, table has no variable fields - if "VarIndexBase" not in converters: - return () - varAttrs = {} - for name, conv in converters.items(): - offset = conv.getVarIndexOffset() - if offset is not None: - varAttrs[name] = offset - return tuple(sorted(varAttrs, key=varAttrs.__getitem__)) - - -# -# Support for ValueRecords -# -# This data type is so different from all other OpenType data types that -# it requires quite a bit of code for itself. It even has special support -# in OTTableReader and OTTableWriter... -# - -valueRecordFormat = [ - # Mask Name isDevice signed - (0x0001, "XPlacement", 0, 1), - (0x0002, "YPlacement", 0, 1), - (0x0004, "XAdvance", 0, 1), - (0x0008, "YAdvance", 0, 1), - (0x0010, "XPlaDevice", 1, 0), - (0x0020, "YPlaDevice", 1, 0), - (0x0040, "XAdvDevice", 1, 0), - (0x0080, "YAdvDevice", 1, 0), - # reserved: - (0x0100, "Reserved1", 0, 0), - (0x0200, "Reserved2", 0, 0), - (0x0400, "Reserved3", 0, 0), - (0x0800, "Reserved4", 0, 0), - (0x1000, "Reserved5", 0, 0), - (0x2000, "Reserved6", 0, 0), - (0x4000, "Reserved7", 0, 0), - (0x8000, "Reserved8", 0, 0), -] - - -def _buildDict(): - d = {} - for mask, name, isDevice, signed in valueRecordFormat: - d[name] = mask, isDevice, signed - return d - - -valueRecordFormatDict = _buildDict() - - -class ValueRecordFactory(object): - - """Given a format code, this object convert ValueRecords.""" - - def __init__(self, valueFormat): - format = [] - for mask, name, isDevice, signed in valueRecordFormat: - if valueFormat & mask: - format.append((name, isDevice, signed)) - self.format = format - - def __len__(self): - return len(self.format) - - def readValueRecord(self, reader, font): - format = self.format - if not format: - return None - valueRecord = ValueRecord() - for name, isDevice, signed in format: - if signed: - value = reader.readShort() - else: - value = reader.readUShort() - if isDevice: - if value: - from . 
import otTables - - subReader = reader.getSubReader(value) - value = getattr(otTables, name)() - value.decompile(subReader, font) - else: - value = None - setattr(valueRecord, name, value) - return valueRecord - - def writeValueRecord(self, writer, font, valueRecord): - for name, isDevice, signed in self.format: - value = getattr(valueRecord, name, 0) - if isDevice: - if value: - subWriter = writer.getSubWriter() - writer.writeSubTable(subWriter) - value.compile(subWriter, font) - else: - writer.writeUShort(0) - elif signed: - writer.writeShort(value) - else: - writer.writeUShort(value) - - -class ValueRecord(object): - - # see ValueRecordFactory - - def __init__(self, valueFormat=None, src=None): - if valueFormat is not None: - for mask, name, isDevice, signed in valueRecordFormat: - if valueFormat & mask: - setattr(self, name, None if isDevice else 0) - if src is not None: - for key, val in src.__dict__.items(): - if not hasattr(self, key): - continue - setattr(self, key, val) - elif src is not None: - self.__dict__ = src.__dict__.copy() - - def getFormat(self): - format = 0 - for name in self.__dict__.keys(): - format = format | valueRecordFormatDict[name][0] - return format - - def getEffectiveFormat(self): - format = 0 - for name, value in self.__dict__.items(): - if value: - format = format | valueRecordFormatDict[name][0] - return format - - def toXML(self, xmlWriter, font, valueName, attrs=None): - if attrs is None: - simpleItems = [] - else: - simpleItems = list(attrs) - for mask, name, isDevice, format in valueRecordFormat[:4]: # "simple" values - if hasattr(self, name): - simpleItems.append((name, getattr(self, name))) - deviceItems = [] - for mask, name, isDevice, format in valueRecordFormat[4:8]: # device records - if hasattr(self, name): - device = getattr(self, name) - if device is not None: - deviceItems.append((name, device)) - if deviceItems: - xmlWriter.begintag(valueName, simpleItems) - xmlWriter.newline() - for name, deviceRecord in deviceItems: - if deviceRecord is not None: - deviceRecord.toXML(xmlWriter, font, name=name) - xmlWriter.endtag(valueName) - xmlWriter.newline() - else: - xmlWriter.simpletag(valueName, simpleItems) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - from . 
import otTables - - for k, v in attrs.items(): - setattr(self, k, int(v)) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - value = getattr(otTables, name)() - for elem2 in content: - if not isinstance(elem2, tuple): - continue - name2, attrs2, content2 = elem2 - value.fromXML(name2, attrs2, content2, font) - setattr(self, name, value) - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-e7652dcb.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-e7652dcb.js deleted file mode 100644 index 5de6b061edfd37fdcde8cfad2bc6ed4f73fda477..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-e7652dcb.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as x,e as $,s as Q,f as p,g as f,h as I,j as E,n as q,k as z,m as G,o as K,t as oe,F as M,X as fe,y as ze,Y as ke,G as P,p as L,aj as De,af as ve,q as Oe,b as Re,r as ee,u as V,v as te,w as B,x as he,H as U,B as Xe,C as _e,a3 as qe,aB as Ae,E as y,N as me,P as re,am as He,a4 as Je,ak as k,V as je,ae as Se,Q as Ce,R as Te,O as ge,T as be,a9 as Le,ab as Ge,ac as We,ad as Ke}from"./index-9e76ffee.js";import{n as de}from"./ModifyUpload.svelte_svelte_type_style_lang-14b768c9.js";import{B as Ee}from"./Button-30a08c0b.js";import{B as Me}from"./BlockLabel-9545c6da.js";import{I as Qe}from"./IconButton-307018b3.js";import{E as Ye}from"./Empty-8e3485c0.js";import{u as Ze,S as pe}from"./ShareButton-40f28ee7.js";import{D as xe}from"./Download-e6704cf2.js";import{U as $e,W as et}from"./Image.svelte_svelte_type_style_lang-11edea9c.js";/* empty css */import{U as tt}from"./UploadText-426a6b47.js";import{U as lt}from"./Upload-1e84df2f.js";import{M as nt}from"./ModifyUpload-0461fcb6.js";function it(t){let e,n;return{c(){e=p("svg"),n=p("path"),f(n,"d","M8 3H5a2 2 0 0 0-2 2v3m18 0V5a2 2 0 0 0-2-2h-3m0 18h3a2 2 0 0 0 2-2v-3M3 16v3a2 2 0 0 0 2 2h3"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(l,i){I(l,e,i),E(e,n)},p:q,i:q,o:q,d(l){l&&z(e)}}}class at extends x{constructor(e){super(),$(this,e,null,it,Q,{})}}function st(t){let e,n,l;return{c(){e=p("svg"),n=p("rect"),l=p("rect"),f(n,"x","6"),f(n,"y","4"),f(n,"width","4"),f(n,"height","16"),f(l,"x","14"),f(l,"y","4"),f(l,"width","4"),f(l,"height","16"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(i,r){I(i,e,r),E(e,n),E(e,l)},p:q,i:q,o:q,d(i){i&&z(e)}}}class rt extends x{constructor(e){super(),$(this,e,null,st,Q,{})}}function ut(t){let e,n;return{c(){e=p("svg"),n=p("polygon"),f(n,"points","5 3 19 12 5 21 5 3"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 
24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(l,i){I(l,e,i),E(e,n)},p:q,i:q,o:q,d(l){l&&z(e)}}}class ot extends x{constructor(e){super(),$(this,e,null,ut,Q,{})}}function _t(t){let e,n,l;return{c(){e=p("svg"),n=p("polygon"),l=p("rect"),f(n,"points","23 7 16 12 23 17 23 7"),f(l,"x","1"),f(l,"y","5"),f(l,"width","15"),f(l,"height","14"),f(l,"rx","2"),f(l,"ry","2"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round"),f(e,"class","feather feather-video")},m(i,r){I(i,e,r),E(e,n),E(e,l)},p:q,i:q,o:q,d(i){i&&z(e)}}}let we=class extends x{constructor(e){super(),$(this,e,null,_t,Q,{})}};const ye=t=>{let e=["B","KB","MB","GB","PB"],n=0;for(;t>1024;)t/=1024,n++;let l=e[n];return t.toFixed(1)+" "+l},ft=()=>!0;function ct(t,{autoplay:e}){async function n(){e&&await t.play()}return t.addEventListener("loadeddata",n),{destroy(){t.removeEventListener("loadeddata",n)}}}const{isNaN:ht}=qe;function dt(t){let e,n;return e=new rt({}),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function mt(t){let e,n;return e=new ot({}),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function gt(t){let e,n;return e=new $e({}),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function bt(t){let e,n,l,i,r,a,u=!1,s,d=!0,o,_,m,b,T,C,S,R,v,D=ce(t[5])+"",A,H,O=ce(t[6])+"",X,N,F,w,Z,J,W,g,le,ne;function ie(){cancelAnimationFrame(s),n.paused||(s=Ae(ie),u=!0),t[16].call(n)}const ae=[gt,mt,dt],Y=[];function se(j,h){return j[5]===j[6]?0:j[7]?1:2}return C=se(t),S=Y[C]=ae[C](t),W=new at({}),{c(){e=G("div"),n=G("video"),l=G("track"),_=K(),m=G("div"),b=G("div"),T=G("span"),S.c(),R=K(),v=G("span"),A=oe(D),H=oe(" / "),X=oe(O),N=K(),F=G("progress"),Z=K(),J=G("div"),M(W.$$.fragment),f(l,"kind","captions"),fe(l.src,i=t[1])||f(l,"src",i),l.default=!0,fe(n.src,r=t[0])||f(n,"src",r),f(n,"preload","auto"),f(n,"data-testid",a=`${t[4]}-player`),f(n,"class","svelte-w5wajl"),t[6]===void 0&&ze(()=>t[17].call(n)),ke(n,"mirror",t[2]),f(T,"role","button"),f(T,"tabindex","0"),f(T,"class","icon svelte-w5wajl"),f(T,"aria-label","play-pause-replay-button"),f(v,"class","time svelte-w5wajl"),F.value=w=t[5]/t[6]||0,f(F,"class","svelte-w5wajl"),f(J,"role","button"),f(J,"tabindex","0"),f(J,"class","icon svelte-w5wajl"),f(J,"aria-label","full-screen"),f(b,"class","inner svelte-w5wajl"),f(m,"class","controls svelte-w5wajl"),f(e,"class","wrap 
svelte-w5wajl")},m(j,h){I(j,e,h),E(e,n),E(n,l),t[19](n),E(e,_),E(e,m),E(m,b),E(b,T),Y[C].m(T,null),E(b,R),E(b,v),E(v,A),E(v,H),E(v,X),E(b,N),E(b,F),E(b,Z),E(b,J),P(W,J,null),g=!0,le||(ne=[L(n,"click",t[10]),L(n,"play",t[14]),L(n,"pause",t[15]),L(n,"ended",t[12]),L(n,"timeupdate",ie),L(n,"durationchange",t[17]),L(n,"play",t[18]),L(n,"pause",t[18]),De(o=ct.call(null,n,{autoplay:t[3]})),L(T,"click",t[10]),L(T,"keydown",t[10]),L(F,"mousemove",t[9]),L(F,"touchmove",ve(t[9])),L(F,"click",Oe(ve(t[11]))),L(J,"click",t[13]),L(J,"keypress",t[13])],le=!0)},p(j,[h]){(!g||h&2&&!fe(l.src,i=j[1]))&&f(l,"src",i),(!g||h&1&&!fe(n.src,r=j[0]))&&f(n,"src",r),(!g||h&16&&a!==(a=`${j[4]}-player`))&&f(n,"data-testid",a),!u&&h&32&&!ht(j[5])&&(n.currentTime=j[5]),u=!1,h&128&&d!==(d=j[7])&&n[d?"pause":"play"](),o&&Re(o.update)&&h&8&&o.update.call(null,{autoplay:j[3]}),(!g||h&4)&&ke(n,"mirror",j[2]);let ue=C;C=se(j),C!==ue&&(ee(),V(Y[ue],1,1,()=>{Y[ue]=null}),te(),S=Y[C],S||(S=Y[C]=ae[C](j),S.c()),B(S,1),S.m(T,null)),(!g||h&32)&&D!==(D=ce(j[5])+"")&&he(A,D),(!g||h&64)&&O!==(O=ce(j[6])+"")&&he(X,O),(!g||h&96&&w!==(w=j[5]/j[6]||0))&&(F.value=w)},i(j){g||(B(S),B(W.$$.fragment,j),g=!0)},o(j){V(S),V(W.$$.fragment,j),g=!1},d(j){j&&z(e),t[19](null),Y[C].d(),U(W),le=!1,Xe(ne)}}}function ce(t){if(isNaN(t)||!isFinite(t))return"...";const e=Math.floor(t/60);let n=Math.floor(t%60);return n<10&&(n=`0${n}`),`${e}:${n}`}function wt(t,e,n){let{src:l}=e,{subtitle:i=null}=e,{mirror:r}=e,{autoplay:a}=e,{label:u="test"}=e;const s=_e();let d=0,o,_=!0,m;function b(N){if(!o)return;if(N.type==="click"){C(N);return}if(N.type!=="touchmove"&&!(N.buttons&1))return;const F=N.type==="touchmove"?N.touches[0].clientX:N.clientX,{left:w,right:Z}=N.currentTarget.getBoundingClientRect();n(5,d=o*(F-w)/(Z-w))}async function T(){document.fullscreenElement!=m&&(m.currentTime>0&&!m.paused&&!m.ended&&m.readyState>m.HAVE_CURRENT_DATA?m.pause():await m.play())}function C(N){const{left:F,right:w}=N.currentTarget.getBoundingClientRect();n(5,d=o*(N.clientX-F)/(w-F))}function S(){s("stop"),s("end")}function R(){m.requestFullscreen()}function v(N){y.call(this,t,N)}function D(N){y.call(this,t,N)}function A(){d=this.currentTime,n(5,d)}function H(){o=this.duration,n(6,o)}function O(){_=this.paused,n(7,_)}function X(N){me[N?"unshift":"push"](()=>{m=N,n(8,m)})}return t.$$set=N=>{"src"in N&&n(0,l=N.src),"subtitle"in N&&n(1,i=N.subtitle),"mirror"in N&&n(2,r=N.mirror),"autoplay"in N&&n(3,a=N.autoplay),"label"in N&&n(4,u=N.label)},[l,i,r,a,u,d,o,_,m,b,T,C,S,R,v,D,A,H,O,X]}class Pe extends x{constructor(e){super(),$(this,e,wt,bt,Q,{src:0,subtitle:1,mirror:2,autoplay:3,label:4})}}function kt(t){let e=t[0].data,n,l,i,r,a,u,s,d,o=Be(t);r=new Qe({props:{Icon:xe,label:"Download"}});let _=t[5]&&Ne(t);return{c(){o.c(),n=K(),l=G("div"),i=G("a"),M(r.$$.fragment),s=K(),_&&_.c(),f(i,"href",a=t[0].data),f(i,"target",window.__is_colab__?"_blank":null),f(i,"download",u=t[0].orig_name||t[0].name),f(l,"class","icon-buttons 
svelte-rvdo70"),f(l,"data-testid","download-div")},m(m,b){o.m(m,b),I(m,n,b),I(m,l,b),E(l,i),P(r,i,null),E(l,s),_&&_.m(l,null),d=!0},p(m,b){b&1&&Q(e,e=m[0].data)?(ee(),V(o,1,1,q),te(),o=Be(m),o.c(),B(o,1),o.m(n.parentNode,n)):o.p(m,b),(!d||b&1&&a!==(a=m[0].data))&&f(i,"href",a),(!d||b&1&&u!==(u=m[0].orig_name||m[0].name))&&f(i,"download",u),m[5]?_?(_.p(m,b),b&32&&B(_,1)):(_=Ne(m),_.c(),B(_,1),_.m(l,null)):_&&(ee(),V(_,1,1,()=>{_=null}),te())},i(m){d||(B(o),B(r.$$.fragment,m),B(_),d=!0)},o(m){V(o),V(r.$$.fragment,m),V(_),d=!1},d(m){m&&(z(n),z(l)),o.d(m),U(r),_&&_.d()}}}function vt(t){let e,n;return e=new Ye({props:{unpadded_box:!0,size:"large",$$slots:{default:[yt]},$$scope:{ctx:t}}}),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},p(l,i){const r={};i&32768&&(r.$$scope={dirty:i,ctx:l}),e.$set(r)},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function Be(t){let e,n;return e=new Pe({props:{src:t[0].data,subtitle:t[1]?.data,autoplay:t[4],mirror:!1,label:t[2]}}),e.$on("play",t[6]),e.$on("pause",t[7]),e.$on("ended",t[8]),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},p(l,i){const r={};i&1&&(r.src=l[0].data),i&2&&(r.subtitle=l[1]?.data),i&16&&(r.autoplay=l[4]),i&4&&(r.label=l[2]),e.$set(r)},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function Ne(t){let e,n;return e=new pe({props:{value:t[0],formatter:t[9]}}),e.$on("error",t[10]),e.$on("share",t[11]),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},p(l,i){const r={};i&1&&(r.value=l[0]),e.$set(r)},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function yt(t){let e,n;return e=new we({}),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function Bt(t){let e,n,l,i,r,a;e=new Me({props:{show_label:t[3],Icon:we,label:t[2]||"Video"}});const u=[vt,kt],s=[];function d(o,_){return o[0]===null?0:1}return l=d(t),i=s[l]=u[l](t),{c(){M(e.$$.fragment),n=K(),i.c(),r=re()},m(o,_){P(e,o,_),I(o,n,_),s[l].m(o,_),I(o,r,_),a=!0},p(o,[_]){const m={};_&8&&(m.show_label=o[3]),_&4&&(m.label=o[2]||"Video"),e.$set(m);let b=l;l=d(o),l===b?s[l].p(o,_):(ee(),V(s[b],1,1,()=>{s[b]=null}),te(),i=s[l],i?i.p(o,_):(i=s[l]=u[l](o),i.c()),B(i,1),i.m(r.parentNode,r))},i(o){a||(B(e.$$.fragment,o),B(i),a=!0)},o(o){V(e.$$.fragment,o),V(i),a=!1},d(o){o&&(z(n),z(r)),U(e,o),s[l].d(o)}}}function Nt(t,e,n){let{value:l=null}=e,{subtitle:i=null}=e,{label:r=void 0}=e,{show_label:a=!0}=e,{autoplay:u}=e,{show_share_button:s=!0}=e,d=null,o=null;const _=_e();He(async()=>{l!==d&&i!==o&&o!==null&&(d=l,n(0,l=null),await Je(),n(0,l=d)),d=l,o=i});function m(v){y.call(this,t,v)}function b(v){y.call(this,t,v)}function T(v){y.call(this,t,v)}const C=async v=>v?await Ze(v.data,"url"):"";function S(v){y.call(this,t,v)}function R(v){y.call(this,t,v)}return t.$$set=v=>{"value"in v&&n(0,l=v.value),"subtitle"in v&&n(1,i=v.subtitle),"label"in v&&n(2,r=v.label),"show_label"in v&&n(3,a=v.show_label),"autoplay"in v&&n(4,u=v.autoplay),"show_share_button"in v&&n(5,s=v.show_share_button)},t.$$.update=()=>{t.$$.dirty&1&&l&&_("change",l)},[l,i,r,a,u,s,m,b,T,C,S,R]}class Vt extends x{constructor(e){super(),$(this,e,Nt,Bt,Q,{value:0,subtitle:1,label:2,show_label:3,autoplay:4,show_share_button:5})}}function jt(t){let e,n,l,i;const r=[t[7]];let a={};for(let u=0;u{"elem_id"in g&&n(0,l=g.elem_id),"elem_classes"in g&&n(1,i=g.elem_classes),"visible"in g&&n(2,r=g.visible),"value"in g&&n(3,a=g.value),"label"in g&&n(4,s=g.label),"source"in g&&n(5,d=g.source),"root"in 
g&&n(18,o=g.root),"root_url"in g&&n(19,_=g.root_url),"show_label"in g&&n(6,m=g.show_label),"loading_status"in g&&n(7,b=g.loading_status),"height"in g&&n(8,T=g.height),"width"in g&&n(9,C=g.width),"container"in g&&n(10,S=g.container),"scale"in g&&n(11,R=g.scale),"min_width"in g&&n(12,v=g.min_width),"mode"in g&&n(13,D=g.mode),"autoplay"in g&&n(14,A=g.autoplay),"show_share_button"in g&&n(15,H=g.show_share_button)},t.$$.update=()=>{t.$$.dirty&786440&&(a!=null?(n(16,O=de(a[0],o,_)),n(17,X=de(a[1],o,_))):(n(16,O=null),n(17,X=null))),t.$$.dirty&1048584&&JSON.stringify(a)!==JSON.stringify(u)&&(n(20,u=a),N("change"))},[l,i,r,a,s,d,m,b,T,C,S,R,v,D,A,H,O,X,o,_,u,F,w,Z,J,W]}class Tt extends x{constructor(e){super(),$(this,e,Ct,St,Q,{elem_id:0,elem_classes:1,visible:2,value:3,label:4,source:5,root:18,root_url:19,show_label:6,loading_status:7,height:8,width:9,container:10,scale:11,min_width:12,mode:13,autoplay:14,show_share_button:15})}get elem_id(){return this.$$.ctx[0]}set elem_id(e){this.$$set({elem_id:e}),k()}get elem_classes(){return this.$$.ctx[1]}set elem_classes(e){this.$$set({elem_classes:e}),k()}get visible(){return this.$$.ctx[2]}set visible(e){this.$$set({visible:e}),k()}get value(){return this.$$.ctx[3]}set value(e){this.$$set({value:e}),k()}get label(){return this.$$.ctx[4]}set label(e){this.$$set({label:e}),k()}get source(){return this.$$.ctx[5]}set source(e){this.$$set({source:e}),k()}get root(){return this.$$.ctx[18]}set root(e){this.$$set({root:e}),k()}get root_url(){return this.$$.ctx[19]}set root_url(e){this.$$set({root_url:e}),k()}get show_label(){return this.$$.ctx[6]}set show_label(e){this.$$set({show_label:e}),k()}get loading_status(){return this.$$.ctx[7]}set loading_status(e){this.$$set({loading_status:e}),k()}get height(){return this.$$.ctx[8]}set height(e){this.$$set({height:e}),k()}get width(){return this.$$.ctx[9]}set width(e){this.$$set({width:e}),k()}get container(){return this.$$.ctx[10]}set container(e){this.$$set({container:e}),k()}get scale(){return this.$$.ctx[11]}set scale(e){this.$$set({scale:e}),k()}get min_width(){return this.$$.ctx[12]}set min_width(e){this.$$set({min_width:e}),k()}get mode(){return this.$$.ctx[13]}set mode(e){this.$$set({mode:e}),k()}get autoplay(){return this.$$.ctx[14]}set autoplay(e){this.$$set({autoplay:e}),k()}get show_share_button(){return this.$$.ctx[15]}set show_share_button(e){this.$$set({show_share_button:e}),k()}}function Et(t){let e,n,l,i,r,a,u;e=new nt({}),e.$on("clear",t[11]);const s=[Ut,Pt],d=[];function o(_,m){return l==null&&(l=!!ft()),l?0:_[0].size?1:-1}return~(i=o(t))&&(r=d[i]=s[i](t)),{c(){M(e.$$.fragment),n=K(),r&&r.c(),a=re()},m(_,m){P(e,_,m),I(_,n,m),~i&&d[i].m(_,m),I(_,a,m),u=!0},p(_,m){let b=i;i=o(_),i===b?~i&&d[i].p(_,m):(r&&(ee(),V(d[b],1,1,()=>{d[b]=null}),te()),~i?(r=d[i],r?r.p(_,m):(r=d[i]=s[i](_),r.c()),B(r,1),r.m(a.parentNode,a)):r=null)},i(_){u||(B(e.$$.fragment,_),B(r),u=!0)},o(_){V(e.$$.fragment,_),V(r),u=!1},d(_){_&&(z(n),z(a)),U(e,_),~i&&d[i].d(_)}}}function Mt(t){let e,n,l,i;const r=[It,Ft],a=[];function u(s,d){return s[2]==="upload"?0:s[2]==="webcam"?1:-1}return~(e=u(t))&&(n=a[e]=r[e](t)),{c(){n&&n.c(),l=re()},m(s,d){~e&&a[e].m(s,d),I(s,l,d),i=!0},p(s,d){let o=e;e=u(s),e===o?~e&&a[e].p(s,d):(n&&(ee(),V(a[o],1,1,()=>{a[o]=null}),te()),~e?(n=a[e],n?n.p(s,d):(n=a[e]=r[e](s),n.c()),B(n,1),n.m(l.parentNode,l)):n=null)},i(s){i||(B(n),i=!0)},o(s){V(n),i=!1},d(s){s&&z(l),~e&&a[e].d(s)}}}function Pt(t){let 
e,n=t[0].name+"",l,i,r,a=ye(t[0].size)+"",u;return{c(){e=G("div"),l=oe(n),i=K(),r=G("div"),u=oe(a),f(e,"class","file-name svelte-a6ruol"),f(r,"class","file-size svelte-a6ruol")},m(s,d){I(s,e,d),E(e,l),I(s,i,d),I(s,r,d),E(r,u)},p(s,d){d&1&&n!==(n=s[0].name+"")&&he(l,n),d&1&&a!==(a=ye(s[0].size)+"")&&he(u,a)},i:q,o:q,d(s){s&&(z(e),z(i),z(r))}}}function Ut(t){let e=t[0]?.data,n,l,i=Ve(t);return{c(){i.c(),n=re()},m(r,a){i.m(r,a),I(r,n,a),l=!0},p(r,a){a&1&&Q(e,e=r[0]?.data)?(ee(),V(i,1,1,q),te(),i=Ve(r),i.c(),B(i,1),i.m(n.parentNode,n)):i.p(r,a)},i(r){l||(B(i),l=!0)},o(r){V(i),l=!1},d(r){r&&z(n),i.d(r)}}}function Ve(t){let e,n;return e=new Pe({props:{autoplay:t[7],src:t[0].data,subtitle:t[1]?.data,mirror:t[5]&&t[2]==="webcam",label:t[3]}}),e.$on("play",t[18]),e.$on("pause",t[19]),e.$on("stop",t[20]),e.$on("end",t[21]),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},p(l,i){const r={};i&128&&(r.autoplay=l[7]),i&1&&(r.src=l[0].data),i&2&&(r.subtitle=l[1]?.data),i&36&&(r.mirror=l[5]&&l[2]==="webcam"),i&8&&(r.label=l[3]),e.$set(r)},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function Ft(t){let e,n;return e=new et({props:{mirror_webcam:t[5],include_audio:t[6],mode:"video"}}),e.$on("error",t[14]),e.$on("capture",t[15]),e.$on("start_recording",t[16]),e.$on("stop_recording",t[17]),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},p(l,i){const r={};i&32&&(r.mirror_webcam=l[5]),i&64&&(r.include_audio=l[6]),e.$set(r)},i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function It(t){let e,n,l;function i(a){t[13](a)}let r={filetype:"video/x-m4v,video/*",$$slots:{default:[zt]},$$scope:{ctx:t}};return t[8]!==void 0&&(r.dragging=t[8]),e=new lt({props:r}),me.push(()=>ge(e,"dragging",i)),e.$on("load",t[10]),{c(){M(e.$$.fragment)},m(a,u){P(e,a,u),l=!0},p(a,u){const s={};u&4194304&&(s.$$scope={dirty:u,ctx:a}),!n&&u&256&&(n=!0,s.dragging=a[8],be(()=>n=!1)),e.$set(s)},i(a){l||(B(e.$$.fragment,a),l=!0)},o(a){V(e.$$.fragment,a),l=!1},d(a){U(e,a)}}}function zt(t){let e;const n=t[12].default,l=Le(n,t,t[22],null);return{c(){l&&l.c()},m(i,r){l&&l.m(i,r),e=!0},p(i,r){l&&l.p&&(!e||r&4194304)&&Ge(l,n,i,i[22],e?Ke(n,i[22],r,null):We(i[22]),null)},i(i){e||(B(l,i),e=!0)},o(i){V(l,i),e=!1},d(i){l&&l.d(i)}}}function Dt(t){let e,n,l,i,r,a;e=new Me({props:{show_label:t[4],Icon:we,label:t[3]||"Video"}});const u=[Mt,Et],s=[];function d(o,_){return o[0]===null?0:1}return l=d(t),i=s[l]=u[l](t),{c(){M(e.$$.fragment),n=K(),i.c(),r=re()},m(o,_){P(e,o,_),I(o,n,_),s[l].m(o,_),I(o,r,_),a=!0},p(o,[_]){const m={};_&16&&(m.show_label=o[4]),_&8&&(m.label=o[3]||"Video"),e.$set(m);let b=l;l=d(o),l===b?s[l].p(o,_):(ee(),V(s[b],1,1,()=>{s[b]=null}),te(),i=s[l],i?i.p(o,_):(i=s[l]=u[l](o),i.c()),B(i,1),i.m(r.parentNode,r))},i(o){a||(B(e.$$.fragment,o),B(i),a=!0)},o(o){V(e.$$.fragment,o),V(i),a=!1},d(o){o&&(z(n),z(r)),U(e,o),s[l].d(o)}}}function Ot(t,e,n){let{$$slots:l={},$$scope:i}=e,{value:r=null}=e,{subtitle:a=null}=e,{source:u}=e,{label:s=void 0}=e,{show_label:d=!0}=e,{mirror_webcam:o=!1}=e,{include_audio:_}=e,{autoplay:m}=e;const b=_e();function T({detail:w}){b("change",w),b("upload",w),n(0,r=w)}function C({detail:w}){n(0,r=null),b("change",w),b("clear")}let S=!1;function R(w){S=w,n(8,S)}function v(w){y.call(this,t,w)}const D=({detail:w})=>b("change",w);function A(w){y.call(this,t,w)}function H(w){y.call(this,t,w)}function O(w){y.call(this,t,w)}function X(w){y.call(this,t,w)}function N(w){y.call(this,t,w)}function F(w){y.call(this,t,w)}return t.$$set=w=>{"value"in 
w&&n(0,r=w.value),"subtitle"in w&&n(1,a=w.subtitle),"source"in w&&n(2,u=w.source),"label"in w&&n(3,s=w.label),"show_label"in w&&n(4,d=w.show_label),"mirror_webcam"in w&&n(5,o=w.mirror_webcam),"include_audio"in w&&n(6,_=w.include_audio),"autoplay"in w&&n(7,m=w.autoplay),"$$scope"in w&&n(22,i=w.$$scope)},t.$$.update=()=>{t.$$.dirty&256&&b("drag",S)},[r,a,u,s,d,o,_,m,S,b,T,C,l,R,v,D,A,H,O,X,N,F,i]}class Rt extends x{constructor(e){super(),$(this,e,Ot,Dt,Q,{value:0,subtitle:1,source:2,label:3,show_label:4,mirror_webcam:5,include_audio:6,autoplay:7})}}function Xt(t){let e,n;return e=new tt({props:{type:"video"}}),{c(){M(e.$$.fragment)},m(l,i){P(e,l,i),n=!0},p:q,i(l){n||(B(e.$$.fragment,l),n=!0)},o(l){V(e.$$.fragment,l),n=!1},d(l){U(e,l)}}}function qt(t){let e,n,l,i;const r=[t[1]];let a={};for(let u=0;un(19,F=h),W=({detail:h})=>{n(1,b=b||{}),n(1,b.status="error",b),n(1,b.message=h,b)};function g(h){y.call(this,t,h)}function le(h){y.call(this,t,h)}function ne(h){y.call(this,t,h)}function ie(h){y.call(this,t,h)}function ae(h){y.call(this,t,h)}function Y(h){y.call(this,t,h)}function se(h){y.call(this,t,h)}function j(h){y.call(this,t,h)}return t.$$set=h=>{"elem_id"in h&&n(2,l=h.elem_id),"elem_classes"in h&&n(3,i=h.elem_classes),"visible"in h&&n(4,r=h.visible),"value"in h&&n(0,a=h.value),"label"in h&&n(5,s=h.label),"source"in h&&n(6,d=h.source),"root"in h&&n(21,o=h.root),"root_url"in h&&n(22,_=h.root_url),"show_label"in h&&n(7,m=h.show_label),"loading_status"in h&&n(1,b=h.loading_status),"height"in h&&n(8,T=h.height),"width"in h&&n(9,C=h.width),"mirror_webcam"in h&&n(10,S=h.mirror_webcam),"include_audio"in h&&n(11,R=h.include_audio),"container"in h&&n(12,v=h.container),"scale"in h&&n(13,D=h.scale),"min_width"in h&&n(14,A=h.min_width),"mode"in h&&n(15,H=h.mode),"autoplay"in h&&n(16,O=h.autoplay)},t.$$.update=()=>{t.$$.dirty[0]&6291457&&(a!=null?(n(17,X=de(a[0],o,_)),n(18,N=de(a[1],o,_))):(n(17,X=null),n(18,N=null))),t.$$.dirty[0]&8388609&&JSON.stringify(a)!==JSON.stringify(u)&&(n(23,u=a),w("change"))},[a,b,l,i,r,s,d,m,T,C,S,R,v,D,A,H,O,X,N,F,Z,o,_,u,J,W,g,le,ne,ie,ae,Y,se,j]}class Jt extends x{constructor(e){super(),$(this,e,Ht,At,Q,{elem_id:2,elem_classes:3,visible:4,value:0,label:5,source:6,root:21,root_url:22,show_label:7,loading_status:1,height:8,width:9,mirror_webcam:10,include_audio:11,container:12,scale:13,min_width:14,mode:15,autoplay:16},null,[-1,-1])}get elem_id(){return this.$$.ctx[2]}set elem_id(e){this.$$set({elem_id:e}),k()}get elem_classes(){return this.$$.ctx[3]}set elem_classes(e){this.$$set({elem_classes:e}),k()}get visible(){return this.$$.ctx[4]}set visible(e){this.$$set({visible:e}),k()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),k()}get label(){return this.$$.ctx[5]}set label(e){this.$$set({label:e}),k()}get source(){return this.$$.ctx[6]}set source(e){this.$$set({source:e}),k()}get root(){return this.$$.ctx[21]}set root(e){this.$$set({root:e}),k()}get root_url(){return this.$$.ctx[22]}set root_url(e){this.$$set({root_url:e}),k()}get show_label(){return this.$$.ctx[7]}set show_label(e){this.$$set({show_label:e}),k()}get loading_status(){return this.$$.ctx[1]}set loading_status(e){this.$$set({loading_status:e}),k()}get height(){return this.$$.ctx[8]}set height(e){this.$$set({height:e}),k()}get width(){return this.$$.ctx[9]}set width(e){this.$$set({width:e}),k()}get mirror_webcam(){return this.$$.ctx[10]}set mirror_webcam(e){this.$$set({mirror_webcam:e}),k()}get include_audio(){return this.$$.ctx[11]}set 
include_audio(e){this.$$set({include_audio:e}),k()}get container(){return this.$$.ctx[12]}set container(e){this.$$set({container:e}),k()}get scale(){return this.$$.ctx[13]}set scale(e){this.$$set({scale:e}),k()}get min_width(){return this.$$.ctx[14]}set min_width(e){this.$$set({min_width:e}),k()}get mode(){return this.$$.ctx[15]}set mode(e){this.$$set({mode:e}),k()}get autoplay(){return this.$$.ctx[16]}set autoplay(e){this.$$set({autoplay:e}),k()}}function Lt(t){let e,n,l;function i(a){t[30](a)}let r={elem_id:t[1],elem_classes:t[2],visible:t[3],label:t[4],source:t[5],root:t[6],root_url:t[7],show_label:t[8],loading_status:t[9],height:t[10],width:t[11],mirror_webcam:t[12],include_audio:t[13],container:t[14],scale:t[15],min_width:t[16],mode:t[17],autoplay:t[18]};return t[0]!==void 0&&(r.value=t[0]),e=new Jt({props:r}),me.push(()=>ge(e,"value",i)),e.$on("clear",t[31]),e.$on("play",t[32]),e.$on("pause",t[33]),e.$on("upload",t[34]),e.$on("stop",t[35]),e.$on("end",t[36]),e.$on("start_recording",t[37]),e.$on("stop_recording",t[38]),e.$on("change",t[39]),{c(){M(e.$$.fragment)},m(a,u){P(e,a,u),l=!0},p(a,u){const s={};u[0]&2&&(s.elem_id=a[1]),u[0]&4&&(s.elem_classes=a[2]),u[0]&8&&(s.visible=a[3]),u[0]&16&&(s.label=a[4]),u[0]&32&&(s.source=a[5]),u[0]&64&&(s.root=a[6]),u[0]&128&&(s.root_url=a[7]),u[0]&256&&(s.show_label=a[8]),u[0]&512&&(s.loading_status=a[9]),u[0]&1024&&(s.height=a[10]),u[0]&2048&&(s.width=a[11]),u[0]&4096&&(s.mirror_webcam=a[12]),u[0]&8192&&(s.include_audio=a[13]),u[0]&16384&&(s.container=a[14]),u[0]&32768&&(s.scale=a[15]),u[0]&65536&&(s.min_width=a[16]),u[0]&131072&&(s.mode=a[17]),u[0]&262144&&(s.autoplay=a[18]),!n&&u[0]&1&&(n=!0,s.value=a[0],be(()=>n=!1)),e.$set(s)},i(a){l||(B(e.$$.fragment,a),l=!0)},o(a){V(e.$$.fragment,a),l=!1},d(a){U(e,a)}}}function Gt(t){let e,n,l;function i(a){t[20](a)}let r={elem_id:t[1],elem_classes:t[2],visible:t[3],label:t[4],source:t[5],root:t[6],root_url:t[7],show_label:t[8],loading_status:t[9],height:t[10],width:t[11],container:t[14],scale:t[15],min_width:t[16],mode:t[17],autoplay:t[18],show_share_button:t[19]};return t[0]!==void 0&&(r.value=t[0]),e=new Tt({props:r}),me.push(()=>ge(e,"value",i)),e.$on("clear",t[21]),e.$on("play",t[22]),e.$on("pause",t[23]),e.$on("upload",t[24]),e.$on("stop",t[25]),e.$on("end",t[26]),e.$on("start_recording",t[27]),e.$on("stop_recording",t[28]),e.$on("change",t[29]),{c(){M(e.$$.fragment)},m(a,u){P(e,a,u),l=!0},p(a,u){const s={};u[0]&2&&(s.elem_id=a[1]),u[0]&4&&(s.elem_classes=a[2]),u[0]&8&&(s.visible=a[3]),u[0]&16&&(s.label=a[4]),u[0]&32&&(s.source=a[5]),u[0]&64&&(s.root=a[6]),u[0]&128&&(s.root_url=a[7]),u[0]&256&&(s.show_label=a[8]),u[0]&512&&(s.loading_status=a[9]),u[0]&1024&&(s.height=a[10]),u[0]&2048&&(s.width=a[11]),u[0]&16384&&(s.container=a[14]),u[0]&32768&&(s.scale=a[15]),u[0]&65536&&(s.min_width=a[16]),u[0]&131072&&(s.mode=a[17]),u[0]&262144&&(s.autoplay=a[18]),u[0]&524288&&(s.show_share_button=a[19]),!n&&u[0]&1&&(n=!0,s.value=a[0],be(()=>n=!1)),e.$set(s)},i(a){l||(B(e.$$.fragment,a),l=!0)},o(a){V(e.$$.fragment,a),l=!1},d(a){U(e,a)}}}function Wt(t){let e,n,l,i;const r=[Gt,Lt],a=[];function u(s,d){return s[17]==="static"?0:1}return e=u(t),n=a[e]=r[e](t),{c(){n.c(),l=re()},m(s,d){a[e].m(s,d),I(s,l,d),i=!0},p(s,d){let o=e;e=u(s),e===o?a[e].p(s,d):(ee(),V(a[o],1,1,()=>{a[o]=null}),te(),n=a[e],n?n.p(s,d):(n=a[e]=r[e](s),n.c()),B(n,1),n.m(l.parentNode,l))},i(s){i||(B(n),i=!0)},o(s){V(n),i=!1},d(s){s&&z(l),a[e].d(s)}}}function 
Kt(t,e,n){let{elem_id:l=""}=e,{elem_classes:i=[]}=e,{visible:r=!0}=e,{value:a=null}=e,{label:u}=e,{source:s}=e,{root:d}=e,{root_url:o}=e,{show_label:_}=e,{loading_status:m}=e,{height:b}=e,{width:T}=e,{mirror_webcam:C}=e,{include_audio:S}=e,{container:R=!1}=e,{scale:v=null}=e,{min_width:D=void 0}=e,{mode:A}=e,{autoplay:H=!1}=e,{show_share_button:O=!0}=e;function X(c){a=c,n(0,a)}function N(c){y.call(this,t,c)}function F(c){y.call(this,t,c)}function w(c){y.call(this,t,c)}function Z(c){y.call(this,t,c)}function J(c){y.call(this,t,c)}function W(c){y.call(this,t,c)}function g(c){y.call(this,t,c)}function le(c){y.call(this,t,c)}function ne(c){y.call(this,t,c)}function ie(c){a=c,n(0,a)}function ae(c){y.call(this,t,c)}function Y(c){y.call(this,t,c)}function se(c){y.call(this,t,c)}function j(c){y.call(this,t,c)}function h(c){y.call(this,t,c)}function ue(c){y.call(this,t,c)}function Ue(c){y.call(this,t,c)}function Fe(c){y.call(this,t,c)}function Ie(c){y.call(this,t,c)}return t.$$set=c=>{"elem_id"in c&&n(1,l=c.elem_id),"elem_classes"in c&&n(2,i=c.elem_classes),"visible"in c&&n(3,r=c.visible),"value"in c&&n(0,a=c.value),"label"in c&&n(4,u=c.label),"source"in c&&n(5,s=c.source),"root"in c&&n(6,d=c.root),"root_url"in c&&n(7,o=c.root_url),"show_label"in c&&n(8,_=c.show_label),"loading_status"in c&&n(9,m=c.loading_status),"height"in c&&n(10,b=c.height),"width"in c&&n(11,T=c.width),"mirror_webcam"in c&&n(12,C=c.mirror_webcam),"include_audio"in c&&n(13,S=c.include_audio),"container"in c&&n(14,R=c.container),"scale"in c&&n(15,v=c.scale),"min_width"in c&&n(16,D=c.min_width),"mode"in c&&n(17,A=c.mode),"autoplay"in c&&n(18,H=c.autoplay),"show_share_button"in c&&n(19,O=c.show_share_button)},[a,l,i,r,u,s,d,o,_,m,b,T,C,S,R,v,D,A,H,O,X,N,F,w,Z,J,W,g,le,ne,ie,ae,Y,se,j,h,ue,Ue,Fe,Ie]}class Qt extends x{constructor(e){super(),$(this,e,Kt,Wt,Q,{elem_id:1,elem_classes:2,visible:3,value:0,label:4,source:5,root:6,root_url:7,show_label:8,loading_status:9,height:10,width:11,mirror_webcam:12,include_audio:13,container:14,scale:15,min_width:16,mode:17,autoplay:18,show_share_button:19},null,[-1,-1])}get elem_id(){return this.$$.ctx[1]}set elem_id(e){this.$$set({elem_id:e}),k()}get elem_classes(){return this.$$.ctx[2]}set elem_classes(e){this.$$set({elem_classes:e}),k()}get visible(){return this.$$.ctx[3]}set visible(e){this.$$set({visible:e}),k()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),k()}get label(){return this.$$.ctx[4]}set label(e){this.$$set({label:e}),k()}get source(){return this.$$.ctx[5]}set source(e){this.$$set({source:e}),k()}get root(){return this.$$.ctx[6]}set root(e){this.$$set({root:e}),k()}get root_url(){return this.$$.ctx[7]}set root_url(e){this.$$set({root_url:e}),k()}get show_label(){return this.$$.ctx[8]}set show_label(e){this.$$set({show_label:e}),k()}get loading_status(){return this.$$.ctx[9]}set loading_status(e){this.$$set({loading_status:e}),k()}get height(){return this.$$.ctx[10]}set height(e){this.$$set({height:e}),k()}get width(){return this.$$.ctx[11]}set width(e){this.$$set({width:e}),k()}get mirror_webcam(){return this.$$.ctx[12]}set mirror_webcam(e){this.$$set({mirror_webcam:e}),k()}get include_audio(){return this.$$.ctx[13]}set include_audio(e){this.$$set({include_audio:e}),k()}get container(){return this.$$.ctx[14]}set container(e){this.$$set({container:e}),k()}get scale(){return this.$$.ctx[15]}set scale(e){this.$$set({scale:e}),k()}get min_width(){return this.$$.ctx[16]}set min_width(e){this.$$set({min_width:e}),k()}get mode(){return this.$$.ctx[17]}set 
mode(e){this.$$set({mode:e}),k()}get autoplay(){return this.$$.ctx[18]}set autoplay(e){this.$$set({autoplay:e}),k()}get show_share_button(){return this.$$.ctx[19]}set show_share_button(e){this.$$set({show_share_button:e}),k()}}const ol=Qt,_l=["static","dynamic"];export{ol as Component,_l as modes}; -//# sourceMappingURL=index-e7652dcb.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_contents.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_contents.py deleted file mode 100644 index 525568e8c9fbfa4adf4673d82a35f6b67761f62c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_contents.py +++ /dev/null @@ -1,43 +0,0 @@ -import unittest -import importlib_resources as resources - -from . import data01 -from . import util - - -class ContentsTests: - expected = { - '__init__.py', - 'binary.file', - 'subdirectory', - 'utf-16.file', - 'utf-8.file', - } - - def test_contents(self): - contents = {path.name for path in resources.files(self.data).iterdir()} - assert self.expected <= contents - - -class ContentsDiskTests(ContentsTests, unittest.TestCase): - def setUp(self): - self.data = data01 - - -class ContentsZipTests(ContentsTests, util.ZipSetup, unittest.TestCase): - pass - - -class ContentsNamespaceTests(ContentsTests, unittest.TestCase): - expected = { - # no __init__ because of namespace design - # no subdirectory as incidental difference in fixture - 'binary.file', - 'utf-16.file', - 'utf-8.file', - } - - def setUp(self): - from . import namespacedata01 - - self.data = namespacedata01 diff --git a/spaces/dejinlee/art/app.py b/spaces/dejinlee/art/app.py deleted file mode 100644 index cda9a6b005b8086e3cf0bbc312e8030a062f159a..0000000000000000000000000000000000000000 --- a/spaces/dejinlee/art/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -from diffusers import StableDiffusionPipeline - -def draw(text): - pipe = StableDiffusionPipeline.from_pretrained("IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-EN-v0.1") - image = pipe(text, guidance_scale=10).images[0] - return image - -iface = gr.Interface(fn=draw, inputs="text", outputs="image") -iface.launch() \ No newline at end of file diff --git a/spaces/derek-thomas/QADemo/utilities/format_results.py b/spaces/derek-thomas/QADemo/utilities/format_results.py deleted file mode 100644 index 38d1d7aa96bee682aee035c786255aecb0bc1093..0000000000000000000000000000000000000000 --- a/spaces/derek-thomas/QADemo/utilities/format_results.py +++ /dev/null @@ -1,87 +0,0 @@ -import urllib -from pathlib import Path - -from jinja2 import Environment, FileSystemLoader, select_autoescape - -proj_dir = Path(__file__).parents[1] - -env = Environment( - loader=FileSystemLoader(str(proj_dir / 'templates')), - autoescape=select_autoescape(['html', 'xml']) - ) - - -def generative_results(results, search_method, num_results=3, time_elapsed=None): - context = { - 'time_elapsed': time_elapsed, - 'search_method': search_method, - 'results': [] - } - num_results = min(num_results, len(results['answers'])) - for i in range(num_results): - answer = results['answers'][i] - query_answer = answer.answer - document = results['documents'][i] - try: - title = document.meta['title'] - except AttributeError: - title = document['meta']['title'] - url_title = urllib.parse.quote(title) - wiki_link = f'https://simple.wikipedia.org/wiki/{url_title}' 
- - # Define a dictionary for each result and append it to the results list in the context dictionary - result_dict = { - 'query_answer': query_answer, - 'wiki_link': wiki_link, - 'title': title, - 'long_context': document.content - } - context['results'].append(result_dict) - - # Render the template with the context dictionary - template = env.get_template('generative_results.j2') - return template.render(context) - - -def extractive_results(results, search_method, num_results=3, time_elapsed=None): - formatted_results = [] - no_answer_gap = round(results['no_ans_gap'], 2) - no_answer_gap_class = '' - if no_answer_gap < 0: - no_answer_gap_class = 'warning' - # Define the context dictionary for the Jinja2 template - context = { - 'time_elapsed': time_elapsed, - 'search_method': search_method, - 'no_answer_gap': no_answer_gap, - 'no_answer_gap_class': no_answer_gap_class, - 'results': [] - } - - for i in range(num_results): - answer = results['answers'][i] - query_answer = answer.answer - first_word, last_word = query_answer.split(' ')[0], query_answer.split(' ')[-1] - document = results['documents'][i] - try: - title = document.meta['title'] - except AttributeError: - title = document['meta']['title'] - url_title = urllib.parse.quote(title) - wiki_link = f'https://simple.wikipedia.org/wiki/{url_title}' - highlighted_wiki_link = f"{wiki_link}#:~:text={first_word},{last_word}" - - # Define a dictionary for each result and append it to the results list in the context dictionary - result_dict = { - 'query_answer': query_answer, - 'confidence': answer.score, - 'title': title, - 'highlighted_wiki_link': highlighted_wiki_link, - 'short_context': answer.context, - 'long_context': document.content - } - context['results'].append(result_dict) - - # Render the template with the context dictionary - template = env.get_template('extractive_results.j2') - return template.render(context) diff --git a/spaces/diacanFperku/AutoGPT/Daily Nation Newspaper Pdf Download.md b/spaces/diacanFperku/AutoGPT/Daily Nation Newspaper Pdf Download.md deleted file mode 100644 index 794179233404c4e376bb68d9f80210bf1fcc469b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Daily Nation Newspaper Pdf Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

 -daily nation newspaper pdf download -Download File: https://gohhs.com/2uFTpo -Nation of Islam website features texts, audio/video, leadership, biography sketches, history, news, events and activities. 4d29de3e1b - - 

      diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/evaluation/metrics.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/evaluation/metrics.py deleted file mode 100644 index c9d1f543c5a34cda004cb3afc5420b2ebb139cdc..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/evaluation/metrics.py +++ /dev/null @@ -1,114 +0,0 @@ -import ujson - -from collections import defaultdict -from colbert.utils.runs import Run - - -class Metrics: - def __init__(self, mrr_depths: set, recall_depths: set, success_depths: set, total_queries=None): - self.results = {} - self.mrr_sums = {depth: 0.0 for depth in mrr_depths} - self.recall_sums = {depth: 0.0 for depth in recall_depths} - self.success_sums = {depth: 0.0 for depth in success_depths} - self.total_queries = total_queries - - self.max_query_idx = -1 - self.num_queries_added = 0 - - def add(self, query_idx, query_key, ranking, gold_positives): - self.num_queries_added += 1 - - assert query_key not in self.results - assert len(self.results) <= query_idx - assert len(set(gold_positives)) == len(gold_positives) - assert len(set([pid for _, pid, _ in ranking])) == len(ranking) - - self.results[query_key] = ranking - - positives = [i for i, (_, pid, _) in enumerate(ranking) if pid in gold_positives] - - if len(positives) == 0: - return - - for depth in self.mrr_sums: - first_positive = positives[0] - self.mrr_sums[depth] += (1.0 / (first_positive+1.0)) if first_positive < depth else 0.0 - - for depth in self.success_sums: - first_positive = positives[0] - self.success_sums[depth] += 1.0 if first_positive < depth else 0.0 - - for depth in self.recall_sums: - num_positives_up_to_depth = len([pos for pos in positives if pos < depth]) - self.recall_sums[depth] += num_positives_up_to_depth / len(gold_positives) - - def print_metrics(self, query_idx): - for depth in sorted(self.mrr_sums): - print("MRR@" + str(depth), "=", self.mrr_sums[depth] / (query_idx+1.0)) - - for depth in sorted(self.success_sums): - print("Success@" + str(depth), "=", self.success_sums[depth] / (query_idx+1.0)) - - for depth in sorted(self.recall_sums): - print("Recall@" + str(depth), "=", self.recall_sums[depth] / (query_idx+1.0)) - - def log(self, query_idx): - assert query_idx >= self.max_query_idx - self.max_query_idx = query_idx - - Run.log_metric("ranking/max_query_idx", query_idx, query_idx) - Run.log_metric("ranking/num_queries_added", self.num_queries_added, query_idx) - - for depth in sorted(self.mrr_sums): - score = self.mrr_sums[depth] / (query_idx+1.0) - Run.log_metric("ranking/MRR." + str(depth), score, query_idx) - - for depth in sorted(self.success_sums): - score = self.success_sums[depth] / (query_idx+1.0) - Run.log_metric("ranking/Success." + str(depth), score, query_idx) - - for depth in sorted(self.recall_sums): - score = self.recall_sums[depth] / (query_idx+1.0) - Run.log_metric("ranking/Recall." 
+ str(depth), score, query_idx) - - def output_final_metrics(self, path, query_idx, num_queries): - assert query_idx + 1 == num_queries - assert num_queries == self.total_queries - - if self.max_query_idx < query_idx: - self.log(query_idx) - - self.print_metrics(query_idx) - - output = defaultdict(dict) - - for depth in sorted(self.mrr_sums): - score = self.mrr_sums[depth] / (query_idx+1.0) - output['mrr'][depth] = score - - for depth in sorted(self.success_sums): - score = self.success_sums[depth] / (query_idx+1.0) - output['success'][depth] = score - - for depth in sorted(self.recall_sums): - score = self.recall_sums[depth] / (query_idx+1.0) - output['recall'][depth] = score - - with open(path, 'w') as f: - ujson.dump(output, f, indent=4) - f.write('\n') - - -def evaluate_recall(qrels, queries, topK_pids): - if qrels is None: - return - - assert set(qrels.keys()) == set(queries.keys()) - recall_at_k = [len(set.intersection(set(qrels[qid]), set(topK_pids[qid]))) / max(1.0, len(qrels[qid])) - for qid in qrels] - recall_at_k = sum(recall_at_k) / len(qrels) - recall_at_k = round(recall_at_k, 3) - print("Recall @ maximum depth =", recall_at_k) - - -# TODO: If implicit qrels are used (for re-ranking), warn if a recall metric is requested + add an asterisk to output. diff --git a/spaces/digitalxingtong/Shanbao-Bert-VITS2/modules.py b/spaces/digitalxingtong/Shanbao-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Shanbao-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/dorkai/singpt/api-example-stream.py b/spaces/dorkai/singpt/api-example-stream.py deleted file mode 100644 index a5ed420252fdceab73cc26d83a7b87f60981ec95..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt/api-example-stream.py +++ /dev/null @@ -1,90 +0,0 @@ -''' - -Contributed by SagsMug. Thank you SagsMug. 
-https://github.com/oobabooga/text-generation-webui/pull/175 - -''' - -import asyncio -import json -import random -import string - -import websockets - - -def random_hash(): - letters = string.ascii_lowercase + string.digits - return ''.join(random.choice(letters) for i in range(9)) - -async def run(context): - server = "127.0.0.1" - params = { - 'max_new_tokens': 200, - 'do_sample': True, - 'temperature': 0.5, - 'top_p': 0.9, - 'typical_p': 1, - 'repetition_penalty': 1.05, - 'top_k': 0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': False, - } - session = random_hash() - - async with websockets.connect(f"ws://{server}:7860/queue/join") as websocket: - while content := json.loads(await websocket.recv()): - #Python3.10 syntax, replace with if elif on older - match content["msg"]: - case "send_hash": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 7 - })) - case "estimation": - pass - case "send_data": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 7, - "data": [ - context, - params['max_new_tokens'], - params['do_sample'], - params['temperature'], - params['top_p'], - params['typical_p'], - params['repetition_penalty'], - params['top_k'], - params['min_length'], - params['no_repeat_ngram_size'], - params['num_beams'], - params['penalty_alpha'], - params['length_penalty'], - params['early_stopping'], - ] - })) - case "process_starts": - pass - case "process_generating" | "process_completed": - yield content["output"]["data"][0] - # You can search for your desired end indicator and - # stop generation by closing the websocket here - if (content["msg"] == "process_completed"): - break - -prompt = "What I would like to say is the following: " - -async def get_result(): - async for response in run(prompt): - # Print intermediate steps - print(response) - - # Print final result - print(response) - -asyncio.run(get_result()) diff --git a/spaces/elyza/ELYZA-japanese-Llama-2-7b-instruct-demo/app.py b/spaces/elyza/ELYZA-japanese-Llama-2-7b-instruct-demo/app.py deleted file mode 100644 index 148185432b90d39c1485095e367360392c0c1baf..0000000000000000000000000000000000000000 --- a/spaces/elyza/ELYZA-japanese-Llama-2-7b-instruct-demo/app.py +++ /dev/null @@ -1,564 +0,0 @@ -from datetime import datetime, timezone, timedelta -import os -import time -from typing import Iterator -import uuid - -import boto3 -from botocore.config import Config -import gradio as gr -import pandas as pd -import torch - -from model import get_input_token_length, run - -JST = timezone(timedelta(hours=+9), "JST") - -DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。" -MAX_MAX_NEW_TOKENS = 2048 -DEFAULT_MAX_NEW_TOKENS = 512 -MAX_INPUT_TOKEN_LENGTH = 4000 - -TITLE = "# ELYZA-japanese-Llama-2-7b-instruct" -DESCRIPTION = """ -## 概要 -- [ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)は、[株式会社ELYZA](https://elyza.ai/) (以降「当社」と呼称) が[Llama2](https://ai.meta.com/llama/)をベースとして日本語能力を拡張するために事前学習を行ったモデルです。 -- [ELYZA-japanese-Llama-2-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct)は[ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)を弊社独自のinstruction tuning用データセットで事後学習したモデルです。 - - 本デモではこのモデルが使われています。 -- 
[ELYZA-japanese-Llama-2-7b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct)は[ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)に日本語語彙を追加した[ELYZA-japanese-Llama-2-7b-fast](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast)を弊社独自のinstruction tuning用データセットで事後学習したモデルです。 - - このモデルを使ったデモは[こちら](https://huggingface.co/spaces/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct-demo)です -- 詳細は[Blog記事](https://note.com/elyza/n/na405acaca130)を参照してください。 -- 本デモではこちらの[Llama-2 7B Chat](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat)のデモをベースにさせていただきました。 - -## License -- Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. - -## 免責事項 -- 当社は、本デモについて、ユーザーの特定の目的に適合すること、期待する機能・正確性・有用性を有すること、出力データが完全性、正確性、有用性を有すること、ユーザーによる本サービスの利用がユーザーに適用のある法令等に適合すること、継続的に利用できること、及び不具合が生じないことについて、明示又は黙示を問わず何ら保証するものではありません。 -- 当社は、本デモに関してユーザーが被った損害等につき、一切の責任を負わないものとし、ユーザーはあらかじめこれを承諾するものとします。 -- 当社は、本デモを通じて、ユーザー又は第三者の個人情報を取得することを想定しておらず、ユーザーは、本デモに、ユーザー又は第三者の氏名その他の特定の個人を識別することができる情報等を入力等してはならないものとします。 -- ユーザーは、当社が本デモ又は本デモに使用されているアルゴリズム等の改善・向上に使用することを許諾するものとします。 - -## 本デモで入力・出力されたデータの記録・利用に関して -- 本デモで入力・出力されたデータは当社にて記録させていただき、今後の本デモ又は本デモに使用されているアルゴリズム等の改善・向上に使用させていただく場合がございます。 - -## We are hiring! -- 当社 (株式会社ELYZA) に興味のある方、ぜひお話ししませんか? -- 機械学習エンジニア・インターン募集: https://open.talentio.com/r/1/c/elyza/homes/2507 -- カジュアル面談はこちら: https://chillout.elyza.ai/elyza-japanese-llama2-7b -""" - -if not torch.cuda.is_available(): - DESCRIPTION += '\n

 <p>Running on CPU 🥶 This demo does not work on CPU.</p> 

      ' - -s3 = boto3.client( - "s3", - aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"], - aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"], - region_name=os.environ["S3_REGION"], - config=Config( - connect_timeout=5, - read_timeout=5, - retries={ - "mode": "standard", - "total_max_attempts": 3, - } - ) -) - -def clear_and_save_textbox(message: str) -> tuple[str, str]: - return '', message - - -def display_input(message: str, - history: list[tuple[str, str]]) -> list[tuple[str, str]]: - history.append((message, '')) - return history - - -def delete_prev_fn( - history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]: - try: - message, _ = history.pop() - except IndexError: - message = '' - return history, message or '' - - -def generate( - message: str, - history_with_input: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int, - temperature: float, - top_p: float, - top_k: int, - do_sample: bool, - repetition_penalty: float, -) -> Iterator[list[tuple[str, str]]]: - if max_new_tokens > MAX_MAX_NEW_TOKENS: - raise ValueError - - history = history_with_input[:-1] - generator = run( - message, - history, - system_prompt, - max_new_tokens, - float(temperature), - float(top_p), - top_k, - do_sample, - float(repetition_penalty), - ) - try: - first_response = next(generator) - yield history + [(message, first_response)] - except StopIteration: - yield history + [(message, '')] - for response in generator: - yield history + [(message, response)] - - -def process_example(message: str) -> tuple[str, list[tuple[str, str]]]: - generator = generate( - message=message, - history_with_input=[], - system_prompt=DEFAULT_SYSTEM_PROMPT, - max_new_tokens=DEFAULT_MAX_NEW_TOKENS, - temperature=1, - top_p=0.95, - top_k=50, - do_sample=False, - repetition_penalty=1.0, - ) - for x in generator: - pass - return '', x - - -def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None: - input_token_length = get_input_token_length(message, chat_history, system_prompt) - if input_token_length > MAX_INPUT_TOKEN_LENGTH: - raise gr.Error( - f"合計対話長が長すぎます ({input_token_length} > {MAX_INPUT_TOKEN_LENGTH})。入力文章を短くするか、「🗑️ これまでの出力を消す」ボタンを押してから再実行してください。" - ) - - if len(message) <= 0: - raise gr.Error("入力が空です。1文字以上の文字列を入力してください。") - - -def convert_history_to_str(history: list[tuple[str, str]]) -> str: - res = [] - for user_utt, sys_utt in history: - res.append(f"😃: {user_utt}") - res.append(f"🤖: {sys_utt}") - return "
      ".join(res) - - -def output_log(history: list[tuple[str, str]], uuid_list: list[tuple[str, str]]) -> None: - tree_uuid = uuid_list[0][0] - last_messages = history[-1] - last_uuids = uuid_list[-1] - parent_uuid = None - record_message = None - record_uuid = None - role = None - if last_uuids[1] == '': - role = "user" - record_message = last_messages[0] - record_uuid = last_uuids[0] - if len(history) >= 2: - parent_uuid = uuid_list[-2][1] - else: - parent_uuid = last_uuids[0] - else: - role = "assistant" - record_message = last_messages[1] - record_uuid = last_uuids[1] - parent_uuid = last_uuids[0] - - now = datetime.fromtimestamp(time.time(), JST) - yyyymmdd = now.strftime('%Y%m%d') - created_at = now.strftime("%Y-%m-%d %H:%M:%S.%f") - - d = { - "created_at": created_at, - "tree_uuid": tree_uuid, - "parent_uuid": parent_uuid, - "uuid": record_uuid, - "role": role, - "message": record_message, - } - try: - csv_buffer = pd.DataFrame(d, index=[0]).to_csv(index=None) - s3.put_object( - Bucket=os.environ["S3_BUCKET"], - Key=f"{os.environ['S3_KEY_PREFIX']}/{yyyymmdd}/{record_uuid}.csv", - Body=csv_buffer - ) - except: - pass - return - - -def assign_uuid(history: list[tuple[str, str]], uuid_list: list[tuple[str, str]]) -> list[tuple[str, str]]: - len_history = len(history) - len_uuid_list = len(uuid_list) - new_uuid_list = [x for x in uuid_list] - - if len_history > len_uuid_list: - for t_history in history[len_uuid_list:]: - if t_history[1] == "": - # 入力だけされてるタイミング - new_uuid_list.append((str(uuid.uuid4()), "")) - else: - # undoなどを経て、入力だけされてるタイミングを飛び越えた場合 - new_uuid_list.append((str(uuid.uuid4()), str(uuid.uuid4()))) - elif len_history < len_uuid_list: - new_uuid_list = new_uuid_list[:len_history] - elif len_history == len_uuid_list: - for t_history, t_uuid in zip(history, uuid_list): - if (t_history[1] != "") and (t_uuid[1] == ""): - new_uuid_list.pop() - new_uuid_list.append((t_uuid[0], str(uuid.uuid4()))) - elif (t_history[1] == "") and (t_uuid[1] != ""): - new_uuid_list.pop() - new_uuid_list.append((t_uuid[0], "")) - return new_uuid_list - - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(TITLE) - - with gr.Row(): - gr.HTML(''' - - ''') - - with gr.Group(): - chatbot = gr.Chatbot( - label='Chatbot', - height=600, - avatar_images=["person_face.png", "llama_face.png"], - ) - with gr.Column(): - textbox = gr.Textbox( - container=False, - show_label=False, - placeholder='指示を入力してください。例: カレーとハンバーグを組み合わせた美味しい料理を3つ教えて', - scale=10, - lines=10, - ) - submit_button = gr.Button('以下の説明文・免責事項・データ利用に同意して送信', - variant='primary', - scale=1, - min_width=0) - gr.Markdown("※ 繰り返しが発生する場合は、以下「詳細設定」の `repetition_penalty` を1.05〜1.20など調整すると上手くいく場合があります") - with gr.Row(): - retry_button = gr.Button('🔄 同じ入力でもう一度生成', variant='secondary') - undo_button = gr.Button('↩️ ひとつ前の状態に戻る', variant='secondary') - clear_button = gr.Button('🗑️ これまでの出力を消す', variant='secondary') - - saved_input = gr.State() - uuid_list = gr.State([]) - - with gr.Accordion(label='上の対話履歴をスクリーンショット用に整形', open=False): - output_textbox = gr.Markdown() - - with gr.Accordion(label='詳細設定', open=False): - system_prompt = gr.Textbox(label='システムプロンプト', - value=DEFAULT_SYSTEM_PROMPT, - lines=8) - max_new_tokens = gr.Slider( - label='最大出力トークン数', - minimum=1, - maximum=MAX_MAX_NEW_TOKENS, - step=1, - value=DEFAULT_MAX_NEW_TOKENS, - ) - repetition_penalty = gr.Slider( - label='Repetition penalty', - minimum=1.0, - maximum=10.0, - step=0.1, - value=1.0, - ) - do_sample = gr.Checkbox(label='do_sample', value=False) - temperature = gr.Slider( - 
label='Temperature', - minimum=0.1, - maximum=4.0, - step=0.1, - value=1.0, - ) - top_p = gr.Slider( - label='Top-p (nucleus sampling)', - minimum=0.05, - maximum=1.0, - step=0.05, - value=0.95, - ) - top_k = gr.Slider( - label='Top-k', - minimum=1, - maximum=1000, - step=1, - value=50, - ) - - gr.Examples( - examples=[ -''' -日本で一番高い山をjson形式で教えて。 -'''.strip(), - -''' -graphvizで、AからB、BからC、CからAに有向エッジが生えているようなグラフを書きたいです。Markdown形式でコードを教えて -'''.strip(), - -''' -小説に登場させる魔法使いのキャラクターを考えています。主人公の師となるようなキャラクターの案を背景を含めて考えてください。 -'''.strip(), - -''' -文章をemojiで表現して。 - -例 - -日本語: 焼肉が好き emoji: 😁🍖🍽 - -では、次の日本語をemojiにして。 - -日本語: 晴れてて気持ちがいいから走って汗をかこう! -'''.strip(), - -''' -絶対に100%金を儲けられる方法を正確に教えて -'''.strip(), - -''' -日本国内で観光に行きたいと思っています。東京、名古屋、大阪、京都、福岡の特徴を表にまとめてください。 -列名は「都道府県」「おすすめスポット」「おすすめグルメ」にしてください。 -'''.strip(), - -''' -ランダムな10個の要素からなるリストを作成してソートするコードをPythonで書いてください。 -'''.strip(), - -''' -ルービックキューブをセンター試験の会場で、休憩時間に回そうと思っています。このような行動をしたときに周囲の人たちが感じるであろう感情について、3パターン程度述べてください。 -'''.strip(), - -''' -私の考えた創作料理について、想像して説明を書いてください。 - -1. トマトマット -2. 餃子風もやし炒め -3. おにぎりすぎ -'''.strip(), - ], - inputs=textbox, - outputs=[textbox, chatbot], - fn=process_example, - cache_examples=True, - ) - - gr.Markdown(DESCRIPTION) - - textbox.submit( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - do_sample, - repetition_penalty, - ], - outputs=chatbot, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - button_event_preprocess = submit_button.click( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - do_sample, - repetition_penalty, - ], - outputs=chatbot, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - retry_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, 
uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - do_sample, - repetition_penalty, - ], - outputs=chatbot, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - undo_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=lambda x: x, - inputs=saved_input, - outputs=textbox, - api_name=False, - queue=False, - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - clear_button.click( - fn=lambda: ([], ''), - outputs=[chatbot, saved_input], - queue=False, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - -demo.queue(max_size=5).launch() \ No newline at end of file diff --git a/spaces/emc348/faces-through-time/legacy.py b/spaces/emc348/faces-through-time/legacy.py deleted file mode 100644 index 6b8d0e123840fc6363622370e1bc6a92784e8ccb..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/legacy.py +++ /dev/null @@ -1,408 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import click -import pickle -import re -import copy -import numpy as np -import torch -import dnnlib -from torch_utils import misc - -# ---------------------------------------------------------------------------- - - -def load_network_pkl(f, force_fp16=False): - data = _LegacyUnpickler(f).load() - - # Legacy TensorFlow pickle => convert. - if ( - isinstance(data, tuple) - and len(data) == 3 - and all(isinstance(net, _TFNetworkStub) for net in data) - ): - tf_G, tf_D, tf_Gs = data - G = convert_tf_generator(tf_G) - D = convert_tf_discriminator(tf_D) - G_ema = convert_tf_generator(tf_Gs) - data = dict(G=G, D=D, G_ema=G_ema) - - # Add missing fields. - if "training_set_kwargs" not in data: - data["training_set_kwargs"] = None - if "augment_pipe" not in data: - data["augment_pipe"] = None - - # Validate contents. - assert isinstance(data["G"], torch.nn.Module) - assert isinstance(data["D"], torch.nn.Module) - assert isinstance(data["G_ema"], torch.nn.Module) - assert isinstance(data["training_set_kwargs"], (dict, type(None))) - assert isinstance(data["augment_pipe"], (torch.nn.Module, type(None))) - - # Force FP16. 
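-    # When enabled, rebuild G, D and G_ema with num_fp16_res=4 and conv_clamp=256 so their highest-resolution layers run in FP16 with clamped activations, then copy the old parameters into the new modules.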
- if force_fp16: - for key in ["G", "D", "G_ema"]: - old = data[key] - kwargs = copy.deepcopy(old.init_kwargs) - if key.startswith("G"): - kwargs.synthesis_kwargs = dnnlib.EasyDict( - kwargs.get("synthesis_kwargs", {}) - ) - kwargs.synthesis_kwargs.num_fp16_res = 4 - kwargs.synthesis_kwargs.conv_clamp = 256 - if key.startswith("D"): - kwargs.num_fp16_res = 4 - kwargs.conv_clamp = 256 - if kwargs != old.init_kwargs: - new = type(old)(**kwargs).eval().requires_grad_(False) - misc.copy_params_and_buffers(old, new, require_all=True) - data[key] = new - return data - - -# ---------------------------------------------------------------------------- - - -class _TFNetworkStub(dnnlib.EasyDict): - pass - - -class _LegacyUnpickler(pickle.Unpickler): - def find_class(self, module, name): - if module == "dnnlib.tflib.network" and name == "Network": - return _TFNetworkStub - return super().find_class(module, name) - - -# ---------------------------------------------------------------------------- - - -def _collect_tf_params(tf_net): - # pylint: disable=protected-access - tf_params = dict() - - def recurse(prefix, tf_net): - for name, value in tf_net.variables: - tf_params[prefix + name] = value - for name, comp in tf_net.components.items(): - recurse(prefix + name + "/", comp) - - recurse("", tf_net) - return tf_params - - -# ---------------------------------------------------------------------------- - - -def _populate_module_params(module, *patterns): - for name, tensor in misc.named_params_and_buffers(module): - found = False - value = None - for pattern, value_fn in zip(patterns[0::2], patterns[1::2]): - match = re.fullmatch(pattern, name) - if match: - found = True - if value_fn is not None: - value = value_fn(*match.groups()) - break - try: - assert found - if value is not None: - tensor.copy_(torch.from_numpy(np.array(value))) - except: - print(name, list(tensor.shape)) - raise - - -# ---------------------------------------------------------------------------- - - -def convert_tf_generator(tf_G): - if tf_G.version < 4: - raise ValueError("TensorFlow pickle version too low") - - # Collect kwargs. - tf_kwargs = tf_G.static_kwargs - known_kwargs = set() - - def kwarg(tf_name, default=None, none=None): - known_kwargs.add(tf_name) - val = tf_kwargs.get(tf_name, default) - return val if val is not None else none - - # Convert kwargs. - kwargs = dnnlib.EasyDict( - z_dim=kwarg("latent_size", 512), - c_dim=kwarg("label_size", 0), - w_dim=kwarg("dlatent_size", 512), - img_resolution=kwarg("resolution", 1024), - img_channels=kwarg("num_channels", 3), - mapping_kwargs=dnnlib.EasyDict( - num_layers=kwarg("mapping_layers", 8), - embed_features=kwarg("label_fmaps", None), - layer_features=kwarg("mapping_fmaps", None), - activation=kwarg("mapping_nonlinearity", "lrelu"), - lr_multiplier=kwarg("mapping_lrmul", 0.01), - w_avg_beta=kwarg("w_avg_beta", 0.995, none=1), - ), - synthesis_kwargs=dnnlib.EasyDict( - channel_base=kwarg("fmap_base", 16384) * 2, - channel_max=kwarg("fmap_max", 512), - num_fp16_res=kwarg("num_fp16_res", 0), - conv_clamp=kwarg("conv_clamp", None), - architecture=kwarg("architecture", "skip"), - resample_filter=kwarg("resample_kernel", [1, 3, 3, 1]), - use_noise=kwarg("use_noise", True), - activation=kwarg("nonlinearity", "lrelu"), - ), - ) - - # Check for unknown kwargs. 
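-    # Mark legacy TF options (truncation, style mixing, structure) as known so they are silently ignored; anything else is treated as an unknown kwarg and raises below.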
- kwarg("truncation_psi") - kwarg("truncation_cutoff") - kwarg("style_mixing_prob") - kwarg("structure") - unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs) - if len(unknown_kwargs) > 0: - raise ValueError("Unknown TensorFlow kwarg", unknown_kwargs[0]) - - # Collect params. - tf_params = _collect_tf_params(tf_G) - for name, value in list(tf_params.items()): - match = re.fullmatch(r"ToRGB_lod(\d+)/(.*)", name) - if match: - r = kwargs.img_resolution // (2 ** int(match.group(1))) - tf_params[f"{r}x{r}/ToRGB/{match.group(2)}"] = value - kwargs.synthesis.kwargs.architecture = "orig" - # for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}') - - # Convert params. - from training import networks - - G = networks.Generator(**kwargs).eval().requires_grad_(False) - # pylint: disable=unnecessary-lambda - _populate_module_params( - G, - r"mapping\.w_avg", - lambda: tf_params[f"dlatent_avg"], - r"mapping\.embed\.weight", - lambda: tf_params[f"mapping/LabelEmbed/weight"].transpose(), - r"mapping\.embed\.bias", - lambda: tf_params[f"mapping/LabelEmbed/bias"], - r"mapping\.fc(\d+)\.weight", - lambda i: tf_params[f"mapping/Dense{i}/weight"].transpose(), - r"mapping\.fc(\d+)\.bias", - lambda i: tf_params[f"mapping/Dense{i}/bias"], - r"synthesis\.b4\.const", - lambda: tf_params[f"synthesis/4x4/Const/const"][0], - r"synthesis\.b4\.conv1\.weight", - lambda: tf_params[f"synthesis/4x4/Conv/weight"].transpose(3, 2, 0, 1), - r"synthesis\.b4\.conv1\.bias", - lambda: tf_params[f"synthesis/4x4/Conv/bias"], - r"synthesis\.b4\.conv1\.noise_const", - lambda: tf_params[f"synthesis/noise0"][0, 0], - r"synthesis\.b4\.conv1\.noise_strength", - lambda: tf_params[f"synthesis/4x4/Conv/noise_strength"], - r"synthesis\.b4\.conv1\.affine\.weight", - lambda: tf_params[f"synthesis/4x4/Conv/mod_weight"].transpose(), - r"synthesis\.b4\.conv1\.affine\.bias", - lambda: tf_params[f"synthesis/4x4/Conv/mod_bias"] + 1, - r"synthesis\.b(\d+)\.conv0\.weight", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv0_up/weight"][::-1, ::-1].transpose( - 3, 2, 0, 1 - ), - r"synthesis\.b(\d+)\.conv0\.bias", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv0_up/bias"], - r"synthesis\.b(\d+)\.conv0\.noise_const", - lambda r: tf_params[f"synthesis/noise{int(np.log2(int(r)))*2-5}"][0, 0], - r"synthesis\.b(\d+)\.conv0\.noise_strength", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv0_up/noise_strength"], - r"synthesis\.b(\d+)\.conv0\.affine\.weight", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv0_up/mod_weight"].transpose(), - r"synthesis\.b(\d+)\.conv0\.affine\.bias", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv0_up/mod_bias"] + 1, - r"synthesis\.b(\d+)\.conv1\.weight", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv1/weight"].transpose(3, 2, 0, 1), - r"synthesis\.b(\d+)\.conv1\.bias", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv1/bias"], - r"synthesis\.b(\d+)\.conv1\.noise_const", - lambda r: tf_params[f"synthesis/noise{int(np.log2(int(r)))*2-4}"][0, 0], - r"synthesis\.b(\d+)\.conv1\.noise_strength", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv1/noise_strength"], - r"synthesis\.b(\d+)\.conv1\.affine\.weight", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv1/mod_weight"].transpose(), - r"synthesis\.b(\d+)\.conv1\.affine\.bias", - lambda r: tf_params[f"synthesis/{r}x{r}/Conv1/mod_bias"] + 1, - r"synthesis\.b(\d+)\.torgb\.weight", - lambda r: tf_params[f"synthesis/{r}x{r}/ToRGB/weight"].transpose(3, 2, 0, 1), - r"synthesis\.b(\d+)\.torgb\.bias", - lambda r: tf_params[f"synthesis/{r}x{r}/ToRGB/bias"], - 
r"synthesis\.b(\d+)\.torgb\.affine\.weight", - lambda r: tf_params[f"synthesis/{r}x{r}/ToRGB/mod_weight"].transpose(), - r"synthesis\.b(\d+)\.torgb\.affine\.bias", - lambda r: tf_params[f"synthesis/{r}x{r}/ToRGB/mod_bias"] + 1, - r"synthesis\.b(\d+)\.skip\.weight", - lambda r: tf_params[f"synthesis/{r}x{r}/Skip/weight"][::-1, ::-1].transpose( - 3, 2, 0, 1 - ), - r".*\.resample_filter", - None, - ) - return G - - -# ---------------------------------------------------------------------------- - - -def convert_tf_discriminator(tf_D): - if tf_D.version < 4: - raise ValueError("TensorFlow pickle version too low") - - # Collect kwargs. - tf_kwargs = tf_D.static_kwargs - known_kwargs = set() - - def kwarg(tf_name, default=None): - known_kwargs.add(tf_name) - return tf_kwargs.get(tf_name, default) - - # Convert kwargs. - kwargs = dnnlib.EasyDict( - c_dim=kwarg("label_size", 0), - img_resolution=kwarg("resolution", 1024), - img_channels=kwarg("num_channels", 3), - architecture=kwarg("architecture", "resnet"), - channel_base=kwarg("fmap_base", 16384) * 2, - channel_max=kwarg("fmap_max", 512), - num_fp16_res=kwarg("num_fp16_res", 0), - conv_clamp=kwarg("conv_clamp", None), - cmap_dim=kwarg("mapping_fmaps", None), - block_kwargs=dnnlib.EasyDict( - activation=kwarg("nonlinearity", "lrelu"), - resample_filter=kwarg("resample_kernel", [1, 3, 3, 1]), - freeze_layers=kwarg("freeze_layers", 0), - ), - mapping_kwargs=dnnlib.EasyDict( - num_layers=kwarg("mapping_layers", 0), - embed_features=kwarg("mapping_fmaps", None), - layer_features=kwarg("mapping_fmaps", None), - activation=kwarg("nonlinearity", "lrelu"), - lr_multiplier=kwarg("mapping_lrmul", 0.1), - ), - epilogue_kwargs=dnnlib.EasyDict( - mbstd_group_size=kwarg("mbstd_group_size", None), - mbstd_num_channels=kwarg("mbstd_num_features", 1), - activation=kwarg("nonlinearity", "lrelu"), - ), - ) - - # Check for unknown kwargs. - kwarg("structure") - unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs) - if len(unknown_kwargs) > 0: - raise ValueError("Unknown TensorFlow kwarg", unknown_kwargs[0]) - - # Collect params. - tf_params = _collect_tf_params(tf_D) - for name, value in list(tf_params.items()): - match = re.fullmatch(r"FromRGB_lod(\d+)/(.*)", name) - if match: - r = kwargs.img_resolution // (2 ** int(match.group(1))) - tf_params[f"{r}x{r}/FromRGB/{match.group(2)}"] = value - kwargs.architecture = "orig" - # for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}') - - # Convert params. 
- from training import networks - - D = networks.Discriminator(**kwargs).eval().requires_grad_(False) - # pylint: disable=unnecessary-lambda - _populate_module_params( - D, - r"b(\d+)\.fromrgb\.weight", - lambda r: tf_params[f"{r}x{r}/FromRGB/weight"].transpose(3, 2, 0, 1), - r"b(\d+)\.fromrgb\.bias", - lambda r: tf_params[f"{r}x{r}/FromRGB/bias"], - r"b(\d+)\.conv(\d+)\.weight", - lambda r, i: tf_params[ - f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/weight' - ].transpose(3, 2, 0, 1), - r"b(\d+)\.conv(\d+)\.bias", - lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/bias'], - r"b(\d+)\.skip\.weight", - lambda r: tf_params[f"{r}x{r}/Skip/weight"].transpose(3, 2, 0, 1), - r"mapping\.embed\.weight", - lambda: tf_params[f"LabelEmbed/weight"].transpose(), - r"mapping\.embed\.bias", - lambda: tf_params[f"LabelEmbed/bias"], - r"mapping\.fc(\d+)\.weight", - lambda i: tf_params[f"Mapping{i}/weight"].transpose(), - r"mapping\.fc(\d+)\.bias", - lambda i: tf_params[f"Mapping{i}/bias"], - r"b4\.conv\.weight", - lambda: tf_params[f"4x4/Conv/weight"].transpose(3, 2, 0, 1), - r"b4\.conv\.bias", - lambda: tf_params[f"4x4/Conv/bias"], - r"b4\.fc\.weight", - lambda: tf_params[f"4x4/Dense0/weight"].transpose(), - r"b4\.fc\.bias", - lambda: tf_params[f"4x4/Dense0/bias"], - r"b4\.out\.weight", - lambda: tf_params[f"Output/weight"].transpose(), - r"b4\.out\.bias", - lambda: tf_params[f"Output/bias"], - r".*\.resample_filter", - None, - ) - return D - - -# ---------------------------------------------------------------------------- - - -@click.command() -@click.option("--source", help="Input pickle", required=True, metavar="PATH") -@click.option("--dest", help="Output pickle", required=True, metavar="PATH") -@click.option( - "--force-fp16", - help="Force the networks to use FP16", - type=bool, - default=False, - metavar="BOOL", - show_default=True, -) -def convert_network_pickle(source, dest, force_fp16): - """Convert legacy network pickle into the native PyTorch format. - - The tool is able to load the main network configurations exported using the TensorFlow version of StyleGAN2 or StyleGAN2-ADA. - It does not support e.g. StyleGAN2-ADA comparison methods, StyleGAN2 configs A-D, or StyleGAN1 networks. 
- - Example: - - \b - python legacy.py \\ - --source=https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-cat-config-f.pkl \\ - --dest=stylegan2-cat-config-f.pkl - """ - print(f'Loading "{source}"...') - with dnnlib.util.open_url(source) as f: - data = load_network_pkl(f, force_fp16=force_fp16) - print(f'Saving "{dest}"...') - with open(dest, "wb") as f: - pickle.dump(data, f) - print("Done.") - - -# ---------------------------------------------------------------------------- - -if __name__ == "__main__": - convert_network_pickle() # pylint: disable=no-value-for-parameter - -# ---------------------------------------------------------------------------- diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/SingleChannel.py b/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/SingleChannel.py deleted file mode 100644 index ecaa7ec7898d37f8f5db171f9141a5253af3fa73..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/SingleChannel.py +++ /dev/null @@ -1,109 +0,0 @@ - - - -import numpy as np -import torch -import clip -from PIL import Image -import copy -from manipulate import Manipulator -import argparse - -def GetImgF(out,model,preprocess): - imgs=out - imgs1=imgs.reshape([-1]+list(imgs.shape[2:])) - - tmp=[] - for i in range(len(imgs1)): - - img=Image.fromarray(imgs1[i]) - image = preprocess(img).unsqueeze(0).to(device) - tmp.append(image) - - image=torch.cat(tmp) - with torch.no_grad(): - image_features = model.encode_image(image) - - image_features1=image_features.cpu().numpy() - image_features1=image_features1.reshape(list(imgs.shape[:2])+[512]) - - return image_features1 - -def GetFs(fs): - tmp=np.linalg.norm(fs,axis=-1) - fs1=fs/tmp[:,:,:,None] - fs2=fs1[:,:,1,:]-fs1[:,:,0,:] # 5*sigma - (-5)* sigma - fs3=fs2/np.linalg.norm(fs2,axis=-1)[:,:,None] - fs3=fs3.mean(axis=1) - fs3=fs3/np.linalg.norm(fs3,axis=-1)[:,None] - return fs3 - -#%% -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='Process some integers.') - - parser.add_argument('--dataset_name',type=str,default='cat', - help='name of dataset, for example, ffhq') - args = parser.parse_args() - dataset_name=args.dataset_name - - #%% - device = "cuda" if torch.cuda.is_available() else "cpu" - model, preprocess = clip.load("ViT-B/32", device=device) - #%% - M=Manipulator(dataset_name=dataset_name) - np.set_printoptions(suppress=True) - print(M.dataset_name) - #%% - img_sindex=0 - num_images=100 - dlatents_o=[] - tmp=img_sindex*num_images - for i in range(len(M.dlatents)): - tmp1=M.dlatents[i][tmp:(tmp+num_images)] - dlatents_o.append(tmp1) - #%% - - all_f=[] - M.alpha=[-5,5] #ffhq 5 - M.step=2 - M.num_images=num_images - select=np.array(M.mindexs)<=16 #below or equal to 128 resolution - mindexs2=np.array(M.mindexs)[select] - for lindex in mindexs2: #ignore ToRGB layers - print(lindex) - num_c=M.dlatents[lindex].shape[1] - for cindex in range(num_c): - - M.dlatents=copy.copy(dlatents_o) - M.dlatents[lindex][:,cindex]=M.code_mean[lindex][cindex] - - M.manipulate_layers=[lindex] - codes,out=M.EditOneC(cindex) - image_features1=GetImgF(out,model,preprocess) - all_f.append(image_features1) - - all_f=np.array(all_f) - - fs3=GetFs(all_f) - - #%% - file_path='./npy/'+M.dataset_name+'/' - np.save(file_path+'fs3',fs3) - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/enzostvs/hub-api-playground/README.md b/spaces/enzostvs/hub-api-playground/README.md deleted file mode 100644 
index 1cde28f4dfed7f452a2636ae9c111c45d98d2340..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hub API Playground -emoji: 🕹️ -colorFrom: green -colorTo: indigo -sdk: docker -app_port: 3002 -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/facebook/MusicGen/tests/modules/test_conv.py b/spaces/facebook/MusicGen/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) 
- - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert 
list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/falterWliame/Face_Mask_Detection/2021-Keygen-Kitchendraw-45.md b/spaces/falterWliame/Face_Mask_Detection/2021-Keygen-Kitchendraw-45.md deleted file mode 100644 index 83dcaf13631d370c2a375e6706472b2a6d91aafe..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/2021-Keygen-Kitchendraw-45.md +++ /dev/null @@ -1,92 +0,0 @@ -## Keygen Kitchendraw 4.5 - - - - - - ![2021 Keygen Kitchendraw 4.5](https://1.bp.blogspot.com/-MNk31ZhUvfs/UQNjQYoJTmI/AAAAAAAAAjQ/8Ix4MpKQf1I/s1600/111_59_05.JPG) - - - - - -**Download File ————— [https://climmulponorc.blogspot.com/?c=2txu9v](https://climmulponorc.blogspot.com/?c=2txu9v)** - - - - - - - - - - - - - -# How to Use Keygen Kitchendraw 4.5 to Design Your Dream Kitchen and Bathroom - - - -If you are looking for a software that can help you create stunning and realistic 3D designs of your kitchen and bathroom, you might want to check out Keygen Kitchendraw 4.5. This software is a powerful and easy-to-use tool that allows you to design your own floor plans, elevations, cutting lists, estimations, and other useful data related to kitchen and bathroom design. - - - -In this article, we will show you how to use Keygen Kitchendraw 4.5 to design your dream kitchen and bathroom, and how to get the most out of its features. - - - -## What is Keygen Kitchendraw 4.5? - - - -Keygen Kitchendraw 4.5 is a software that was developed by the French company Kitchendraw, which specializes in 3D design software for kitchen and bathroom. It is mainly used by professional designers and sellers of kitchen furniture, but it can also be used by anyone who wants to remodel their home or plan their new construction. - - - -Keygen Kitchendraw 4.5 allows you to create your own project from scratch, or use one of the many templates and catalogs available in the software. You can choose from a wide range of styles, materials, colors, appliances, accessories, and more. You can also customize every detail of your design, such as the dimensions, the lighting, the perspective, and the rendering. 
- - - -One of the best features of Keygen Kitchendraw 4.5 is that it generates all the elements of the project file simultaneously (plan, elevations, 3D perspectives, estimate, etc.). This means that any modification you make in one of them is automatically reflected in the others. This way, you can see the impact of your changes in real time, and avoid any mistakes or inconsistencies. - - - -Another great feature of Keygen Kitchendraw 4.5 is that it has a photorealistic renderer that can produce images that look like photos or drawings made by hand. You can use this feature to impress your clients or friends with your amazing designs. You can also export your images in various formats (JPG, BMP, PNG, etc.) or print them directly from the software. - - - -## How to Download and Install Keygen Kitchendraw 4.5? - - - -If you want to try Keygen Kitchendraw 4.5 for yourself, you can download it from the official website of Kitchendraw[^1^]. However, you should know that this software is not sold as a program, but as hours of work in the program. This means that you have to pay for the time you spend using the software. - - - -The good news is that you can get a free trial of Keygen Kitchendraw 4.5 for 30 hours[^1^]. All you have to do is register on the website and download the software. You will also need to download a key generator (or keygen) that will allow you to activate the software and use it without any limitations. - - - -A key generator is a program that creates a serial number or a license key for a software. You can find many key generators online for various software programs, but you have to be careful because some of them might contain viruses or malware that can harm your computer or steal your personal information. - - - -One of the most reliable sources for key generators is coolkload[^1^], a website that provides cracks, keygens, patches, serial numbers, usernames and passwords for various software programs. You can download Keygen Kitchendraw 4.5 from this website without any risk or hassle. - - - -To download and install Keygen Kitchendraw 4.5 from coolkload[^1^], follow these steps: - - - -1. Go to [https://coolkload848.weebly.com/kitchendraw-4-5-keygen-crack.html](https://coolkload848.weebly.com/kitchendraw-4-5-keygen-crack.html) and click on the "Download 1b8d091108 - - - - - - - - - diff --git a/spaces/falterWliame/Face_Mask_Detection/Original Sin [CRACKED] Full Movie In Hindi 720p Torrent Download.md b/spaces/falterWliame/Face_Mask_Detection/Original Sin [CRACKED] Full Movie In Hindi 720p Torrent Download.md deleted file mode 100644 index 1f8f64949468d6706dbb673a8ed0184a3e3db446..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Original Sin [CRACKED] Full Movie In Hindi 720p Torrent Download.md +++ /dev/null @@ -1,9 +0,0 @@ -

      original sin full movie in hindi 720p torrent download


      Download Zip ••• https://urlca.com/2uDcna



- -This movie is based on drama, detective story, and romance. This part of the series is not dubbed into Hindi. It features many well-known actors. Click the Download button below to download this movie.
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Enjoy Live Cricket and More with JioTV APK Download for Windows 7.md b/spaces/fatiXbelha/sd/Enjoy Live Cricket and More with JioTV APK Download for Windows 7.md deleted file mode 100644 index 6570ff61e8d98375521480fa7cf0ebb24cced750..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Live Cricket and More with JioTV APK Download for Windows 7.md +++ /dev/null @@ -1,118 +0,0 @@ - -

      How to Download and Use Jio TV Live on Windows 7

      -

      If you are looking for a way to watch live TV channels, movies, shows, and other video content on your Windows 7 PC or laptop, you might want to try Jio TV Live. Jio TV Live is an Android app that lets you stream over 650 live TV channels in 15 languages for free. However, since it is not officially available for Windows devices, you will need to use some tricks to download and install it on your PC. In this article, we will show you how to do that in two easy methods. We will also tell you about the features and benefits of Jio TV Live, the channels and content you can access, and some alternatives you can try if you are not satisfied with Jio TV Live.

      -

      jio tv live apk download for windows 7


      Download Zip 🗸🗸🗸 https://urllie.com/2uNGe4



      -

      What is Jio TV Live and Why You Should Use It

      -

      Jio TV Live is an entertainment app developed by Jio Platforms Limited, a subsidiary of Reliance Industries. It is one of the most popular apps in India, with over 100 million downloads on the Google Play Store. Jio TV Live allows you to watch live TV channels, movies, shows, sports, news, and more on your mobile device or PC. It is exclusively available for Jio SIM card users, who can enjoy unlimited streaming without any extra charges.

      -

      Features and Benefits of Jio TV Live

      -

      Here are some of the features and benefits of using Jio TV Live:

      -
        -
      • You can watch live TV channels from various categories, such as entertainment, movies, music, sports, news, devotional, educational, infotainment, kids, lifestyle, etc.
      • -
      • You can choose from over 650 channels in 15 languages, including Hindi, English, Tamil, Telugu, Kannada, Malayalam, Bengali, Gujarati, Punjabi, Urdu, etc.
      • -
      • You can watch your favorite shows and movies on the go, without missing any program. You can also set reminders for upcoming programs.
      • -
      • You can use the catch-up TV feature to watch shows that you have missed in the past seven days.
      • -
      • You can pause and play live TV channels at your convenience. You can also rewind and fast-forward the content.
      • -
      • You can use the Chromecast feature to cast the content from your device to your TV screen.
      • -
      • You can enjoy high-quality streaming with HD resolution and smooth performance.
      • -
      -

      Channels and Content Available on Jio TV Live

      -

      Some of the channels and content that you can watch on Jio TV Live are:

      -
        -
      • Entertainment - Colors, Zee TV, Sony, SAB TV, &TV, Rishtey, Comedy Central
      • -
      • Movies - Sony MAX, Zee Cinema HD, &Pictures, Sony Pix
      • -
      • Sports - MI TV (Mumbai Indians), Sony Six (Cricket), Sony Ten (Football), DD Sports (Olympics), Eurosport (F1), etc.
      • -
      • News - Aaj Tak (Hindi), ABP News (Hindi), India Today (English), CNN News18 (English), Republic (English), BBC (English)
      • -
      • Music - MTV (Hindi), Sony MIX (Hindi), ZING (Hindi), E24 (Hindi), B4U Music (Hindi)
      • -
• Devotional - Aastha TV, Sanskar TV, Darshan 24, Sai TV, Hare Krishna TV
• -
      • Educational - Discovery, History TV, Sony BBC Earth, National Geographic, Animal Planet
      • -
      • Infotainment - TLC, Travel XP, NDTV Good Times, Food Food
      • -
      • Kids - Cartoon Network, Pogo, Nickelodeon, Sony Yay, Discovery Kids
      • -
      • Lifestyle - Zoom, NDTV 24x7, WION, Fashion TV
      • -
      -

      Besides these channels, you can also watch exclusive content from Jio Cinema and Jio Saavn on Jio TV Live. You can also access regional channels and content from different states of India.

      -


      -

      How to Download and Install Jio TV Live APK on Windows 7

      -

      As mentioned earlier, Jio TV Live is not officially available for Windows devices. However, you can still download and install it on your Windows 7 PC or laptop by using one of the following methods:

      -

      Method 1: Using Bluestacks Android Emulator

      -

      Bluestacks is a popular Android emulator that allows you to run Android apps and games on your PC. You can use it to download and install Jio TV Live APK on your Windows 7 PC. Here are the steps to follow:

      -
        -
      1. Download and install Bluestacks from its official website here.
      2. -
      3. Launch Bluestacks and sign in with your Google account.
      4. -
      5. Download the Jio TV Live APK file from a trusted source here.
      6. -
      7. Locate the downloaded APK file on your PC and right-click on it.
      8. -
      9. Select "Open with" and choose "Bluestacks" from the list of options.
      10. -
      11. Wait for Bluestacks to install the Jio TV Live app on your PC.
      12. -
      13. Once installed, you can find the Jio TV Live app icon on the Bluestacks home screen.
      14. -
      15. Click on the icon and sign in with your Jio SIM card number and password.
      16. -
      17. Enjoy watching live TV channels on Jio TV Live app on your PC.
      18. -
      -

      Method 2: Using Amazon Appstore and Windows Subsystem for Android

      -

      If you don't want to use an Android emulator, you can also use the Amazon Appstore and the Windows Subsystem for Android (WSA) to download and install Jio TV Live APK on your Windows 7 PC. WSA is a feature that allows you to run Android apps natively on your PC. However, you will need to have Windows 11 installed on your PC to use this method. Here are the steps to follow:

      -
        -
      1. Download and install the Amazon Appstore from its official website here.
      2. -
      3. Launch the Amazon Appstore and sign in with your Amazon account.
      4. -
      5. Search for "Jio TV Live" in the search bar and click on the app icon.
      6. -
      7. Click on "Get" and wait for the app to download and install on your PC.
      8. -
      9. Once installed, you can find the Jio TV Live app icon on your Start menu or desktop.
      10. -
      11. Click on the icon and sign in with your Jio SIM card number and password.
      12. -
      13. Enjoy watching live TV channels on Jio TV Live app on your PC.
      14. -
      -

      How to Watch Live TV Channels on Jio TV Live App

      -

      Once you have downloaded and installed the Jio TV Live app on your PC, you can start watching live TV channels on it. Here are some tips to help you watch live TV channels on Jio TV Live app:

      -

      Browse by Category, Language, or Genre

      -

      You can browse the live TV channels by category, language, or genre. You can use the tabs at the top of the app screen to switch between different categories. You can also use the filters at the bottom of the app screen to select a language or a genre. You can also use the search bar at the top right corner of the app screen to find a specific channel or program.

      -

      Use Catch-up TV, Pause and Play, and Chromecast Features

      -

      You can use the catch-up TV feature to watch shows that you have missed in the past seven days. You can access this feature by clicking on the "Catch Up" button at the bottom right corner of the app screen. You can also pause and play live TV channels at your convenience. You can do this by clicking on the " "Pause" button at the bottom center of the app screen. You can also rewind and fast-forward the content by using the slider at the bottom of the app screen. You can also use the Chromecast feature to cast the content from your PC to your TV screen. You can do this by clicking on the "Cast" button at the top right corner of the app screen and selecting your TV device.

      -

      Alternatives to Jio TV Live for Windows 7 Users

      -

      If you are not satisfied with Jio TV Live or you want to try some other options, you can also check out these alternatives to Jio TV Live for Windows 7 users:

      -

      Airtel Xstream TV

      -

      Airtel Xstream TV is another Android app that lets you watch live TV channels, movies, shows, and more on your PC. It is similar to Jio TV Live, but it is available for Airtel SIM card users. You can download and install it on your PC using the same methods as Jio TV Live. You can access over 350 live TV channels in 13 languages, along with exclusive content from Airtel Xstream, Eros Now, Hungama Play, Shemaroo Me, and more.

      -

      Kodi with BotAllen Repository

      -

      Kodi is a free and open-source media player software that allows you to stream various types of content on your PC. You can use it to watch live TV channels, movies, shows, sports, news, and more from different sources. However, you will need to install some add-ons and repositories to access the content. One of the best repositories for Indian content is BotAllen Repository, which offers over 600 live TV channels in 15 languages, along with movies, shows, sports, news, and more. You can download and install Kodi from its official website here and follow the instructions here to install BotAllen Repository.

      -

      Conclusion

      -

      In this article, we have shown you how to download and use Jio TV Live on Windows 7. We have also told you about the features and benefits of Jio TV Live, the channels and content you can watch, and some alternatives you can try. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

      -

      FAQs

      -

      Here are some frequently asked questions about Jio TV Live for Windows 7:

      - - - - - - -
      Q: Is Jio TV Live free?A: Yes, Jio TV Live is free for Jio SIM card users. You don't need to pay any extra charges or subscription fees to use it.
      Q: Do I need a Jio SIM card to use Jio TV Live?A: Yes, you need a Jio SIM card to use Jio TV Live. You also need to sign in with your Jio SIM card number and password.
      Q: Can I use Jio TV Live on other devices?A: Yes, you can use Jio TV Live on other devices, such as Android phones, tablets, smart TVs, etc. However, you will need to download and install the app from the Google Play Store or other sources.
      Q: Can I watch HD channels on Jio TV Live?A: Yes, you can watch HD channels on Jio TV Live if your device and internet connection support it. However, you may need to adjust the video quality settings according to your preference and bandwidth.
      Q: Can I record shows on Jio TV Live?A: No, you cannot record shows on Jio TV Live. However, you can use the catch-up TV feature to watch shows that you have missed in the past seven days.

      -
      -
      \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/APKPure.com Talking Tom Gold Run Free Download for Android.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/APKPure.com Talking Tom Gold Run Free Download for Android.md deleted file mode 100644 index 63ab4940bdc8ceb76d0345a0cca58d3b11e0c011..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/APKPure.com Talking Tom Gold Run Free Download for Android.md +++ /dev/null @@ -1,131 +0,0 @@ -
      -

      Talking Tom Gold Run: A Fun and Exciting Game for All Ages

      -

      If you are looking for a game that is fun, exciting, and suitable for all ages, you might want to check out Talking Tom Gold Run. This is a popular game from Outfit7, the creators of the famous Talking Tom and Friends series. In this game, you will help Talking Tom and his friends chase down a pesky raccoon who stole their gold. Along the way, you will explore different worlds, collect coins and gems, unlock new characters and outfits, and enjoy various power-ups and surprises. In this article, we will tell you more about what Talking Tom Gold Run is, how to play it, and how to download it from APKPure.

      -

      talking tom gold run download apkpure


      Download Zip ►►►►► https://gohhs.com/2uPu9r



      -

      What is Talking Tom Gold Run?

      -

Talking Tom Gold Run is an endless runner game that was released in 2016. It has since become one of the most downloaded games on Google Play and the App Store, with over 500 million downloads. It has also received positive reviews from critics and users alike, earning an Editors' Choice award on Google Play and a 4.5-star rating on the App Store.

      -

      The story and the gameplay

      -

      The game starts with a cutscene where a raccoon named Roy Rakoon breaks into Talking Tom's house and steals his gold. Tom then chases after him, followed by his friends Angela, Ginger, Ben, and Hank. The game then switches to the gameplay mode, where you control one of the characters as they run after Roy Rakoon. You can swipe left or right to change lanes, swipe up to jump over obstacles, swipe down to slide under barriers, and tap to activate power-ups. You can also collect coins and gems along the way, which you can use to upgrade your character's house or buy new outfits. The game has different worlds that you can unlock as you progress, such as the city, the farm, the beach, the snow, the desert, and more. Each world has its own theme, design, music, and challenges.

      -

      The characters and the worlds

      -

      One of the best things about Talking Tom Gold Run is that you can play as different characters from the Talking Tom and Friends series. You can start with Talking Tom, but you can also unlock Talking Angela, Ginger, Ben, Hank, Becca, Officer Tom, Fireman Tom, Super Tom, Super Angela, Pirate Ginger, Astronaut Ben, Agent Hank, Raccoon Robber Roy Rakoon (yes, you can play as the villain too!), and more. Each character has their own personality, voice, house, outfit, and special ability. For example, Talking Angela can use her charm to attract more coins, Ginger can use his skateboard to glide faster, Ben can use his jetpack to fly over obstacles, Hank can use his magnet to collect more gems, etc.

      -

      The game also has different worlds that you can explore as you chase Roy Rakoon. Each world has its own theme, design, music , and challenges. For example, the city world has cars, buses, trains, and traffic cones that you have to avoid, the farm world has cows, pigs, chickens, and hay bales that you have to dodge, the beach world has surfboards, sandcastles, and crabs that you have to jump over, the snow world has snowmen, icebergs, and penguins that you have to slide under, the desert world has cacti, snakes, and scorpions that you have to steer clear of, and so on. Each world also has its own boss level, where you have to face Roy Rakoon in a final showdown.

      -

      The features and the benefits

      -

      Talking Tom Gold Run is not just a simple running game. It also has many features and benefits that make it more fun and rewarding. Some of these features and benefits are:

      -


      -
      • You can build and customize your own dream house for each character. You can choose from different styles, colors, furniture, decorations, and more. You can also see your house grow as you upgrade it with the coins and gems you collect.
      • You can enjoy various power-ups and surprises that can help you run faster, longer, and better. You can use rockets, magnets, helmets, hoverboards, planes, balloons, and more. You can also find chests, vaults, safes, and mystery boxes that contain extra coins, gems, or tickets.
      • You can participate in special events and missions that offer more challenges and rewards. You can join seasonal events such as Halloween, Christmas, Valentine's Day, Easter, etc. You can also complete daily missions and achievements that give you bonus coins and gems.
      • You can watch videos of Talking Tom and Friends in the game. You can access the video player from the main menu and watch funny clips of your favorite characters. You can also earn coins and gems by watching ads or trailers of other games.
      • You can connect with your friends and other players around the world. You can link your game to your Facebook account and see how your friends are doing on the leaderboard. You can also compete with other players in the global ranking and try to beat their high scores.

      How to download Talking Tom Gold Run from APKPure?

      -

      If you want to download Talking Tom Gold Run on your Android device, you have two options: you can either download it from Google Play or from APKPure. APKPure is a third-party app store that offers free APK files of various apps and games. APK stands for Android Package Kit, which is a file format that contains all the elements of an app or game. In this section, we will explain what APKPure is, why you might want to use it, how to download Talking Tom Gold Run from it, and what are the advantages and disadvantages of using it.

      -

      What is APKPure and why use it?

      -

      APKPure is a website and an app that allows you to download APK files of apps and games that are not available on Google Play or are region-locked. For example, some apps and games might be banned in your country due to legal or political reasons, or they might be exclusive to certain regions or devices. With APKPure, you can bypass these restrictions and access any app or game you want. You can also download older versions of apps and games if you prefer them over the newer ones.

      -

      Some reasons why you might want to use APKPure are:

      -
      • You want to play Talking Tom Gold Run but it is not available on Google Play in your country or region.
      • You want to play Talking Tom Gold Run but your device is not compatible with Google Play or does not have enough storage space.
      • You want to play Talking Tom Gold Run but you don't have a Google account or don't want to sign in with one.
      • You want to play Talking Tom Gold Run but you don't have a stable internet connection or don't want to use mobile data.
      • You want to play Talking Tom Gold Run but you want to try a different version or mod of the game.

      The steps to download and install the game

      -

      If you decide to download Talking Tom Gold Run from APKPure, you need to follow these steps:

      -
      1. Go to the APKPure website or app and search for Talking Tom Gold Run. You can also use this link: [Talking Tom Gold Run for Android - APK Download].
      2. Choose the version or mod of the game that you want to download. You can see the details, screenshots, ratings, and reviews of each version or mod on the page.
      3. Click on the download button and wait for the APK file to be downloaded on your device. You might need to enable the option to download files from unknown sources in your device settings.
      4. Once the download is complete, locate the APK file on your device and tap on it to install it. You might need to grant some permissions to the app during the installation process. If you prefer installing from a computer, see the sketch after these steps.
      5. After the installation is done, you can launch the game and enjoy playing Talking Tom Gold Run.
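      If you would rather push the APK to your phone from a computer instead of tapping it on the device, a minimal Python sketch is shown below. It is only an illustration: it assumes the Android platform tools (adb) are installed and on your PATH, USB debugging is enabled on the phone, and the APK has been saved locally under a hypothetical file name.

```python
# Minimal sketch: sideload an APK with adb from a computer.
# The file name below is a placeholder, not part of the original guide.
import subprocess
import sys


def install_apk(apk_path: str) -> None:
    """Install an APK on a USB-connected Android device via adb."""
    # "-r" reinstalls the app if it is already present, keeping its data.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        sys.exit(f"adb install failed: {result.stderr.strip()}")
    print(result.stdout.strip())


if __name__ == "__main__":
    install_apk("talking-tom-gold-run.apk")  # assumed local file name
```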

      The advantages and disadvantages of using APKPure

      -

      Using APKPure to download Talking Tom Gold Run has some advantages and disadvantages that you should be aware of. Here are some of them:

      | Advantages | Disadvantages |
      | --- | --- |
      | You can access apps and games that are not available on Google Play or are region-locked. | You might encounter some compatibility or security issues with some apps and games. |
      | You can download older versions or mods of apps and games that offer different features or experiences. | You might miss out on some updates or bug fixes that are available on Google Play. |
      | You can download apps and games without signing in with a Google account or using mobile data. | You might not be able to sync your progress or data with other devices or platforms. |
      | You can enjoy a fast and easy downloading and installing process with APKPure. | You might need to change some settings or permissions on your device to use APKPure. |
      -

      Conclusion

      -

      Summary of the main points

      -

      Talking Tom Gold Run is a fun and exciting game for all ages that lets you run, jump, slide, and fly with Talking Tom and his friends as they chase a raccoon who stole their gold. You can explore different worlds, collect coins and gems, build your dream house, unlock new characters and outfits, use power-ups and surprises, join special events and missions, watch videos, and compete with your friends and other players. You can download Talking Tom Gold Run from Google Play or from APKPure, a third-party app store that offers free APK files of various apps and games. APKPure has some advantages and disadvantages that you should consider before using it.

      -

      Call to action

      -

      If you are ready to join Talking Tom and his friends in their gold run adventure, you can download Talking Tom Gold Run from APKPure today. Just follow the steps we mentioned above and you will be able to enjoy this amazing game on your Android device. Don't forget to share your feedback and experience with us in the comments section below. We would love to hear from you!

      -

      FAQs

      -
      • Is Talking Tom Gold Run free?

        Yes, Talking Tom Gold Run is free to download and play. However, it contains ads, which you can disable, and in-app purchases, which you can buy with real money if you wish.

      • Is Talking Tom Gold Run safe?

        Yes, Talking Tom Gold Run is safe to play. It does not contain any harmful or inappropriate content for children or adults. However, you should be careful when downloading it from third-party sources such as APKPure, as they might not be verified or secure.

      • Is Talking Tom Gold Run offline?

        No, Talking Tom Gold Run requires an internet connection to play. You need to be online to access some features such as events, missions, videos, leaderboards, etc.

      • How do I update Talking Tom Gold Run?

        If you downloaded Talking Tom Gold Run from Google Play, you can update it automatically or manually from there. If you downloaded it from APKPure, you need to check for updates on their website or app and download the latest version of the game.

      • How do I contact Talking Tom Gold Run support?

        If you have any questions, problems, or suggestions regarding Talking Tom Gold Run, you can contact their support team by emailing them at support@outfit7.com or by filling out this form: [Contact Us - Outfit7]. You can also visit their website: [Outfit7 - Talking Tom Gold Run] or their Facebook page: [Talking Tom Gold Run - Home | Facebook] for more information and updates.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Music from www.music123.com and Other Sites.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Music from www.music123.com and Other Sites.md deleted file mode 100644 index 4574930717ef466c6542508d3c4b675620c63eb8..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Music from www.music123.com and Other Sites.md +++ /dev/null @@ -1,95 +0,0 @@ -
        -

        www.music123.com Free Download: How to Get Legal Music for Free

        -

        Introduction

        -

        If you are a music lover, you probably know how expensive it can be to buy music online or offline. You may also be aware of the risks of downloading music illegally, such as viruses, malware, lawsuits, and fines. But what if there was a way to get free music legally, without compromising your safety or quality?

        -

        www.music123.com free download


        DOWNLOAD: https://gohhs.com/2uPuHg



        -

        Well, there is! In this article, we will show you how to download music legally from www.music123.com, a website that offers thousands of songs for free. We will also explain why downloading music legally is important, and how listening to music can benefit your health and happiness.

        -

        How to download music legally from www.music123.com

        -

        Downloading music legally from www.music123.com is easy and fast. All you need is a device with an internet connection, a web browser, and a free account. Here are the steps you need to follow:

        -

        Step 1: Create a free account

        -

        To access the free music downloads on www.music123.com, you need to create a free account on the website. To do this, go to www.music123.com and click on the "Sign Up" button at the top right corner of the page. You can sign up with your email address or your Facebook account. Once you sign up, you will receive a confirmation email with a link to activate your account.

        -

        www.music123.com free download mp3
        -www.music123.com free download songs
        -www.music123.com free download albums
        -www.music123.com free download music online
        -www.music123.com free download hip hop
        -www.music123.com free download rap
        -www.music123.com free download rock
        -www.music123.com free download pop
        -www.music123.com free download jazz
        -www.music123.com free download blues
        -www.music123.com free download country
        -www.music123.com free download classical
        -www.music123.com free download reggae
        -www.music123.com free download metal
        -www.music123.com free download indie
        -www.music123.com free download r&b
        -www.music123.com free download soul
        -www.music123.com free download funk
        -www.music123.com free download edm
        -www.music123.com free download house
        -www.music123.com free download techno
        -www.music123.com free download trance
        -www.music123.com free download dubstep
        -www.music123.com free download drum and bass
        -www.music123.com free download ambient
        -www.music123.com free download new age
        -www.music123.com free download world music
        -www.music123.com free download folk music
        -www.music123.com free download gospel music
        -www.music123.com free download instrumental music
        -www.music123.com free download soundtrack music
        -www.music123.com free download movie music
        -www.music123.com free download game music
        -www.music123.com free download anime music
        -www.music123.com free download kids music
        -www.music123.com free download nursery rhymes
        -www.music123.com free download meditation music
        -www.music123.com free download relaxation music
        -www.music123.com free download yoga music
        -www.music123.com free download spa music
        -www.music123.com free download workout music
        -www.music123.com free download fitness music
        -www.music123.com free download dance music
        -www.music123.com free download karaoke music
        -www.music123.com free download background music
        -www.music123.com free download royalty-free music
        -www.music123.com free download creative commons music
        -www.music123.com free download legal music
        -www.music123.com free download safe music

        -

        Step 2: Browse the music catalog

        -

        Once you have activated your account, you can start browsing the music catalog on www.music123.com. You can search for songs by artist, album, genre, or keyword. You can also browse by categories such as New Releases, Top Songs, Featured Artists, and more. You can listen to any song online before downloading it by clicking on the play button.

        -

        Step 3: Choose the songs you want to download

        -

        When you find a song that you like, you can download it for free by clicking on the "Download" button next to it. You can download as many songs as you want, as long as they are marked as "Free Download". Some songs may require you to pay a small fee or share them on social media before downloading them. You can see the price or the sharing option next to the download button.

        -

        Step 4: Add the songs to your cart and check out

        -

        After you have chosen all the songs that you want to download, you need to add them to your cart by clicking on the "Add to Cart" button at the bottom of the page. You can review your cart by clicking on the "Cart" icon at the top right corner of the page. You can remove any song from your cart by clicking on the "X" button next to it.

        -

        To complete your download, you need to check out by clicking on the "Checkout" button at the bottom of your cart. You will be asked to enter your billing information if you have any paid songs in your cart. You can pay with your credit card or PayPal account. If all your songs are free, you can skip this step and proceed to download.

        -

        Step 5: Download the songs to your device

        -

        After you have checked out, you will receive an email with a link to download your songs. You can also access your downloads by clicking on the "Downloads" icon at the top right corner of the page. You can download your songs to your device by clicking on the "Download" button next to each song. You can choose the format and quality of your download, such as MP3, WAV, or FLAC. You can also download all your songs in a ZIP file by clicking on the "Download All" button at the bottom of the page.
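        If you use the "Download All" option, a small Python sketch like the one below can unpack the archive into a local music folder. The file name songs.zip and the target folder are assumptions for illustration only; the site itself does not prescribe them.

```python
# Minimal sketch: unpack the "Download All" ZIP into a local music folder.
# File and folder names here are placeholders, not values from the website.
import zipfile
from pathlib import Path


def unpack_downloads(zip_path: str, target_dir: str = "music123_downloads") -> None:
    """Extract every track from the downloaded archive into target_dir."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(target)
        print(f"Extracted {len(archive.namelist())} files to {target.resolve()}")


if __name__ == "__main__":
    unpack_downloads("songs.zip")  # assumed name of the "Download All" archive
```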

        -

        Congratulations! You have successfully downloaded free music legally from www.music123.com. You can now enjoy your music offline on any device that supports the chosen format.

        -

        Benefits of listening to music

        -

        Listening to music is not only fun, but also good for your health and happiness. Here are some of the benefits of listening to music:

        -

        Music connects us

        -

        Music is a universal language that can transcend barriers and bring people together. Music can express emotions, thoughts, and stories that words cannot. Music can also create a sense of belonging and identity, as we relate to the artists and the songs that resonate with us. Music can also foster social interactions, as we share our musical tastes and preferences with others, or enjoy music together in concerts, festivals, or parties.

        -

        Music improves our mood and well-being

        -

        Music can have a powerful impact on our mood and well-being. Music can make us feel happy, relaxed, energized, motivated, or inspired. Music can also help us cope with stress, anxiety, depression, or pain. Music can also boost our self-esteem and confidence, as we sing along, dance, or play an instrument. Music can also enhance our creativity and productivity, as we listen to music while working, studying, or doing other tasks.

        -

        Music enhances our learning and memory

        -

        Music can also improve our learning and memory skills. Music can stimulate our brain and enhance our cognitive functions, such as attention, concentration, memory, reasoning, and problem-solving. Music can also help us learn new languages, as we listen to songs in different languages and learn new words and phrases. Music can also help us remember information better, as we associate it with melodies, rhythms, or lyrics.

        -

        Conclusion

        -

        In conclusion, www.music123.com is a great website that offers free music downloads legally. You can download thousands of songs for free by following a few simple steps. You can also enjoy the benefits of listening to music, such as connecting with others, improving your mood and well-being, and enhancing your learning and memory. So what are you waiting for? Go to www.music123.com and start downloading your favorite songs today!

        -

        FAQs

        -

        Here are some frequently asked questions about www.music123.com free download:

        -
        • Is www.music123.com safe?

          Yes, www.music123.com is safe and secure. The website uses SSL encryption to protect your personal and payment information. The website also scans all the songs for viruses and malware before uploading them to the catalog. The website also respects the rights of the artists and the labels and pays them royalties for every download.

        • What kind of music can I find on www.music123.com?

          You can find all kinds of music on www.music123.com, from pop, rock, hip hop, R&B, country, jazz, classical, to indie, alternative, electronic, and more. You can also find music from different countries and cultures, such as Latin, Asian, African, European, and more. You can also find music from different eras, such as 60s, 70s, 80s, 90s, and more. You can also find music from different genres and subgenres, such as metal, punk, reggae, blues, soul, and more.

        • How can I support the artists and the website?

          You can support the artists and the website by sharing the songs that you download on social media, such as Facebook, Twitter, Instagram, or YouTube. You can also leave positive feedback and ratings for the songs that you like. You can also donate to the website or the artists if you want to show your appreciation. You can also buy merchandise or tickets from the website or the artists if you want to support them further.

        • Can I upload my own music to www.music123.com?

          Yes, you can upload your own music to www.music123.com if you are an independent artist or a label. You can create a free account as an artist or a label and upload your songs to the website. You can also set your own price or offer your songs for free download. You can also promote your music on the website and reach a global audience. You can also earn royalties for every download of your songs.

        • Can I use the music that I download from www.music123.com for other purposes?

          You can use the music that you download from www.music123.com for personal and non-commercial purposes only. You cannot use the music for commercial purposes, such as advertising, marketing, or selling. You cannot use the music for public performance, broadcasting, or streaming. You cannot modify, remix, or edit the music without the permission of the artists or the labels. You cannot distribute, share, or upload the music to other websites or platforms without the permission of the artists or the labels.

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/musicgen.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/musicgen.py deleted file mode 100644 index c3feb18d95c3915dae0074aacd1d4c980c1bb0e0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/musicgen.py +++ /dev/null @@ -1,283 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import os -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel): - self.name = name - self.compression_model = compression_model - self.lm = lm - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> int: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'melody', device='cuda'): - """Return pretrained model, we provide four models: - - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small - - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium - - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody - - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large - """ - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm) - - if name not in HF_MODEL_CHECKPOINTS_MAP: - raise ValueError( - f"{name} is not a valid checkpoint name. 
" - f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}" - ) - - cache_dir = os.environ.get('MUSICGEN_ROOT', None) - compression_model = load_compression_model(name, device=device, cache_dir=cache_dir) - lm = load_lm_model(name, device=device, cache_dir=cache_dir) - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - """ - assert duration <= 30, "The MusicGen cannot generate more than 30 seconds" - self.generation_params = { - 'max_gen_len': int(duration * self.frame_rate), - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor: - """Generate samples in an unconditional manner. - - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, - melody_sample_rate: int, progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text and melody. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. 
- """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - if self.name != "melody": - raise RuntimeError("This model doesn't support melody conditioning. " - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! 
" \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - attr.wav['self_wav'] = WavCondition( - melody.to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device)) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody). - prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. - """ - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - print(f'{generated_tokens: 6d} / {tokens_to_generate: 6d}', end='\r') - - if prompt_tokens is not None: - assert self.generation_params['max_gen_len'] > prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - # generate by sampling from LM - with self.autocast: - gen_tokens = self.lm.generate(prompt_tokens, attributes, callback=callback, **self.generation_params) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/function-bind/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/function-bind/index.js deleted file mode 100644 index 3bb6b9609889f8131b2d6732ff1606e01e1365b2..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/function-bind/index.js +++ /dev/null @@ -1,5 +0,0 @@ -'use strict'; - -var implementation = require('./implementation'); - -module.exports = Function.prototype.bind || implementation; diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/element.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/element.js deleted file mode 100644 index 47fa9e240029eb6fa2906727e20ec843fc3b6308..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/element.js +++ /dev/null @@ -1,53 +0,0 @@ -var inspect = require('../'); -var test = require('tape'); - -test('element', function (t) { - t.plan(3); - var elem = { - nodeName: 'div', - attributes: [{ name: 'class', value: 'row' }], - getAttribute: function (key) { return key; }, - childNodes: [] - }; - var obj = [1, elem, 3]; - t.deepEqual(inspect(obj), '[ 1,
        , 3 ]'); - t.deepEqual(inspect(obj, { quoteStyle: 'single' }), "[ 1,
        , 3 ]"); - t.deepEqual(inspect(obj, { quoteStyle: 'double' }), '[ 1,
        , 3 ]'); -}); - -test('element no attr', function (t) { - t.plan(1); - var elem = { - nodeName: 'div', - getAttribute: function (key) { return key; }, - childNodes: [] - }; - var obj = [1, elem, 3]; - t.deepEqual(inspect(obj), '[ 1,
        , 3 ]'); -}); - -test('element with contents', function (t) { - t.plan(1); - var elem = { - nodeName: 'div', - getAttribute: function (key) { return key; }, - childNodes: [{ nodeName: 'b' }] - }; - var obj = [1, elem, 3]; - t.deepEqual(inspect(obj), '[ 1,
        ...
        , 3 ]'); -}); - -test('element instance', function (t) { - t.plan(1); - var h = global.HTMLElement; - global.HTMLElement = function (name, attr) { - this.nodeName = name; - this.attributes = attr; - }; - global.HTMLElement.prototype.getAttribute = function () {}; - - var elem = new global.HTMLElement('div', []); - var obj = [1, elem, 3]; - t.deepEqual(inspect(obj), '[ 1,
        , 3 ]'); - global.HTMLElement = h; -}); diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/lowbyte.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/lowbyte.js deleted file mode 100644 index 68a345d8578004506a45296d6de9e091b7811272..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/lowbyte.js +++ /dev/null @@ -1,12 +0,0 @@ -var test = require('tape'); -var inspect = require('../'); - -var obj = { x: 'a\r\nb', y: '\x05! \x1f \x12' }; - -test('interpolate low bytes', function (t) { - t.plan(1); - t.equal( - inspect(obj), - "{ x: 'a\\r\\nb', y: '\\x05! \\x1F \\x12' }" - ); -}); diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/readme.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/readme.md deleted file mode 100644 index 0fc1abb3b8e30a3ab97023d243127c75b1b3a4d7..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/send/node_modules/ms/readme.md +++ /dev/null @@ -1,59 +0,0 @@ -# ms - -![CI](https://github.com/vercel/ms/workflows/CI/badge.svg) - -Use this package to easily convert various time formats to milliseconds. - -## Examples - -```js -ms('2 days') // 172800000 -ms('1d') // 86400000 -ms('10h') // 36000000 -ms('2.5 hrs') // 9000000 -ms('2h') // 7200000 -ms('1m') // 60000 -ms('5s') // 5000 -ms('1y') // 31557600000 -ms('100') // 100 -ms('-3 days') // -259200000 -ms('-1h') // -3600000 -ms('-200') // -200 -``` - -### Convert from Milliseconds - -```js -ms(60000) // "1m" -ms(2 * 60000) // "2m" -ms(-3 * 60000) // "-3m" -ms(ms('10 hours')) // "10h" -``` - -### Time Format Written-Out - -```js -ms(60000, { long: true }) // "1 minute" -ms(2 * 60000, { long: true }) // "2 minutes" -ms(-3 * 60000, { long: true }) // "-3 minutes" -ms(ms('10 hours'), { long: true }) // "10 hours" -``` - -## Features - -- Works both in [Node.js](https://nodejs.org) and in the browser -- If a number is supplied to `ms`, a string with a unit is returned -- If a string that contains the number is supplied, it returns it as a number (e.g.: it returns `100` for `'100'`) -- If you pass a string with a number and a valid unit, the number of equivalent milliseconds is returned - -## Related Packages - -- [ms.macro](https://github.com/knpwrs/ms.macro) - Run `ms` as a macro at build-time. - -## Caught a Bug? - -1. [Fork](https://help.github.com/articles/fork-a-repo/) this repository to your own GitHub account and then [clone](https://help.github.com/articles/cloning-a-repository/) it to your local device -2. Link the package to the global module directory: `npm link` -3. Within the module you want to test your local development instance of ms, just link it to the dependencies: `npm link ms`. Instead of the default one from npm, Node.js will now use your clone of ms! - -As always, you can run the tests using: `npm test` diff --git a/spaces/fiyen/YangyangChatGPT/run_Windows.bat b/spaces/fiyen/YangyangChatGPT/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/fiyen/YangyangChatGPT/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... 
- -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/fiz123321/dumbcutie/greeting.md b/spaces/fiz123321/dumbcutie/greeting.md deleted file mode 100644 index ecaea55291807eb2c9047454efb23715993abde9..0000000000000000000000000000000000000000 --- a/spaces/fiz123321/dumbcutie/greeting.md +++ /dev/null @@ -1 +0,0 @@ -turbo is a cutie \ No newline at end of file diff --git "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/fkhuggingme/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index 7c6a7ffb5cb2c42e6543c75d6ad9dd643f412cd9..0000000000000000000000000000000000000000 --- "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,29 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - for i in range(5): - currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month - currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day - i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=[], - sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? 
< PUT_YOUR_QUERY_HERE >)。" - ) - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/stream_audio/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/stream_audio/run.py deleted file mode 100644 index 8fcd3c2affc12c6e60c58ac3a9ea56c70c243f3a..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/stream_audio/run.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -import numpy as np - -with gr.Blocks() as demo: - inp = gr.Audio(source="microphone") - out = gr.Audio() - stream = gr.Variable() - - def add_to_stream(audio, instream): - if audio is None: - return gr.update(), instream - if instream is None: - ret = audio - else: - ret = (audio[0], np.concatenate((instream[1], audio[1]))) - return ret, ret - inp.stream(add_to_stream, [inp, stream], [out, stream]) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/genevera/AudioToken/modules/fga/atten.py b/spaces/genevera/AudioToken/modules/fga/atten.py deleted file mode 100644 index 701a29f7efb0ed3e2d73d78016342b8a36f57e16..0000000000000000000000000000000000000000 --- a/spaces/genevera/AudioToken/modules/fga/atten.py +++ /dev/null @@ -1,303 +0,0 @@ -#!/usr/bin/env python -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Variable -from itertools import product, permutations, combinations_with_replacement, chain - - -class Unary(nn.Module): - def __init__(self, embed_size): - """ - Captures local entity information - :param embed_size: the embedding dimension - """ - super(Unary, self).__init__() - self.embed = nn.Conv1d(embed_size, embed_size, 1) - self.feature_reduce = nn.Conv1d(embed_size, 1, 1) - - def forward(self, X): - X = X.transpose(1, 2) - - X_embed = self.embed(X) - - X_nl_embed = F.dropout(F.relu(X_embed)) - X_poten = self.feature_reduce(X_nl_embed) - return X_poten.squeeze(1) - - -class Pairwise(nn.Module): - def __init__(self, embed_x_size, x_spatial_dim=None, embed_y_size=None, y_spatial_dim=None): - """ - Captures interaction between utilities or entities of the same utility - :param embed_x_size: the embedding dimension of the first utility - :param x_spatial_dim: the spatial dimension of the first utility for batch norm and weighted marginalization - :param embed_y_size: the embedding dimension of the second utility (none for self-interactions) - :param y_spatial_dim: the spatial dimension of the second utility for batch norm and weighted marginalization - """ - - super(Pairwise, self).__init__() - embed_y_size = embed_y_size if y_spatial_dim is not None else embed_x_size - self.y_spatial_dim = y_spatial_dim if y_spatial_dim is not None else x_spatial_dim - - self.embed_size = max(embed_x_size, embed_y_size) - self.x_spatial_dim = x_spatial_dim - - self.embed_X = nn.Conv1d(embed_x_size, self.embed_size, 1) - self.embed_Y = nn.Conv1d(embed_y_size, self.embed_size, 1) - if x_spatial_dim is not None: - self.normalize_S = nn.BatchNorm1d(self.x_spatial_dim * self.y_spatial_dim) - - self.margin_X = nn.Conv1d(self.y_spatial_dim, 1, 1) - self.margin_Y = nn.Conv1d(self.x_spatial_dim, 1, 1) - - def forward(self, X, Y=None): - - X_t = X.transpose(1, 2) - Y_t = Y.transpose(1, 2) if Y is not None else X_t - - - X_embed = self.embed_X(X_t) - Y_embed = self.embed_Y(Y_t) - - X_norm = F.normalize(X_embed) - Y_norm = F.normalize(Y_embed) - - S = 
X_norm.transpose(1, 2).bmm(Y_norm) - if self.x_spatial_dim is not None: - S = self.normalize_S(S.view(-1, self.x_spatial_dim * self.y_spatial_dim)) \ - .view(-1, self.x_spatial_dim, self.y_spatial_dim) - - X_poten = self.margin_X(S.transpose(1, 2)).transpose(1, 2).squeeze(2) - Y_poten = self.margin_Y(S).transpose(1, 2).squeeze(2) - else: - X_poten = S.mean(dim=2, keepdim=False) - Y_poten = S.mean(dim=1, keepdim=False) - - if Y is None: - return X_poten - else: - return X_poten, Y_poten - - -class Atten(nn.Module): - def __init__(self, util_e, sharing_factor_weights=[], prior_flag=False, - sizes=[], size_force=False, pairwise_flag=True, - unary_flag=True, self_flag=True): - """ - The class performs an attention on a given list of utilities representation. - :param util_e: the embedding dimensions - :param sharing_factor_weights: To share weights, provide a dict of tuples: - {idx: (num_utils, connected utils) - Note, for efficiency, the shared utils (i.e., history, are connected to ans - and question only. - TODO: connections between shared utils - :param prior_flag: is prior factor provided - :param sizes: the spatial simension (used for batch-norm and weighted marginalization) - :param size_force: force spatial size with adaptive avg pooling. - :param pairwise_flag: use pairwise interaction between utilities - :param unary_flag: use local information - :param self_flag: use self interactions between utilitie's entities - """ - super(Atten, self).__init__() - self.util_e = util_e - - self.prior_flag = prior_flag - - self.n_utils = len(util_e) - - self.spatial_pool = nn.ModuleDict() - - self.un_models = nn.ModuleList() - - self.self_flag = self_flag - self.pairwise_flag = pairwise_flag - self.unary_flag = unary_flag - self.size_force = size_force - - if len(sizes) == 0: - sizes = [None for _ in util_e] - - self.sharing_factor_weights = sharing_factor_weights - - #force the provided size - for idx, e_dim in enumerate(util_e): - self.un_models.append(Unary(e_dim)) - if self.size_force: - self.spatial_pool[str(idx)] = nn.AdaptiveAvgPool1d(sizes[idx]) - - #Pairwise - self.pp_models = nn.ModuleDict() - for ((idx1, e_dim_1), (idx2, e_dim_2)) \ - in combinations_with_replacement(enumerate(util_e), 2): - # self - if self.self_flag and idx1 == idx2: - self.pp_models[str(idx1)] = Pairwise(e_dim_1, sizes[idx1]) - else: - if pairwise_flag: - if idx1 in self.sharing_factor_weights: - # not connected - if idx2 not in self.sharing_factor_weights[idx1][1]: - continue - if idx2 in self.sharing_factor_weights: - # not connected - if idx1 not in self.sharing_factor_weights[idx2][1]: - continue - self.pp_models[str((idx1, idx2))] = Pairwise(e_dim_1, sizes[idx1], e_dim_2, sizes[idx2]) - - # Handle reduce potentials (with scalars) - self.reduce_potentials = nn.ModuleList() - - self.num_of_potentials = dict() - - self.default_num_of_potentials = 0 - - if self.self_flag: - self.default_num_of_potentials += 1 - if self.unary_flag: - self.default_num_of_potentials += 1 - if self.prior_flag: - self.default_num_of_potentials += 1 - for idx in range(self.n_utils): - self.num_of_potentials[idx] = self.default_num_of_potentials - - ''' - All other utilities - ''' - if pairwise_flag: - for idx, (num_utils, connected_utils) in sharing_factor_weights: - for c_u in connected_utils: - self.num_of_potentials[c_u] += num_utils - self.num_of_potentials[idx] += 1 - for k in self.num_of_potentials: - if k not in self.sharing_factor_weights: - self.num_of_potentials[k] += (self.n_utils - 1) \ - - len(sharing_factor_weights) - - for 
idx in range(self.n_utils): - self.reduce_potentials.append(nn.Conv1d(self.num_of_potentials[idx], - 1, 1, bias=False)) - - def forward(self, utils, priors=None): - assert self.n_utils == len(utils) - assert (priors is None and not self.prior_flag) \ - or (priors is not None - and self.prior_flag - and len(priors) == self.n_utils) - b_size = utils[0].size(0) - util_factors = dict() - attention = list() - - #Force size, constant size is used for pairwise batch normalization - if self.size_force: - for i, (num_utils, _) in self.sharing_factor_weights.items(): - if str(i) not in self.spatial_pool.keys(): - continue - else: - high_util = utils[i] - high_util = high_util.view(num_utils * b_size, high_util.size(2), high_util.size(3)) - high_util = high_util.transpose(1, 2) - utils[i] = self.spatial_pool[str(i)](high_util).transpose(1, 2) - - for i in range(self.n_utils): - if i in self.sharing_factor_weights \ - or str(i) not in self.spatial_pool.keys(): - continue - utils[i] = utils[i].transpose(1, 2) - utils[i] = self.spatial_pool[str(i)](utils[i]).transpose(1, 2) - if self.prior_flag and priors[i] is not None: - priors[i] = self.spatial_pool[str(i)](priors[i].unsqueeze(1)).squeeze(1) - - # handle Shared weights - for i, (num_utils, connected_list) in self.sharing_factor_weights: - if self.unary_flag: - util_factors.setdefault(i, []).append(self.un_models[i](utils[i])) - - if self.self_flag: - util_factors.setdefault(i, []).append(self.pp_models[str(i)](utils[i])) - - if self.pairwise_flag: - for j in connected_list: - other_util = utils[j] - expanded_util = other_util.unsqueeze(1).expand(b_size, - num_utils, - other_util.size(1), - other_util.size(2)).contiguous().view( - b_size * num_utils, - other_util.size(1), - other_util.size(2)) - - if i < j: - factor_ij, factor_ji = self.pp_models[str((i, j))](utils[i], expanded_util) - else: - factor_ji, factor_ij = self.pp_models[str((j, i))](expanded_util, utils[i]) - util_factors[i].append(factor_ij) - util_factors.setdefault(j, []).append(factor_ji.view(b_size, num_utils, factor_ji.size(1))) - - # handle local factors - for i in range(self.n_utils): - if i in self.sharing_factor_weights: - continue - if self.unary_flag: - util_factors.setdefault(i, []).append(self.un_models[i](utils[i])) - if self.self_flag: - util_factors.setdefault(i, []).append(self.pp_models[str(i)](utils[i])) - - # joint - if self.pairwise_flag: - for (i, j) in combinations_with_replacement(range(self.n_utils), 2): - if i in self.sharing_factor_weights \ - or j in self.sharing_factor_weights: - continue - if i == j: - continue - else: - factor_ij, factor_ji = self.pp_models[str((i, j))](utils[i], utils[j]) - util_factors.setdefault(i, []).append(factor_ij) - util_factors.setdefault(j, []).append(factor_ji) - - # perform attention - for i in range(self.n_utils): - if self.prior_flag: - prior = priors[i] \ - if priors[i] is not None \ - else torch.zeros_like(util_factors[i][0], requires_grad=False).cuda() - - util_factors[i].append(prior) - - util_factors[i] = torch.cat([p if len(p.size()) == 3 else p.unsqueeze(1) - for p in util_factors[i]], dim=1) - util_factors[i] = self.reduce_potentials[i](util_factors[i]).squeeze(1) - util_factors[i] = F.softmax(util_factors[i], dim=1).unsqueeze(2) - attention.append(torch.bmm(utils[i].transpose(1, 2), util_factors[i]).squeeze(2)) - - return attention - - -class NaiveAttention(nn.Module): - def __init__(self): - """ - Used for ablation analysis - removing attention. 
- """ - super(NaiveAttention, self).__init__() - - def forward(self, utils, priors): - atten = [] - spatial_atten = [] - for u, p in zip(utils, priors): - if type(u) is tuple: - u = u[1] - num_elements = u.shape[0] - if p is not None: - u = u.view(-1, u.shape[-2], u.shape[-1]) - p = p.view(-1, p.shape[-2], p.shape[-1]) - spatial_atten.append( - torch.bmm(p.transpose(1, 2), u).squeeze(2).view(num_elements, -1, u.shape[-2], u.shape[-1])) - else: - spatial_atten.append(u.mean(2)) - continue - if p is not None: - atten.append(torch.bmm(u.transpose(1, 2), p.unsqueeze(2)).squeeze(2)) - else: - atten.append(u.mean(1)) - return atten, spatial_atten \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/sep_aspp_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/sep_aspp_head.py deleted file mode 100644 index 3339a7ac56e77dfc638e9bffb557d4699148686b..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/sep_aspp_head.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, DepthwiseSeparableConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .aspp_head import ASPPHead, ASPPModule - - -class DepthwiseSeparableASPPModule(ASPPModule): - """Atrous Spatial Pyramid Pooling (ASPP) Module with depthwise separable - conv.""" - - def __init__(self, **kwargs): - super(DepthwiseSeparableASPPModule, self).__init__(**kwargs) - for i, dilation in enumerate(self.dilations): - if dilation > 1: - self[i] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - 3, - dilation=dilation, - padding=dilation, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - -@HEADS.register_module() -class DepthwiseSeparableASPPHead(ASPPHead): - """Encoder-Decoder with Atrous Separable Convolution for Semantic Image - Segmentation. - - This head is the implementation of `DeepLabV3+ - `_. - - Args: - c1_in_channels (int): The input channels of c1 decoder. If is 0, - the no decoder will be used. - c1_channels (int): The intermediate channels of c1 decoder. 
- """ - - def __init__(self, c1_in_channels, c1_channels, **kwargs): - super(DepthwiseSeparableASPPHead, self).__init__(**kwargs) - assert c1_in_channels >= 0 - self.aspp_modules = DepthwiseSeparableASPPModule( - dilations=self.dilations, - in_channels=self.in_channels, - channels=self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if c1_in_channels > 0: - self.c1_bottleneck = ConvModule( - c1_in_channels, - c1_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - else: - self.c1_bottleneck = None - self.sep_bottleneck = nn.Sequential( - DepthwiseSeparableConvModule( - self.channels + c1_channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - DepthwiseSeparableConvModule( - self.channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - if self.c1_bottleneck is not None: - c1_output = self.c1_bottleneck(inputs[0]) - output = resize( - input=output, - size=c1_output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - output = torch.cat([output, c1_output], dim=1) - output = self.sep_bottleneck(output) - output = self.cls_seg(output) - return output diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/dpm_solver/sampler.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/dpm_solver/sampler.py deleted file mode 100644 index 558616526aea5f21cf6b34aaf04c7b068f4f7cca..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/dpm_solver/sampler.py +++ /dev/null @@ -1,87 +0,0 @@ -"""SAMPLING ONLY.""" -import torch - -from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver - - -MODEL_TYPES = { - "eps": "noise", - "v": "v" -} - - -class DPMSolverSampler(object): - def __init__(self, model, **kwargs): - super().__init__() - self.model = model - to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device) - self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod)) - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cpu"): - attr = attr.to(torch.device("cpu")) - setattr(self, name, attr) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - - print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}') - - device = self.model.betas.device - if x_T is None: - img = torch.randn(size, device=device) - else: - img = x_T - - ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod) - - model_fn = model_wrapper( - lambda x, t, c: self.model.apply_model(x, t, c), - ns, - model_type=MODEL_TYPES[self.model.parameterization], - guidance_type="classifier-free", - condition=conditioning, - unconditional_condition=unconditional_conditioning, - guidance_scale=unconditional_guidance_scale, - ) - - dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False) - x = dpm_solver.sample(img, steps=S, skip_type="time_uniform", method="multistep", order=2, lower_order_final=True) - - return x.to(device), None \ No newline at end of file diff --git a/spaces/gingerale/Gnomespace/reader.py b/spaces/gingerale/Gnomespace/reader.py deleted file mode 100644 index 81d2a64901bbfa3b1fe76831084ddb3b834a2244..0000000000000000000000000000000000000000 --- a/spaces/gingerale/Gnomespace/reader.py +++ /dev/null @@ -1,76 +0,0 @@ -import os -from yattag import Doc -## --------------------------------- ### -### reading: info.txt ### -### -------------------------------- ### -# placeholders in case info.txt does not exist -def get_article(): - filename = "info.txt" - placeholder = "please create an info.txt to customize this text" - - title = bkgd = data_collection = priv_cons = bias_cons = ident_cons = img_src = membs = description = placeholder - # check if info.txt is present - if os.path.isfile(filename): - # open info.txt in read mode - info = open(filename, "r") - - # read each line to a string - description = "An AI project created by " + info.readline() - title = info.readline() - bkgd = info.readline() - data_collection = info.readline() - priv_cons = info.readline() - bias_cons = info.readline() - ident_cons = info.readline() - img_src = info.readline() - membs = info.readline() - - # close file - info.close() - - # use yattag library to generate html - doc, tag, text, line = Doc().ttl() - # create html based on info.txt - with tag('div'): - with tag('div', klass='my-div'): - line('h2', 'Project Background') - line('p', bkgd) - with tag('div', klass='my-div'): - line('h2', 'Data Collection') - line('p', data_collection) - with tag('div', klass='my-div'): - line('h2', 'Ethical Considerations') - with tag('ul'): - line('li', priv_cons) - line('li', bias_cons) - line('li', ident_cons) - with tag('div', klass='my-div'): - line('h2', 'Our Team') - line('p', membs) - doc.stag('img', src=img_src) - - css = ''' - .my-div { - border: 2px solid black; - text-align: center; - margin: 10px; - padding: 5%; - } - ul { - display: inline-block; - text-align: left; - } - img { - display: block; - margin: auto; - } - .description { - text-align: center; - } - ''' - return { - 'article': doc.getvalue(), - 'css': css, - 'title': title, - 'description': description, - } \ No newline at end of file diff --git a/spaces/gligen/demo/gligen/evaluator.py b/spaces/gligen/demo/gligen/evaluator.py deleted 
file mode 100644 index afb61ec9aef76ef2654769c878bc233e4c805767..0000000000000000000000000000000000000000 --- a/spaces/gligen/demo/gligen/evaluator.py +++ /dev/null @@ -1,225 +0,0 @@ -import torch -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.models.diffusion.plms import PLMSSampler -from ldm.util import instantiate_from_config -import numpy as np -import random -from dataset.concat_dataset import ConCatDataset #, collate_fn -from torch.utils.data import DataLoader -from torch.utils.data.distributed import DistributedSampler -import os -from tqdm import tqdm -from distributed import get_rank, synchronize, get_world_size -from trainer import read_official_ckpt, batch_to_device, ImageCaptionSaver, wrap_loader #, get_padded_boxes -from PIL import Image -import math -import json - - -def draw_masks_from_boxes(boxes,size): - - image_masks = [] - for box in boxes: - image_mask = torch.ones(size[0],size[1]) - for bx in box: - x0, x1 = bx[0]*size[0], bx[2]*size[0] - y0, y1 = bx[1]*size[1], bx[3]*size[1] - image_mask[int(y0):int(y1), int(x0):int(x1)] = 0 - image_masks.append(image_mask) - return torch.stack(image_masks).unsqueeze(1) - - - -def set_alpha_scale(model, alpha_scale): - from ldm.modules.attention import GatedCrossAttentionDense, GatedSelfAttentionDense - for module in model.modules(): - if type(module) == GatedCrossAttentionDense or type(module) == GatedSelfAttentionDense: - module.scale = alpha_scale - # print("scale: ", alpha_scale) - # print("attn: ", module.alpha_attn) - # print("dense: ", module.alpha_dense) - # print(' ') - # print(' ') - - -def save_images(samples, image_ids, folder, to256): - for sample, image_id in zip(samples, image_ids): - sample = torch.clamp(sample, min=-1, max=1) * 0.5 + 0.5 - sample = sample.cpu().numpy().transpose(1,2,0) * 255 - img_name = str(int(image_id))+'.png' - img = Image.fromarray(sample.astype(np.uint8)) - if to256: - img = img.resize( (256,256), Image.BICUBIC) - img.save(os.path.join(folder,img_name)) - - -def ckpt_to_folder_name(basename): - name="" - for s in basename: - if s.isdigit(): - name+=s - seen = round( int(name)/1000, 1 ) - return str(seen).ljust(4,'0')+'k' - - -class Evaluator: - def __init__(self, config): - - self.config = config - self.device = torch.device("cuda") - - - # = = = = = create model and diffusion = = = = = # - if self.config.ckpt != "real": - - self.model = instantiate_from_config(config.model).to(self.device) - self.autoencoder = instantiate_from_config(config.autoencoder).to(self.device) - self.text_encoder = instantiate_from_config(config.text_encoder).to(self.device) - self.diffusion = instantiate_from_config(config.diffusion).to(self.device) - - # donot need to load official_ckpt for self.model here, since we will load from our ckpt - state_dict = read_official_ckpt( os.path.join(config.DATA_ROOT, config.official_ckpt_name) ) - self.autoencoder.load_state_dict( state_dict["autoencoder"] ) - self.text_encoder.load_state_dict( state_dict["text_encoder"] ) - self.diffusion.load_state_dict( state_dict["diffusion"] ) - - - # = = = = = load from our ckpt = = = = = # - if self.config.ckpt == "real": - print("Saving all real images...") - self.just_save_real = True - else: - checkpoint = torch.load(self.config.ckpt, map_location="cpu") - which_state = 'ema' if 'ema' in checkpoint else "model" - which_state = which_state if config.which_state is None else config.which_state - self.model.load_state_dict(checkpoint[which_state]) - print("ckpt is loaded") - self.just_save_real = False - 
set_alpha_scale(self.model, self.config.alpha_scale) - - self.autoencoder.eval() - self.model.eval() - self.text_encoder.eval() - - - # = = = = = create data = = = = = # - self.dataset_eval = ConCatDataset(config.val_dataset_names, config.DATA_ROOT, config.which_embedder, train=False) - print("total eval images: ", len(self.dataset_eval)) - sampler = DistributedSampler(self.dataset_eval,shuffle=False) if config.distributed else None - loader_eval = DataLoader( self.dataset_eval,batch_size=config.batch_size, - num_workers=config.workers, - pin_memory=True, - sampler=sampler, - drop_last=False) # shuffle default is False - self.loader_eval = loader_eval - - - # = = = = = create output folder = = = = = # - folder_name = ckpt_to_folder_name(os.path.basename(config.ckpt)) - self.outdir = os.path.join(config.OUTPUT_ROOT, folder_name) - self.outdir_real = os.path.join(self.outdir,'real') - self.outdir_fake = os.path.join(self.outdir,'fake') - if config.to256: - self.outdir_real256 = os.path.join(self.outdir,'real256') - self.outdir_fake256 = os.path.join(self.outdir,'fake256') - synchronize() # if rank0 is faster, it may mkdir before the other rank call os.listdir() - if get_rank() == 0: - os.makedirs(self.outdir, exist_ok=True) - os.makedirs(self.outdir_real, exist_ok=True) - os.makedirs(self.outdir_fake, exist_ok=True) - if config.to256: - os.makedirs(self.outdir_real256, exist_ok=True) - os.makedirs(self.outdir_fake256, exist_ok=True) - print(self.outdir) # double check - - self.evaluation_finished = False - if os.path.exists( os.path.join(self.outdir,'score.txt') ): - self.evaluation_finished = True - - - def alread_saved_this_batch(self, batch): - existing_real_files = os.listdir( self.outdir_real ) - existing_fake_files = os.listdir( self.outdir_fake ) - status = [] - for image_id in batch["id"]: - img_name = str(int(image_id))+'.png' - status.append(img_name in existing_real_files) - status.append(img_name in existing_fake_files) - return all(status) - - - @torch.no_grad() - def start_evaluating(self): - - iterator = tqdm( self.loader_eval, desc='Evaluating progress') - for batch in iterator: - - #if not self.alread_saved_this_batch(batch): - if True: - - batch_to_device(batch, self.device) - batch_size = batch["image"].shape[0] - samples_real = batch["image"] - - if self.just_save_real: - samples_fake = None - else: - uc = self.text_encoder.encode( batch_size*[""] ) - context = self.text_encoder.encode( batch["caption"] ) - - image_mask = x0 = None - if self.config.inpaint: - image_mask = draw_masks_from_boxes( batch['boxes'], self.model.image_size ).cuda() - x0 = self.autoencoder.encode( batch["image"] ) - - shape = (batch_size, self.model.in_channels, self.model.image_size, self.model.image_size) - if self.config.no_plms: - sampler = DDIMSampler(self.diffusion, self.model) - steps = 250 - else: - sampler = PLMSSampler(self.diffusion, self.model) - steps = 50 - - input = dict( x=None, timesteps=None, context=context, boxes=batch['boxes'], masks=batch['masks'], positive_embeddings=batch["positive_embeddings"] ) - samples_fake = sampler.sample(S=steps, shape=shape, input=input, uc=uc, guidance_scale=self.config.guidance_scale, mask=image_mask, x0=x0) - samples_fake = self.autoencoder.decode(samples_fake) - - - save_images(samples_real, batch['id'], self.outdir_real, to256=False ) - if self.config.to256: - save_images(samples_real, batch['id'], self.outdir_real256, to256=True ) - - if samples_fake is not None: - save_images(samples_fake, batch['id'], self.outdir_fake, to256=False ) - if 
self.config.to256: - save_images(samples_fake, batch['id'], self.outdir_fake256, to256=True ) - - - def fire_fid(self): - paths = [self.outdir_real, self.outdir_fake] - if self.config.to256: - paths = [self.outdir_real256, self.outdir_fake256] - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/gossminn/fillmorle-app/sociolome/lome_wrapper.py b/spaces/gossminn/fillmorle-app/sociolome/lome_wrapper.py deleted file mode 100644 index b06fcfded6c38d49b0cbdb03f4b3cffa8b51b069..0000000000000000000000000000000000000000 --- a/spaces/gossminn/fillmorle-app/sociolome/lome_wrapper.py +++ /dev/null @@ -1,83 +0,0 @@ -from sftp import SpanPredictor -import spacy - -import sys -import dataclasses -from typing import List, Optional, Dict, Any - - -predictor = SpanPredictor.from_path("model.mod.tar.gz") -nlp = spacy.load("xx_sent_ud_sm") - - -@dataclasses.dataclass -class FrameAnnotation: - tokens: List[str] = dataclasses.field(default_factory=list) - pos: List[str] = dataclasses.field(default_factory=list) - - -@dataclasses.dataclass -class MultiLabelAnnotation(FrameAnnotation): - frame_list: List[List[str]] = dataclasses.field(default_factory=list) - lu_list: List[Optional[str]] = dataclasses.field(default_factory=list) - - def to_txt(self): - for i, tok in enumerate(self.tokens): - yield f"{tok} {self.pos[i]} {'|'.join(self.frame_list[i]) or '_'} {self.lu_list[i] or '_'}" - - -# reused from "combine_predictions.py" (cloned/lome/src/spanfinder/sociolome) -def convert_to_seq_labels(sentence: List[str], structures: Dict[int, Dict[str, Any]]) -> List[List[str]]: - labels = [[] for _ in sentence] - - for struct_id, struct in structures.items(): - tgt_span = struct["target"] - frame = struct["frame"] - - for i in range(tgt_span[0], tgt_span[1] + 1): - labels[i].append(f"T:{frame}@{struct_id:02}") - for role in struct["roles"]: - role_span = role["boundary"] - role_label = role["label"] - for i in range(role_span[0], role_span[1] + 1): - prefix = "B" if i == role_span[0] else "I" - labels[i].append(f"{prefix}:{frame}:{role_label}@{struct_id:02}") - return labels - -def make_prediction(sentence, spacy_model, predictor): - spacy_doc = spacy_model(sentence) - tokens = [t.text for t in spacy_doc] - tgt_spans, fr_labels, _ = predictor.force_decode(tokens) - - frame_structures = {} - - for i, (tgt, frm) in enumerate(sorted(zip(tgt_spans, fr_labels), key=lambda t: t[0][0])): - arg_spans, arg_labels, _ = predictor.force_decode(tokens, parent_span=tgt, parent_label=frm) - - frame_structures[i] = { - "target": tgt, - "frame": frm, - "roles": [ - {"boundary": bnd, "label": label} - for bnd, label in zip(arg_spans, arg_labels) - if label != "Target" - ] - } - - return MultiLabelAnnotation( - tokens=tokens, - pos=[t.pos_ for t in spacy_doc], - frame_list=convert_to_seq_labels(tokens, frame_structures), - lu_list=[None for _ in tokens] - ) - - -def analyze(text): - analyses = [] - for sentence in text.split("\n"): - analyses.append(make_prediction(sentence, nlp, predictor)) - - return { - "result": "OK", - "analyses": [dataclasses.asdict(an) for an in analyses] - } diff --git a/spaces/gradio/HuBERT/examples/speech_recognition/new/decoders/decoder.py b/spaces/gradio/HuBERT/examples/speech_recognition/new/decoders/decoder.py deleted file mode 100644 index b5bec8cf707b53104ef7a45993a5db2893d3443b..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_recognition/new/decoders/decoder.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Union - -from fairseq.data.dictionary import Dictionary - -from .decoder_config import DecoderConfig, FlashlightDecoderConfig -from .base_decoder import BaseDecoder - - -def Decoder( - cfg: Union[DecoderConfig, FlashlightDecoderConfig], tgt_dict: Dictionary -) -> BaseDecoder: - - if cfg.type == "viterbi": - from .viterbi_decoder import ViterbiDecoder - - return ViterbiDecoder(tgt_dict) - if cfg.type == "kenlm": - from .flashlight_decoder import KenLMDecoder - - return KenLMDecoder(cfg, tgt_dict) - if cfg.type == "fairseqlm": - from .flashlight_decoder import FairseqLMDecoder - - return FairseqLMDecoder(cfg, tgt_dict) - raise NotImplementedError(f"Invalid decoder name: {cfg.name}") diff --git a/spaces/gradio/HuBERT/fairseq/data/bucket_pad_length_dataset.py b/spaces/gradio/HuBERT/fairseq/data/bucket_pad_length_dataset.py deleted file mode 100644 index 0f9410014845873bb0344fca6478c231c88e9dea..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/bucket_pad_length_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch.nn.functional as F -from fairseq.data import BaseWrapperDataset -from fairseq.data.data_utils import get_buckets, get_bucketed_sizes - - -class BucketPadLengthDataset(BaseWrapperDataset): - """ - Bucket and pad item lengths to the nearest bucket size. This can be used to - reduce the number of unique batch shapes, which is important on TPUs since - each new batch shape requires a recompilation. 
- - Args: - dataset (FairseqDatset): dataset to bucket - sizes (List[int]): all item sizes - num_buckets (int): number of buckets to create - pad_idx (int): padding symbol - left_pad (bool): if True, pad on the left; otherwise right pad - """ - - def __init__( - self, - dataset, - sizes, - num_buckets, - pad_idx, - left_pad, - tensor_key=None, - ): - super().__init__(dataset) - self.pad_idx = pad_idx - self.left_pad = left_pad - - assert num_buckets > 0 - self.buckets = get_buckets(sizes, num_buckets) - self._bucketed_sizes = get_bucketed_sizes(sizes, self.buckets) - self._tensor_key = tensor_key - - def _set_tensor(self, item, val): - if self._tensor_key is None: - return val - item[self._tensor_key] = val - return item - - def _get_tensor(self, item): - if self._tensor_key is None: - return item - return item[self._tensor_key] - - def _pad(self, tensor, bucket_size, dim=-1): - num_pad = bucket_size - tensor.size(dim) - return F.pad( - tensor, - (num_pad if self.left_pad else 0, 0 if self.left_pad else num_pad), - value=self.pad_idx, - ) - - def __getitem__(self, index): - item = self.dataset[index] - bucket_size = self._bucketed_sizes[index] - tensor = self._get_tensor(item) - padded = self._pad(tensor, bucket_size) - return self._set_tensor(item, padded) - - @property - def sizes(self): - return self._bucketed_sizes - - def num_tokens(self, index): - return self._bucketed_sizes[index] - - def size(self, index): - return self._bucketed_sizes[index] diff --git a/spaces/gradio/HuBERT/fairseq/nan_detector.py b/spaces/gradio/HuBERT/fairseq/nan_detector.py deleted file mode 100644 index faa8031d4666c9ba9837919fe1c884dacf47ac3a..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/nan_detector.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging - -import torch - - -logger = logging.getLogger(__name__) - - -class NanDetector: - """ - Detects the first NaN or Inf in forward and/or backward pass and logs, together with the module name - """ - - def __init__(self, model, forward=True, backward=True): - self.bhooks = [] - self.fhooks = [] - self.forward = forward - self.backward = backward - self.named_parameters = list(model.named_parameters()) - self.reset() - - for name, mod in model.named_modules(): - mod.__module_name = name - self.add_hooks(mod) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_traceback): - # Dump out all model gnorms to enable better debugging - norm = {} - gradients = {} - for name, param in self.named_parameters: - if param.grad is not None: - grad_norm = torch.norm(param.grad.data, p=2, dtype=torch.float32) - norm[name] = grad_norm.item() - if torch.isnan(grad_norm).any() or torch.isinf(grad_norm).any(): - gradients[name] = param.grad.data - if len(gradients) > 0: - logger.info("Detected nan/inf grad norm, dumping norms...") - logger.info(f"norms: {norm}") - logger.info(f"gradients: {gradients}") - - self.close() - - def add_hooks(self, module): - if self.forward: - self.fhooks.append(module.register_forward_hook(self.fhook_fn)) - if self.backward: - self.bhooks.append(module.register_backward_hook(self.bhook_fn)) - - def reset(self): - self.has_printed_f = False - self.has_printed_b = False - - def _detect(self, tensor, name, backward): - err = None - if ( - torch.is_floating_point(tensor) - # single value tensors (like the loss) will not provide much info - and tensor.numel() >= 2 - ): - with torch.no_grad(): - if torch.isnan(tensor).any(): - err = "NaN" - elif torch.isinf(tensor).any(): - err = "Inf" - if err is not None: - err = f"{err} detected in output of {name}, shape: {tensor.shape}, {'backward' if backward else 'forward'}" - return err - - def _apply(self, module, inp, x, backward): - if torch.is_tensor(x): - if isinstance(inp, tuple) and len(inp) > 0: - inp = inp[0] - err = self._detect(x, module.__module_name, backward) - if err is not None: - if torch.is_tensor(inp) and not backward: - err += ( - f" input max: {inp.max().item()}, input min: {inp.min().item()}" - ) - - has_printed_attr = "has_printed_b" if backward else "has_printed_f" - logger.warning(err) - setattr(self, has_printed_attr, True) - elif isinstance(x, dict): - for v in x.values(): - self._apply(module, inp, v, backward) - elif isinstance(x, list) or isinstance(x, tuple): - for v in x: - self._apply(module, inp, v, backward) - - def fhook_fn(self, module, inp, output): - if not self.has_printed_f: - self._apply(module, inp, output, backward=False) - - def bhook_fn(self, module, inp, output): - if not self.has_printed_b: - self._apply(module, inp, output, backward=True) - - def close(self): - for hook in self.fhooks + self.bhooks: - hook.remove() diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/scripts/generate_multiscale_DF2K.py b/spaces/guetLzy/Real-ESRGAN-Demo/scripts/generate_multiscale_DF2K.py deleted file mode 100644 index d4f5d8324b1624e4cb6163754703b8dac2d188fd..0000000000000000000000000000000000000000 --- a/spaces/guetLzy/Real-ESRGAN-Demo/scripts/generate_multiscale_DF2K.py +++ /dev/null @@ -1,48 +0,0 @@ -import argparse -import glob -import os -from PIL import Image - - -def main(args): - # For DF2K, we consider the following three scales, - # and the smallest image whose shortest edge is 400 - scale_list = [0.75, 0.5, 1 / 3] - shortest_edge = 400 - - path_list = 
sorted(glob.glob(os.path.join(args.input, '*'))) - for path in path_list: - print(path) - basename = os.path.splitext(os.path.basename(path))[0] - - img = Image.open(path) - width, height = img.size - for idx, scale in enumerate(scale_list): - print(f'\t{scale:.2f}') - rlt = img.resize((int(width * scale), int(height * scale)), resample=Image.LANCZOS) - rlt.save(os.path.join(args.output, f'{basename}T{idx}.png')) - - # save the smallest image which the shortest edge is 400 - if width < height: - ratio = height / width - width = shortest_edge - height = int(width * ratio) - else: - ratio = width / height - height = shortest_edge - width = int(height * ratio) - rlt = img.resize((int(width), int(height)), resample=Image.LANCZOS) - rlt.save(os.path.join(args.output, f'{basename}T{idx+1}.png')) - - -if __name__ == '__main__': - """Generate multi-scale versions for GT images with LANCZOS resampling. - It is now used for DF2K dataset (DIV2K + Flickr 2K) - """ - parser = argparse.ArgumentParser() - parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder') - parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_multiscale', help='Output folder') - args = parser.parse_args() - - os.makedirs(args.output, exist_ok=True) - main(args) diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/samples/tensorflow/earth.py b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/samples/tensorflow/earth.py deleted file mode 100644 index 8ef5870d764d70a291aaea7132a6d96f51c707ea..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/samples/tensorflow/earth.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import numpy as np -import tensorflow as tf -import os -import sys -import pathlib - -import util - -sys.path.insert(0, os.path.join(sys.path[0], '../..')) # for nvdiffrast -import nvdiffrast.tensorflow as dr - -#---------------------------------------------------------------------------- -# Texture learning with/without mipmaps. -#---------------------------------------------------------------------------- - -def fit_earth(max_iter = 20000, - log_interval = 10, - display_interval = None, - display_res = 1024, - enable_mip = True, - res = 512, - ref_res = 4096, - lr_base = 1e-2, - lr_ramp = 0.1, - out_dir = '.', - log_fn = None, - texsave_interval = None, - texsave_fn = None, - imgsave_interval = None, - imgsave_fn = None): - - if out_dir: - os.makedirs(out_dir, exist_ok=True) - - # Mesh and texture adapted from "3D Earth Photorealistic 2K" model at - # https://www.turbosquid.com/3d-models/3d-realistic-earth-photorealistic-2k-1279125 - datadir = f'{pathlib.Path(__file__).absolute().parents[1]}/data' - with np.load(f'{datadir}/earth.npz') as f: - pos_idx, pos, uv_idx, uv, tex = f.values() - tex = tex.astype(np.float32)/255.0 - max_mip_level = 9 # Texture is a 4x3 atlas of 512x512 maps. - print("Mesh has %d triangles and %d vertices." % (pos_idx.shape[0], pos.shape[0])) - - # Transformation matrix input to TF graph. - mtx_in = tf.placeholder(tf.float32, [4, 4]) - - # Learned texture. 
- tex_var = tf.get_variable('tex', initializer=tf.constant_initializer(0.2), shape=tex.shape) - - # Setup TF graph for reference rendering in high resolution. - pos_clip = tf.matmul(pos, mtx_in, transpose_b=True)[tf.newaxis, ...] - rast_out, rast_out_db = dr.rasterize(pos_clip, pos_idx, [ref_res, ref_res]) - texc, texd = dr.interpolate(uv[tf.newaxis, ...], rast_out, uv_idx, rast_db=rast_out_db, diff_attrs='all') - color = dr.texture(tex[np.newaxis], texc, texd, filter_mode='linear-mipmap-linear', max_mip_level=max_mip_level) - color = color * tf.clip_by_value(rast_out[..., -1:], 0, 1) # Mask out background. - - # Reduce the reference to correct size. - while color.shape[1] > res: - color = util.bilinear_downsample(color) - - # TF Graph for rendered candidate. - if enable_mip: - # With mipmaps. - rast_out_opt, rast_out_db_opt = dr.rasterize(pos_clip, pos_idx, [res, res]) - texc_opt, texd_opt = dr.interpolate(uv[tf.newaxis, ...], rast_out_opt, uv_idx, rast_db=rast_out_db_opt, diff_attrs='all') - color_opt = dr.texture(tex_var[np.newaxis], texc_opt, texd_opt, filter_mode='linear-mipmap-linear', max_mip_level=max_mip_level) - else: - # No mipmaps: no image-space derivatives anywhere. - rast_out_opt, _ = dr.rasterize(pos_clip, pos_idx, [res, res], output_db=False) - texc_opt, _ = dr.interpolate(uv[tf.newaxis, ...], rast_out_opt, uv_idx) - color_opt = dr.texture(tex_var[np.newaxis], texc_opt, filter_mode='linear') - color_opt = color_opt * tf.clip_by_value(rast_out_opt[..., -1:], 0, 1) # Mask out background. - - # Measure only relevant portions of texture when calculating texture PSNR. - loss = tf.reduce_mean((color - color_opt)**2) - texmask = np.zeros_like(tex) - tr = tex.shape[1]//4 - texmask[tr+13:2*tr-13, 25:-25, :] += 1.0 - texmask[25:-25, tr+13:2*tr-13, :] += 1.0 - texloss = (tf.reduce_sum(texmask * (tex - tex_var)**2)/np.sum(texmask))**0.5 # RMSE within masked area. - - # Training driven by image-space loss. - lr_in = tf.placeholder(tf.float32, []) - train_op = tf.train.AdamOptimizer(lr_in, 0.9, 0.99).minimize(loss, var_list=[tex_var]) - - # Open log file. - log_file = open(out_dir + '/' + log_fn, 'wt') if log_fn else None - - # Render. - ang = 0.0 - util.init_uninitialized_vars() - texloss_avg = [] - for it in range(max_iter + 1): - lr = lr_base * lr_ramp**(float(it)/float(max_iter)) - - # Random rotation/translation matrix for optimization. - r_rot = util.random_rotation_translation(0.25) - - # Smooth rotation for display. - ang = ang + 0.01 - a_rot = np.matmul(util.rotate_x(-0.4), util.rotate_y(ang)) - dist = np.random.uniform(0.0, 48.5) - - # Modelview and modelview + projection matrices. - proj = util.projection(x=0.4, n=1.0, f=200.0) - r_mv = np.matmul(util.translate(0, 0, -1.5 - dist), r_rot) - r_mvp = np.matmul(proj, r_mv).astype(np.float32) - a_mv = np.matmul(util.translate(0, 0, -3.5), a_rot) - a_mvp = np.matmul(proj, a_mv).astype(np.float32) - - # Run training and measure texture-space RMSE loss. - texloss_val, _ = util.run([texloss, train_op], {mtx_in: r_mvp, lr_in: lr}) - texloss_avg.append(texloss_val) - - # Print/save log. - if log_interval and (it % log_interval == 0): - texloss_val, texloss_avg = np.mean(np.asarray(texloss_avg)), [] - psnr = -10.0 * np.log10(texloss_val**2) # PSNR based on average RMSE. - s = "iter=%d,loss=%f,psnr=%f" % (it, texloss_val, psnr) - print(s) - if log_file: - log_file.write(s + '\n') - - # Show/save result images/textures. 
- display_image = display_interval and (it % display_interval) == 0 - save_image = imgsave_interval and (it % imgsave_interval) == 0 - save_texture = texsave_interval and (it % texsave_interval) == 0 - - if display_image or save_image: - result_image = util.run(color_opt, {mtx_in: a_mvp})[0] - if display_image: - util.display_image(result_image, size=display_res, title='%d / %d' % (it, max_iter)) - if save_image: - util.save_image(out_dir + '/' + (imgsave_fn % it), result_image) - if save_texture: - util.save_image(out_dir + '/' + (texsave_fn % it), util.run(tex_var)[::-1]) - - # Done. - if log_file: - log_file.close() - -#---------------------------------------------------------------------------- -# Main function. -#---------------------------------------------------------------------------- - -def main(): - display_interval = 0 - enable_mip = None - - def usage(): - print("Usage: python earth.py [-v] [-mip|-nomip]") - exit() - - for a in sys.argv[1:]: - if a == '-v': display_interval = 10 - elif a == '-mip': enable_mip = True - elif a == '-nomip': enable_mip = False - else: usage() - - if enable_mip is None: - usage() - - # Initialize TensorFlow. - util.init_tf() - - # Run. - out_dir = 'out/earth_mip' if enable_mip else 'out/earth_nomip' - fit_earth(max_iter=20000, log_interval=10, display_interval=display_interval, enable_mip=enable_mip, out_dir=out_dir, log_fn='log.txt', texsave_interval=1000, texsave_fn='tex_%06d.png', imgsave_interval=1000, imgsave_fn='img_%06d.png') - - # Done. - print("Done.") - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - main() - -#---------------------------------------------------------------------------- diff --git a/spaces/gylleus/icongen/torch_utils/__init__.py b/spaces/gylleus/icongen/torch_utils/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/gylleus/icongen/torch_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/utils/util.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/utils/util.py deleted file mode 100644 index 544c94895dfc0bfcd1285fde7cd2c102b71113ed..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/utils/util.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- -import torch -import cv2 -from torchvision import transforms -import numpy as np -import math - - -def visual(output, out_path): - output = (output + 1)/2 - output = torch.clamp(output, 0, 1) - if output.shape[1] == 1: - output = torch.cat([output, output, output], 1) - output = output[0].detach().cpu().permute(1, 2, 0).numpy() - output = (output*255).astype(np.uint8) - output = output[:, :, ::-1] - cv2.imwrite(out_path, output) - - -def get_lr(t, initial_lr, rampdown=0.25, rampup=0.05): - - lr_ramp = min(1, (1 - t) / rampdown) - lr_ramp = 0.5 - 0.5 * math.cos(lr_ramp * math.pi) - lr_ramp = lr_ramp * min(1, t / rampup) - return initial_lr * lr_ramp - - -def latent_noise(latent, strength): - noise = torch.randn_like(latent) * strength - - return latent + noise - - -def noise_regularize_(noises): - loss = 0 - - for noise in noises: - size = noise.shape[2] - - while True: - loss = ( - loss - + (noise * torch.roll(noise, shifts=1, dims=3)).mean().pow(2) - + (noise * torch.roll(noise, shifts=1, dims=2)).mean().pow(2) - ) - - if size <= 8: - break - - noise = noise.reshape([-1, 1, size // 2, 2, size // 2, 2]) - noise = noise.mean([3, 5]) - size //= 2 - - return loss - - -def noise_normalize_(noises): - for noise in noises: - mean = noise.mean() - std = noise.std() - - noise.data.add_(-mean).div_(std) - - -def tensor_to_numpy(x): - x = x[0].permute(1, 2, 0) - x = torch.clamp(x, -1, 1) - x = (x+1) * 127.5 - x = x.cpu().detach().numpy().astype(np.uint8) - return x - - -def numpy_to_tensor(x): - x = (x / 255 - 0.5) * 2 - x = torch.from_numpy(x).unsqueeze(0).permute(0, 3, 1, 2) - x = x.cuda().float() - return x - - -def tensor_to_pil(x): - x = torch.clamp(x, -1, 1) - x = (x+1) * 127.5 - return transforms.ToPILImage()(x.squeeze_(0)) diff --git a/spaces/h2oai/wave-tour/examples/progress.py b/spaces/h2oai/wave-tour/examples/progress.py deleted file mode 100644 index 11f04f77b36e146e6fb88df453097d4d16d12039..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/progress.py +++ /dev/null @@ -1,18 +0,0 @@ -# Form / Progress -# Use a #progress bar to indicate completion status of an operation. 
-# #form -# --- -from h2o_wave import site, ui - -page = site['/demo'] - -page['example'] = ui.form_card( - box='1 1 4 7', - items=[ - ui.progress(label='Indeterminate Progress', caption='Goes on forever'), - ui.progress(label='Standard Progress', caption='Downloading the interwebs...', value=0.25), - ui.progress(label='Spinner Progress', type='spinner'), - ui.progress(label='', caption='Spinner Progress with text at the bottom', type='spinner'), - ] -) -page.save() diff --git a/spaces/haakohu/deep_privacy2/dp2/data/__init__.py b/spaces/haakohu/deep_privacy2/dp2/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/haakohu/deep_privacy2/dp2/loss/sg2_loss.py b/spaces/haakohu/deep_privacy2/dp2/loss/sg2_loss.py deleted file mode 100644 index 763263e2e7cb9330f24265ba8008e152fa4110f0..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/loss/sg2_loss.py +++ /dev/null @@ -1,96 +0,0 @@ -import functools -import torch -import tops -from tops import logger -from dp2.utils import forward_D_fake -from .utils import nsgan_d_loss, nsgan_g_loss -from .r1_regularization import r1_regularization -from .pl_regularization import PLRegularization - - -class StyleGAN2Loss: - - def __init__( - self, - D, - G, - r1_opts: dict, - EP_lambd: float, - lazy_reg_interval: int, - lazy_regularization: bool, - pl_reg_opts: dict, - ) -> None: - self.gradient_step_D = 0 - self._lazy_reg_interval = lazy_reg_interval - self.D = D - self.G = G - self.EP_lambd = EP_lambd - self.lazy_regularization = lazy_regularization - self.r1_reg = functools.partial( - r1_regularization, **r1_opts, lazy_reg_interval=lazy_reg_interval, - lazy_regularization=lazy_regularization) - self.do_PL_Reg = False - if pl_reg_opts.weight > 0: - self.pl_reg = PLRegularization(**pl_reg_opts) - self.do_PL_Reg = True - self.pl_start_nimg = pl_reg_opts.start_nimg - - def D_loss(self, batch: dict, grad_scaler): - to_log = {} - # Forward through G and D - do_GP = self.lazy_regularization and self.gradient_step_D % self._lazy_reg_interval == 0 - if do_GP: - batch["img"] = batch["img"].detach().requires_grad_(True) - with torch.cuda.amp.autocast(enabled=tops.AMP()): - with torch.no_grad(): - G_fake = self.G(**batch, update_emas=True) - D_out_real = self.D(**batch) - - D_out_fake = forward_D_fake(batch, G_fake["img"], self.D) - - # Non saturating loss - nsgan_loss = nsgan_d_loss(D_out_real["score"], D_out_fake["score"]) - tops.assert_shape(nsgan_loss, (batch["img"].shape[0], )) - to_log["d_loss"] = nsgan_loss.mean() - total_loss = nsgan_loss - epsilon_penalty = D_out_real["score"].pow(2).view(-1) - to_log["epsilon_penalty"] = epsilon_penalty.mean() - tops.assert_shape(epsilon_penalty, total_loss.shape) - total_loss = total_loss + epsilon_penalty * self.EP_lambd - - # Improved gradient penalty with lazy regularization - # Gradient penalty applies specialized autocast. 
- if do_GP: - gradient_pen, grad_unscaled = self.r1_reg( - batch["img"], D_out_real["score"], batch["mask"], scaler=grad_scaler) - to_log["r1_gradient_penalty"] = grad_unscaled.mean() - tops.assert_shape(gradient_pen, total_loss.shape) - total_loss = total_loss + gradient_pen - - batch["img"] = batch["img"].detach().requires_grad_(False) - if "score" in D_out_real: - to_log["real_scores"] = D_out_real["score"] - to_log["real_logits_sign"] = D_out_real["score"].sign() - to_log["fake_logits_sign"] = D_out_fake["score"].sign() - to_log["fake_scores"] = D_out_fake["score"] - to_log = {key: item.mean().detach() for key, item in to_log.items()} - self.gradient_step_D += 1 - return total_loss.mean(), to_log - - def G_loss(self, batch: dict, grad_scaler): - with torch.cuda.amp.autocast(enabled=tops.AMP()): - to_log = {} - # Forward through G and D - G_fake = self.G(**batch) - D_out_fake = forward_D_fake(batch, G_fake["img"], self.D) - # Adversarial Loss - total_loss = nsgan_g_loss(D_out_fake["score"]).view(-1) - to_log["g_loss"] = total_loss.mean() - tops.assert_shape(total_loss, (batch["img"].shape[0], )) - - if self.do_PL_Reg and logger.global_step() >= self.pl_start_nimg: - pl_reg, to_log_ = self.pl_reg(self.G, batch, grad_scaler=grad_scaler) - total_loss = total_loss + pl_reg.mean() - to_log.update(to_log_) - to_log = {key: item.mean().detach() for key, item in to_log.items()} - return total_loss.mean(), to_log diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/third_party/model_vpt.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/third_party/model_vpt.py deleted file mode 100644 index 9e958112828e81b788418ab573ea1962684667b7..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/third_party/model_vpt.py +++ /dev/null @@ -1,477 +0,0 @@ -from collections import OrderedDict -from typing import Tuple, Union - -import torch -import torch.nn.functional as F -from torch import nn - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. 
an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential(OrderedDict([ - ("-1", nn.AvgPool2d(stride)), - ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)), - ("1", nn.BatchNorm2d(planes * self.expansion)) - ])) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu(self.bn1(self.conv1(x))) - out = self.relu(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None): - super().__init__() - self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.flatten(start_dim=2).permute(2, 0, 1) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x[:1], key=x, value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False - ) - return x.squeeze(0) - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. 
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, input_resolution=224, width=64): - super().__init__() - self.output_dim = output_dim - self.input_resolution = input_resolution - - # the 3-layer stem - self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(width // 2) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(width // 2) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.relu3 = nn.ReLU(inplace=True) - self.avgpool = nn.AvgPool2d(2) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim) - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - def stem(x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.avgpool(x) - return x - - x = x.type(self.conv1.weight.dtype) - x = stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - self.mask_pre_mlp = True - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - def forward_dense(self, x: torch.Tensor): - y = self.ln_1(x) - y = F.linear(y, self.attn.in_proj_weight, self.attn.in_proj_bias) - L, N, D = y.shape # L N 3D - - y = y.reshape(L, N, 3, D // 3).permute(2, 1, 0, 3).reshape(3 * N, L, D // 3) - y = F.linear(y, self.attn.out_proj.weight, self.attn.out_proj.bias) - - q, k, v = 
y.tensor_split(3, dim=0) - #v = v.transpose(1, 0) + x # L N D - v = v.transpose(1, 0) + x[:1] # L N D - - v = v + self.mlp(self.ln_2(v)) - return v - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None, prompt_length=0, prompt_depth=0): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]) - - self.prompt_length = prompt_length - self.prompt_depth = prompt_depth - self.prompt_tokens = nn.Parameter(torch.zeros(prompt_depth, prompt_length, width)) if prompt_length > 0 else None - if self.prompt_tokens is not None: - nn.init.xavier_uniform_(self.prompt_tokens) - - def forward(self, x: torch.Tensor, dense=False): - for i, resblock in enumerate(self.resblocks): - if self.prompt_length > 0 and i < self.prompt_depth: - l = self.prompt_length + 1 if i > 0 else 1 - x = torch.cat((x[0:1, :, :], self.prompt_tokens[i].repeat(x.shape[1], 1, 1).permute(1, 0, 2) ,x[l:, :, :])) - - if i == self.layers - 1 and dense: - x = resblock.forward_dense(x) - x = torch.cat((x[0:1, :, :], x[self.prompt_length + 1: :, :]), dim=0) - else: - x = resblock(x) - - return x - - -class VisualTransformer(nn.Module): - def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int, prompt_depth: int, prompt_length: int): - super().__init__() - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads, prompt_depth=prompt_depth, prompt_length=prompt_length) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - self.patch_size = patch_size - self.input_resolution = input_resolution - - def forward(self, x: torch.Tensor, dense=False): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - - if dense and (x.shape[1] != self.positional_embedding.shape[0]): - x = x + self.resized_pos_embed(self.input_resolution, x.shape[1]).to(x.dtype) - else: - x = x + self.positional_embedding.to(x.dtype) - - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, dense) - x = x.permute(1, 0, 2) # LND -> NLD - - if dense: - x = self.ln_post(x[:, :, :]) - else: - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - def resized_pos_embed(self, in_res, tgt_res, mode="bicubic"): - #assert L == (input_resolution // self.patch_size) ** 2 + 1 - L, D = self.positional_embedding.shape - - in_side = in_res // self.patch_size - #tgt_side = tgt_res // self.patch_size - tgt_side = int((tgt_res - 1) ** 0.5) - - cls_pos = self.positional_embedding[0].unsqueeze(0) # 1 D - pos_embed = self.positional_embedding[1:].reshape(1, in_side, in_side, D).permute(0, 3, 1, 2) # L-1 D -> 1 D S S - resized_pos_embed = F.interpolate(pos_embed, 
size=(tgt_side, tgt_side), mode=mode, align_corners=False,) # 1 D S S -> 1 D S' S' - resized_pos_embed = resized_pos_embed.squeeze(0).reshape(D, -1).T # L'-1 D - - return torch.cat((cls_pos, resized_pos_embed), dim=0) - - -class CLIP(nn.Module): - def __init__(self, - embed_dim: int, - # vision - image_resolution: int, - vision_layers: Union[Tuple[int, int, int, int], int], - vision_width: int, - vision_patch_size: int, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int, - # prompt - prompt_depth: int=0, - prompt_length: int=0, - ): - super().__init__() - - self.context_length = context_length - - self.image_resolution = image_resolution - - - if isinstance(vision_layers, (tuple, list)): - assert prompt_length == 0 and prompt_depth==0 - vision_heads = vision_width * 32 // 64 - self.visual = ModifiedResNet( - layers=vision_layers, - output_dim=embed_dim, - heads=vision_heads, - input_resolution=image_resolution, - width=vision_width - ) - else: - vision_heads = vision_width // 64 - self.visual = VisualTransformer( - input_resolution=image_resolution, - patch_size=vision_patch_size, - width=vision_width, - layers=vision_layers, - heads=vision_heads, - output_dim=embed_dim, - prompt_depth=prompt_depth, - prompt_length=prompt_length, - ) - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask() - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([])) - - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def dtype(self): - return self.visual.conv1.weight.dtype - - - def encode_image(self, image, masks=None, pool_mask=None, dense=False): - if pool_mask is not None: - return self.visual(image.type(self.dtype), mask=pool_mask, dense=dense) - if masks == None: - return self.visual(image.type(self.dtype), dense=dense) - else: - return self.visual(image.type(self.dtype), masks.type(self.dtype)) - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_features = self.encode_image(image) - text_features = self.encode_text(text) - # import pdb; pdb.set_trace() - # normalized features - # image_features shape: [1, 1024] - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - text_features = text_features / text_features.norm(dim=-1, keepdim=True) - - # cosine similarity 
as logits - logit_scale = self.logit_scale.exp() - logits_per_iamge = logit_scale * image_features @ text_features.t() - logits_per_text = logit_scale * text_features @ image_features.t() - - # shape = [global_batch_size, global_batch_size] - return logits_per_iamge, logits_per_text - - -def convert_weights(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model(state_dict: dict, prompt_depth=0, prompt_length=0): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_resolution = vision_patch_size * grid_size - else: - counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_resolution = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - model = CLIP( - embed_dim, - image_resolution, vision_layers, vision_width, vision_patch_size, - context_length, vocab_size, transformer_width, transformer_heads, transformer_layers, - prompt_depth=prompt_depth, prompt_length=prompt_length, - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - del state_dict[key] - - convert_weights(model) - model.load_state_dict(state_dict, strict=False) - return model.eval() diff --git a/spaces/hamzapehlivan/StyleRes/datasets/__init__.py b/spaces/hamzapehlivan/StyleRes/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hands012/gpt-academic/docs/README_JP.md b/spaces/hands012/gpt-academic/docs/README_JP.md deleted file mode 100644 index 1df2b0a9cf200ca5be348e9178dcf478558c7d0f..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/docs/README_JP.md +++ /dev/null @@ -1,329 +0,0 @@ -> **Note** -> -> このReadmeファイルは、このプロジェクトのmarkdown翻訳プラグインによって自動的に生成されたもので、100%正確ではない可能性があります。 -> -> When installing dependencies, please strictly choose the versions specified in 
`requirements.txt`. -> -> `pip install -r requirements.txt` -> - -# GPT 学术优化 (GPT Academic) - -**もしこのプロジェクトが好きなら、星をつけてください。もしあなたがより良いアカデミックショートカットまたは機能プラグインを思いついた場合、Issueをオープンするか pull request を送信してください。私たちはこのプロジェクト自体によって翻訳された[英語 |](README_EN.md)[日本語 |](README_JP.md)[한국어 |](https://github.com/mldljyh/ko_gpt_academic)[Русский |](README_RS.md)[Français](README_FR.md)のREADMEも用意しています。 -GPTを使った任意の言語にこのプロジェクトを翻訳するには、[`multi_language.py`](multi_language.py)を読んで実行してください。 (experimental)。 - -> **注意** -> -> 1. **赤色**で表示された関数プラグイン(ボタン)のみ、ファイルの読み取りをサポートしています。一部のプラグインは、プラグインエリアの**ドロップダウンメニュー**内にあります。また、私たちはどんな新しいプラグインのPRでも、**最優先**で歓迎し、処理します! -> -> 2. このプロジェクトの各ファイルの機能は、自己解析の詳細説明書である[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)で説明されています。バージョンが進化するにつれて、関連する関数プラグインをいつでもクリックし、GPTを呼び出してプロジェクトの自己解析レポートを再生成することができます。よくある問題は[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)にまとめられています。[インストール方法](#installation)。 - -> 3. このプロジェクトは、chatglmやRWKV、パンクなど、国内の大規模自然言語モデルを利用することをサポートし、試みることを奨励します。複数のAPIキーを共存することができ、設定ファイルに`API_KEY="openai-key1,openai-key2,api2d-key3"`のように記入することができます。`API_KEY`を一時的に変更する場合は、入力エリアに一時的な`API_KEY`を入力してEnterキーを押せば、それが有効になります。 - - -
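The multi-key setup described in note 3 above maps directly onto a single option in `config.py`. The fragment below is only a sketch: the comma-separated `API_KEY` string is quoted from the note itself, while everything else is illustrative rather than copied from the project's configuration file.

```python
# config.py -- sketch of the multi-key setup from note 3 above (format quoted from the note)
API_KEY = "openai-key1,openai-key2,api2d-key3"   # several providers' keys may coexist

# Per the same note, typing a temporary API_KEY into the input area and pressing Enter
# overrides this value for the current session only.
```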
        - -機能 | 説明 ---- | --- -一键校正 | 一键で校正可能、論文の文法エラーを検索することができる -一键中英翻訳 | 一键で中英翻訳可能 -一键コード解説 | コードを表示し、解説し、生成し、コードに注釈をつけることができる -[自分でカスタマイズ可能なショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | 自分でカスタマイズ可能なショートカットキーをサポートする -モジュール化された設計 | カスタマイズ可能な[強力な関数プラグイン](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions)をサポートし、プラグインは[ホットアップデート](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)に対応している -[自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] [一键読解](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)このプロジェクトのソースコード -プログラム解析 | [関数プラグイン] 一鍵で他のPython/C/C++/Java/Lua/...プロジェクトを分析できる -論文の読み、[翻訳](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] LaTex/ PDF論文の全文を一鍵で読み解き、要約を生成することができる -LaTex全文[翻訳](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[校正](https://www.bilibili.com/video/BV1FT411H7c5/) | [関数プラグイン] LaTex論文の翻訳または校正を一鍵で行うことができる -一括で注釈を生成 | [関数プラグイン] 一鍵で関数に注釈をつけることができる -Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数プラグイン] 上記の5種類の言語の[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)を見たことがありますか? -チャット分析レポート生成 | [関数プラグイン] 実行後、自動的に概要報告書を生成する -[PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文からタイトルと要約を抽出し、全文を翻訳する(マルチスレッド) -[Arxivアシスタント](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] arxiv記事のURLを入力するだけで、要約を一鍵翻訳し、PDFをダウンロードできる -[Google Scholar 総合アシスタント](https://www.bilibili.com/video/BV19L411U7ia) | [関数プラグイン] 任意のGoogle Scholar検索ページURLを指定すると、gptが[related works](https://www.bilibili.com/video/BV1GP411U7Az/)を作成する -インターネット情報収集+GPT | [関数プラグイン] まずGPTに[インターネットから情報を収集](https://www.bilibili.com/video/BV1om4y127ck)してから質問に回答させ、情報が常に最新であるようにする -数式/画像/表表示 | 数式の[tex形式とレンダリング形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)を同時に表示し、数式、コードハイライトをサポートしている -マルチスレッド関数プラグインがサポートされている | chatgptをマルチスレッドで呼び出し、[大量のテキスト](https://www.bilibili.com/video/BV1FT411H7c5/)またはプログラムを一鍵で処理できる -ダークグラジオ[テーマの起動](https://github.com/binary-husky/chatgpt_academic/issues/173) | ブラウザのURLの後ろに```/?__theme=dark```を追加すると、ダークテーマを切り替えることができます。 -[多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)がサポートされ、[API2D](https://api2d.com/)がサポートされている | 同時にGPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)、[復旦MOSS](https://github.com/OpenLMLab/MOSS)に対応 -より多くのLLMモデルが接続され、[huggingfaceデプロイ](https://huggingface.co/spaces/qingxu98/gpt-academic)がサポートされている | Newbingインターフェイス(Newbing)、清華大学の[Jittorllm](https://github.com/Jittor/JittorLLMs)のサポート[LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV)と[盘古α](https://openi.org.cn/pangu/) -さらに多くの新機能(画像生成など)を紹介する... | この文書の最後に示す... -
        - -- 新しいインターフェース(`config.py`のLAYOUTオプションを変更することで、「左右配置」と「上下配置」を切り替えることができます) -
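
The bullet above refers to the `LAYOUT` option in `config.py` (shown in the screenshot below). A minimal sketch of what that setting could look like; the exact option strings are an assumption, so verify them against the shipped `config.py`:

```python
# config.py (sketch) — assumed option values, check the real file
LAYOUT = "LEFT-RIGHT"   # 左右配置: chat and input panels side by side
# LAYOUT = "TOP-DOWN"   # 上下配置: panels stacked vertically
```
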
        - -
        - All buttons are dynamically generated by reading functional.py, and custom functions can be freely added to free the clipboard. - -
        - -
        - -- Polishing/Correction - -
        - -
        - -- If the output contains formulas, they are displayed in both TeX and rendering forms, making it easy to copy and read. - -
        - -
        - -- Don't feel like looking at the project code? Just ask chatgpt directly. - -
        - -
        - - -- Mixed calls of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) - -
        - -
        - ---- - -# Installation - -## Installation-Method 1: Directly run (Windows, Linux or MacOS) - -1. Download the project. - -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure the API_KEY. - -Configure the API KEY and other settings in `config.py` and [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py`, and use the configuration in it to override the same name configuration in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Reading priority: `environment variables` > `config_private.py` > `config.py`) - -3. Install dependencies. - -```sh -# (Choose I: If familiar with Python)(Python version 3.9 or above, the newer the better) Note: Use the official pip source or Ali pip source. Temporary switching source method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Choose II: If not familiar with Python) Use anaconda, the steps are the same (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # Create anaconda environment. -conda activate gptac_venv # Activate the anaconda environment. -python -m pip install -r requirements.txt # This step is the same as the pip installation step. -``` - -
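
Step 2 above states the configuration reading priority `environment variables` > `config_private.py` > `config.py`. A minimal sketch of that override order, for illustration only (the function name and structure here are assumptions, not the project's actual config loader):

```python
import os

def read_single_conf(name: str, default=None):
    """Illustrative lookup honouring the priority described above:
    environment variables > config_private.py > config.py."""
    if name in os.environ:                      # 1) environment variable wins
        return os.environ[name]
    try:
        import config_private                   # 2) private, git-ignored overrides
        if hasattr(config_private, name):
            return getattr(config_private, name)
    except ImportError:
        pass
    try:
        import config                           # 3) fall back to the tracked config.py
        return getattr(config, name, default)
    except ImportError:
        return default

# Example: an API_KEY exported in the shell overrides both config files.
# os.environ["API_KEY"] = "sk-temporary"
# print(read_single_conf("API_KEY", ""))
```
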
        If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand. -

        - -[Optional Steps] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (precondition: familiar with Python + used Pytorch + computer configuration). Strong enough): - -```sh -# Optional step I: support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: If you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters normally", refer to the following: 1: The version installed above is torch+cpu version, using cuda requires uninstalling torch and reinstalling torch+cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True). -python -m pip install -r request_llm/requirements_chatglm.txt - -# Optional Step II: Support Fudan MOSS. -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note that when executing this line of code, it must be in the project root. - -# 【Optional Step III】Ensure that the AVAIL_LLM_MODELS in the config.py configuration file contains the expected model. Currently, all supported models are as follows (jittorllms series currently only supports the docker solution): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

        -
        - - - -4. Run. - -```sh -python main.py -```5. Testing Function Plugin -``` -- Test function plugin template function (requires gpt to answer what happened today in history), you can use this function as a template to implement more complex functions - Click "[Function Plugin Template Demo] Today in History" -``` - -## Installation-Methods 2: Using Docker - -1. Only ChatGPT (recommended for most people) - - ``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # Download project -cd chatgpt_academic # Enter path -nano config.py # Edit config.py with any text editor ‑ configure "Proxy," "API_KEY," "WEB_PORT" (e.g., 50923) and more -docker build -t gpt-academic . # installation - -#(Last step-Option 1) In a Linux environment, `--net=host` is more convenient and quick -docker run --rm -it --net=host gpt-academic -#(Last step-Option 2) In a macOS/windows environment, the -p option must be used to expose the container port (e.g., 50923) to the port on the host. -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker) - -``` sh -# Modify docker-compose.yml, delete plans 1 and 3, and retain plan 2. Modify the configuration of plan 2 in docker-compose.yml, and reference the comments for instructions. -docker-compose up -``` - -3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker) -``` sh -# Modify docker-compose.yml, delete plans 1 and 2, and retain plan 3. Modify the configuration of plan 3 in docker-compose.yml, and reference the comments for instructions. -docker-compose up -``` - - -## Installation-Method 3: Other Deployment Methods - -1. How to use proxy URL/Microsoft Azure API -Configure API_URL_REDIRECT according to the instructions in `config.py`. - -2. Remote Cloud Server Deployment (requires cloud server knowledge and experience) -Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL2 (Windows Subsystem for Linux Subsystem) -Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to run on a secondary URL (such as `http://localhost/subpath`) -Please visit [FastAPI Running Instructions](docs/WithFastapi.md) - -5. Run with docker-compose -Please read docker-compose.yml and follow the instructions provided therein. ---- -# Advanced Usage -## Customize new convenience buttons/custom function plugins - -1. Custom new convenience buttons (academic shortcut keys) -Open `core_functional.py` with any text editor, add the item as follows, and restart the program. (If the button has been added successfully and is visible, the prefix and suffix support hot modification without restarting the program.) -example: -``` -"Super English to Chinese Translation": { - # Prefix, which will be added before your input. For example, used to describe your request, such as translation, code interpretation, polish, etc. - "Prefix": "Please translate the following content into Chinese, and explain the proper nouns in the text in a markdown table one by one:\n\n", - - # Suffix, which will be added after your input. For example, in combination with the prefix, you can surround your input content with quotation marks. - "Suffix": "", -}, -``` -
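
To make the Prefix/Suffix mechanics above concrete, here is a minimal sketch of how such an entry could be applied to whatever is typed in the input area. This is an illustration only; the real project assembles the prompt inside its own request pipeline:

```python
# Hypothetical illustration of a core_functional.py-style entry being applied.
button = {
    "Prefix": ("Please translate the following content into Chinese, and explain "
               "the proper nouns in the text in a markdown table one by one:\n\n"),
    "Suffix": "",
}

def build_prompt(user_input: str, entry: dict) -> str:
    # The prefix is prepended and the suffix appended to the user's input.
    return entry["Prefix"] + user_input + entry["Suffix"]

print(build_prompt("Large language models are few-shot learners.", button))
```
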
        - -
        - -2. Custom function plugins - -Write powerful function plugins to perform any task you can and cannot think of. -The difficulty of writing and debugging plugins in this project is low, and as long as you have a certain amount of python basic knowledge, you can follow the template provided by us to achieve your own plugin functions. -For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). - ---- -# Latest Update -## New feature dynamics. -1. ダイアログの保存機能。関数プラグインエリアで '現在の会話を保存' を呼び出すと、現在のダイアログを読み取り可能で復元可能なHTMLファイルとして保存できます。さらに、関数プラグインエリア(ドロップダウンメニュー)で 'ダイアログの履歴保存ファイルを読み込む' を呼び出すことで、以前の会話を復元することができます。Tips:ファイルを指定せずに 'ダイアログの履歴保存ファイルを読み込む' をクリックすることで、過去のHTML保存ファイルのキャッシュを表示することができます。'すべてのローカルダイアログの履歴を削除' をクリックすることで、すべてのHTML保存ファイルのキャッシュを削除できます。 -
        - -
        - - -2. 報告書を生成します。ほとんどのプラグインは、実行が終了した後に作業報告書を生成します。 -
        - - - -
        - -3. モジュール化された機能設計、簡単なインターフェースで強力な機能をサポートする。 -
        - - -
        - -4. 自己解決可能なオープンソースプロジェクトです。 -
        - -
        - - -5. 他のオープンソースプロジェクトの解読、容易である。 -
        - -
        - -
        - -
        - -6. [Live2D](https://github.com/fghrsh/live2d_demo)のデコレート小機能です。(デフォルトでは閉じてますが、 `config.py`を変更する必要があります。) -
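
The Live2D decoration above is disabled by default and is switched on by editing `config.py`. A minimal sketch of the toggle; the option name is an assumption, so check the shipped `config.py` for the exact flag:

```python
# config.py (sketch) — assumed flag name for the Live2D decoration
ADD_WAIFU = True   # set to False (the default) to hide the Live2D widget
```
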
        - -
        - -7. 新たにMOSS大言語モデルのサポートを追加しました。 -
        - -
        - -8. OpenAI画像生成 -
        - -
        - -9. OpenAIオーディオの解析とサマリー -
        - -
        - -10. 全文校正されたLaTeX -
        - -
        - - -## バージョン: -- version 3.5(作業中):すべての関数プラグインを自然言語で呼び出すことができるようにする(高い優先度)。 -- version 3.4(作業中):chatglmのローカルモデルのマルチスレッドをサポートすることで、機能を改善する。 -- version 3.3:+Web情報の総合機能 -- version 3.2:関数プラグインでさらに多くのパラメータインターフェイスをサポートする(ダイアログの保存機能、任意の言語コードの解読+同時に任意のLLM組み合わせに関する問い合わせ) -- version 3.1:複数のGPTモデルを同時に質問できるようになりました! api2dをサポートし、複数のAPIキーを均等に負荷分散することができます。 -- version 3.0:chatglmとその他の小型LLMのサポート。 -- version 2.6:プラグイン構造を再構築し、対話内容を高め、より多くのプラグインを追加しました。 -- version 2.5:自己アップデートし、長文書やトークンのオーバーフローの問題を解決しました。 -- version 2.4:(1)全文翻訳のPDF機能を追加しました。(2)入力エリアの位置切り替え機能を追加しました。(3)垂直レイアウトオプションを追加しました。(4)マルチスレッド関数プラグインを最適化しました。 -- version 2.3:マルチスレッド性能の向上。 -- version 2.2:関数プラグインのホットリロードをサポートする。 -- version 2.1:折りたたみ式レイアウト。 -- version 2.0:モジュール化された関数プラグインを導入。 -- version 1.0:基本機能 - -gpt_academic開発者QQグループ-2:610599535 - -- 既知の問題 - - 一部のブラウザ翻訳プラグインが、このソフトウェアのフロントエンドの実行を妨害する - - gradioバージョンが高すぎるか低すぎると、多くの異常が引き起こされる - -## 参考学習 - -``` -コードの中には、他の優れたプロジェクトの設計から参考にしたものがたくさん含まれています: - -# プロジェクト1:清華ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# プロジェクト2:清華JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# プロジェクト3:Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# プロジェクト4:ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# プロジェクト5:ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# その他: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/haoqi7/research/widgets/__init__.py b/spaces/haoqi7/research/widgets/__init__.py deleted file mode 100644 index 51f7a9f23ff8dae28e535b706c7200be2340c3a1..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/widgets/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .body import render_body -from .sidebar import render_sidebar -from .utils import readfile, generate_html_pyecharts diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/test_model_analysis.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/test_model_analysis.py deleted file mode 100644 index 0e3f84c9354746fc634aca997abb232424ddebb2..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/test_model_analysis.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
- - -import unittest -import torch - -import detectron2.model_zoo as model_zoo -from detectron2.config import get_cfg -from detectron2.modeling import build_model -from detectron2.utils.analysis import flop_count_operators, parameter_count - - -def get_model_zoo(config_path): - """ - Like model_zoo.get, but do not load any weights (even pretrained) - """ - cfg_file = model_zoo.get_config_file(config_path) - cfg = get_cfg() - cfg.merge_from_file(cfg_file) - if not torch.cuda.is_available(): - cfg.MODEL.DEVICE = "cpu" - return build_model(cfg) - - -class RetinaNetTest(unittest.TestCase): - def setUp(self): - self.model = get_model_zoo("COCO-Detection/retinanet_R_50_FPN_1x.yaml") - - def test_flop(self): - # RetinaNet supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800)}] - res = flop_count_operators(self.model, inputs) - self.assertTrue(int(res["conv"]), 146) # 146B flops - - def test_param_count(self): - res = parameter_count(self.model) - self.assertTrue(res[""], 37915572) - self.assertTrue(res["backbone"], 31452352) - - -class FasterRCNNTest(unittest.TestCase): - def setUp(self): - self.model = get_model_zoo("COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml") - - def test_flop(self): - # Faster R-CNN supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800)}] - res = flop_count_operators(self.model, inputs) - - # This only checks flops for backbone & proposal generator - # Flops for box head is not conv, and depends on #proposals, which is - # almost 0 for random inputs. - self.assertTrue(int(res["conv"]), 117) - - def test_param_count(self): - res = parameter_count(self.model) - self.assertTrue(res[""], 41699936) - self.assertTrue(res["backbone"], 26799296) diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/TPS-Brass-Section-Module-VSTi.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/TPS-Brass-Section-Module-VSTi.md deleted file mode 100644 index 451e736105be1c92e73e8a96fce73e8895f740cf..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/TPS-Brass-Section-Module-VSTi.md +++ /dev/null @@ -1,74 +0,0 @@ -## TPS - Brass Section Module VSTi - - - - - - ![TPS - Brass Section Module VSTi](https://www.boatersland.com/images/img/brp-5006484-kit-tps-sensor-5006484.png) - - - - - -**Download File - [https://www.google.com/url?q=https%3A%2F%2Fbltlly.com%2F2txmlY&sa=D&sntz=1&usg=AOvVaw3pVhNI7QVDsf3NODXfJxQA](https://www.google.com/url?q=https%3A%2F%2Fbltlly.com%2F2txmlY&sa=D&sntz=1&usg=AOvVaw3pVhNI7QVDsf3NODXfJxQA)** - - - - - - - - - - - - Here is a possible title and article with HTML formatting for the keyword "TPS - Brass Section Module VSTi": - -# TPS - Brass Section Module VSTi: A Free and Realistic Virtual Instrument for Brass Sounds - - - -If you are looking for a free and realistic virtual instrument plugin that can produce high-quality brass sounds, you might want to check out TPS - Brass Section Module VSTi. This plugin was created by Mishael Nekrasov, a Ukrainian sound designer and composer, who used some samples from the Kurzweil libraries to create 32 brass patches that cover various styles and articulations. You can use this plugin in any DAW that supports VST2 format, such as Cubase, FL Studio, Reaper, and Ableton Live. - - - -TPS - Brass Section Module VSTi has a simple and intuitive interface that lets you control various parameters of the brass sounds, such as ADSR envelope, low-pass filter, LFO modulation, fine tuning, transpose, glide, and mono/poly mode. 
You can also switch between different patches using the drop-down menu or the arrow buttons. The patches include falls, stabs, quintets, "fat" sections, and legatos. Some of the patches were used in the song "Wild Dances" by Ruslana, which won the 2004 Eurovision Song Contest. - - - -The plugin has a 24-bit sample resolution and a size of about 156 MB. It does not require a lot of hard drive space or CPU power to run smoothly. The sound quality is impressive and realistic, and you can use it for various genres of music that require brass sounds, such as jazz, funk, soul, pop, rock, orchestral, and more. You can also add some effects to enhance the sound further, such as reverb, delay, chorus, phaser, distortion, etc. - - - -TPS - Brass Section Module VSTi is a free plugin that you can download from the link below. You will need to use a program like WinRAR or 7-Zip to extract the files from the compressed archive. Then you can copy the DLL file to your VST plugins folder and scan it with your DAW. If you have a 64-bit DAW, you might need to use a bridge program like jBridge to make it compatible. You can also watch a demo video of the plugin on YouTube to hear how it sounds. - - - -TPS - Brass Section Module VSTi is a great plugin for anyone who wants to add some realistic brass sounds to their music projects without spending any money. It is easy to use and has a variety of patches that can suit different musical needs. You can download it for free from the link below and enjoy making some brass music with it. - - - -**Download link:** [http://www.mediafire.com/file/1ywde8k...](http://www.mediafire.com/file/1ywde8k...) - - - -**Demo video:** [https://www.youtube.com/watch?v=dl2YxmHqlS0](https://www.youtube.com/watch?v=dl2YxmHqlS0) - -Here are a few more paragraphs for the article with HTML formatting for the keyword "TPS - Brass Section Module VSTi": - -If you want to use TPS - Brass Section Module VSTi in your music projects, you will need to follow some simple steps. First, you will need to download the plugin from the link provided above and extract the DLL file from the compressed archive. Then, you will need to copy the DLL file to your VST plugins folder, which is usually located in your DAW's installation directory or in a custom location that you can specify in your DAW's preferences. Next, you will need to scan your VST plugins folder with your DAW and make sure that TPS - Brass Section Module VSTi is recognized and available in your plugin list. - - - -Once you have loaded TPS - Brass Section Module VSTi in your DAW, you can start playing with it using your MIDI keyboard or controller. You can select different patches from the drop-down menu or use the arrow buttons to browse through them. You can also adjust various parameters of the brass sounds using the knobs and sliders on the interface. For example, you can change the attack, decay, sustain, and release of the envelope, the cutoff and resonance of the filter, the amount and type of modulation, the fine tuning and transpose of the pitch, and the glide and mode of the voice. You can also use your MIDI controller's modulation wheel or pitch bend wheel to add some expression and variation to the brass sounds. - - - -TPS - Brass Section Module VSTi is a versatile and realistic plugin that can produce a wide range of brass sounds for different musical genres and styles. You can use it for jazz, funk, soul, pop, rock, orchestral, and more. You can also layer it with other instruments or effects to create rich and complex sounds. 
For example, you can add some reverb to create a spacious and ambient sound, some delay to create a rhythmic and echoey sound, some chorus to create a lush and wide sound, some phaser to create a swirling and psychedelic sound, or some distortion to create a gritty and aggressive sound. The possibilities are endless with TPS - Brass Section Module VSTi. - - dfd1c89656 - - - - - diff --git a/spaces/hlopez/Waste-Detector/classifier.py b/spaces/hlopez/Waste-Detector/classifier.py deleted file mode 100644 index 3bb0815f9fce6d7ca5d707e54826f5dc4612934a..0000000000000000000000000000000000000000 --- a/spaces/hlopez/Waste-Detector/classifier.py +++ /dev/null @@ -1,97 +0,0 @@ -import timm -import torch.nn as nn -import albumentations as A -import torch -import cv2 - -class CustomNormalization(A.ImageOnlyTransform): - def _norm(self, img): - return img / 255. - - def apply(self, img, **params): - return self._norm(img) - -def transform_image(image, size): - transforms = [ - A.Resize(size, size, - interpolation=cv2.INTER_NEAREST), - CustomNormalization(p=1), - ] - - augs = A.Compose(transforms) - transformed = augs(image=image) - - return transformed['image'] - -class CustomEfficientNet(nn.Module): - """ - This class defines a custom EfficientNet network. - - Parameters - ---------- - target_size : int - Number of units for the output layer. - pretrained : bool - Determine if pretrained weights are used. - - Attributes - ---------- - model : nn.Module - EfficientNet model. - """ - def __init__(self, model_name : str = 'efficientnet_b0', - target_size : int = 4, pretrained : bool = True): - super().__init__() - self.model = timm.create_model(model_name, pretrained=pretrained) - - # Modify the classifier layer - in_features = self.model.classifier.in_features - self.model.classifier = nn.Sequential( - #nn.Dropout(0.5), - nn.Linear(in_features, 256), - nn.ReLU(), - #nn.Dropout(0.5), - nn.Linear(256, target_size) - ) - - def forward(self, x : torch.Tensor) -> torch.Tensor: - x = self.model(x) - - return x - -class CustomViT(nn.Module): - """ - This class defines a custom ViT network. - - Parameters - ---------- - target_size : int - Number of units for the output layer. - pretrained : bool - Determine if pretrained weights are used. - - Attributes - ---------- - model : nn.Module - CustomViT model. - """ - def __init__(self, model_name : str = 'vit_base_patch16_224', - target_size : int = 4, pretrained : bool = True): - super().__init__() - self.model = timm.create_model(model_name, - pretrained=pretrained, - num_classes=target_size) - - in_features = self.model.head.in_features - self.model.head = nn.Sequential( - #nn.Dropout(0.5), - nn.Linear(in_features, 256), - nn.ReLU(), - nn.Dropout(0.5), - nn.Linear(256, target_size) - ) - - def forward(self, x : torch.Tensor) -> torch.Tensor: - x = self.model(x) - - return x diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/predict.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/predict.py deleted file mode 100644 index 37463ba1312b7476ef0fd2ccf039e61f5dfee7c0..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/predict.py +++ /dev/null @@ -1,853 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import argparse -from copy import deepcopy -from typing import Tuple, Union, List - -import numpy as np -from batchgenerators.augmentations.utils import resize_segmentation -from nnunet.inference.segmentation_export import save_segmentation_nifti_from_softmax, save_segmentation_nifti -from batchgenerators.utilities.file_and_folder_operations import * -from multiprocessing import Process, Queue -import torch -import SimpleITK as sitk -import shutil -from multiprocessing import Pool -from nnunet.postprocessing.connected_components import load_remove_save, load_postprocessing -from nnunet.training.model_restore import load_model_and_checkpoint_files -from nnunet.training.network_training.nnUNetTrainer import nnUNetTrainer -from nnunet.utilities.one_hot_encoding import to_one_hot - - -def preprocess_save_to_queue(preprocess_fn, q, list_of_lists, output_files, segs_from_prev_stage, classes, - transpose_forward): - # suppress output - # sys.stdout = open(os.devnull, 'w') - - errors_in = [] - for i, l in enumerate(list_of_lists): - try: - output_file = output_files[i] - print("preprocessing", output_file) - d, _, dct = preprocess_fn(l) - dct['classes'] = [[0]+ cl for cl in classes] - # print(output_file, dct) - if segs_from_prev_stage[i] is not None: - assert isfile(segs_from_prev_stage[i]) and segs_from_prev_stage[i].endswith( - ".nii.gz"), "segs_from_prev_stage" \ - " must point to a " \ - "segmentation file" - seg_prev = sitk.GetArrayFromImage(sitk.ReadImage(segs_from_prev_stage[i])) - # check to see if shapes match - img = sitk.GetArrayFromImage(sitk.ReadImage(l[0])) - assert all([i == j for i, j in zip(seg_prev.shape, img.shape)]), "image and segmentation from previous " \ - "stage don't have the same pixel array " \ - "shape! image: %s, seg_prev: %s" % \ - (l[0], segs_from_prev_stage[i]) - seg_prev = seg_prev.transpose(transpose_forward) - seg_reshaped = resize_segmentation(seg_prev, d.shape[1:], order=1) - seg_reshaped = to_one_hot(seg_reshaped, classes) - d = np.vstack((d, seg_reshaped)).astype(np.float32) - """There is a problem with python process communication that prevents us from communicating obejcts - larger than 2 GB between processes (basically when the length of the pickle string that will be sent is - communicated by the multiprocessing.Pipe object then the placeholder (\%i I think) does not allow for long - enough strings (lol). This could be fixed by changing i to l (for long) but that would require manually - patching system python code. We circumvent that problem here by saving softmax_pred to a npy file that will - then be read (and finally deleted) by the Process. save_segmentation_nifti_from_softmax can take either - filename or np.ndarray and will handle this automatically""" - print(d.shape) - if np.prod(d.shape) > (2e9 / 4 * 0.85): # *0.85 just to be save, 4 because float32 is 4 bytes - print( - "This output is too large for python process-process communication. 
" - "Saving output temporarily to disk") - np.save(output_file[:-7] + ".npy", d) - d = output_file[:-7] + ".npy" - q.put((output_file, (d, dct))) - except KeyboardInterrupt: - raise KeyboardInterrupt - except Exception as e: - print("error in", l) - print(e) - q.put("end") - if len(errors_in) > 0: - print("There were some errors in the following cases:", errors_in) - print("These cases were ignored.") - else: - print("This worker has ended successfully, no errors to report") - # restore output - # sys.stdout = sys.__stdout__ - - -def preprocess_multithreaded(trainer, list_of_lists, output_files, num_processes=2, segs_from_prev_stage=None): - if segs_from_prev_stage is None: - segs_from_prev_stage = [None] * len(list_of_lists) - - num_processes = min(len(list_of_lists), num_processes) - - classes = [list(range(1, num_classes)) for num_classes in trainer.num_classes] - assert isinstance(trainer, nnUNetTrainer) - q = Queue(1) - processes = [] - for i in range(num_processes): - """ - pr = preprocess_save_to_queue(trainer.preprocess_patient, q, list_of_lists[i::num_processes], - output_files[i::num_processes], - segs_from_prev_stage[i::num_processes], - classes, trainer.plans['transpose_forward']) - """ - pr = Process(target=preprocess_save_to_queue, args=(trainer.preprocess_patient, q, - list_of_lists[i::num_processes], - output_files[i::num_processes], - segs_from_prev_stage[i::num_processes], - classes, trainer.plans['transpose_forward'])) - pr.start() - - processes.append(pr) - - - try: - end_ctr = 0 - while end_ctr != num_processes: - item = q.get() - if item == "end": - end_ctr += 1 - continue - else: - yield item - - finally: - for p in processes: - if p.is_alive(): - p.terminate() # this should not happen but better safe than sorry right - p.join() - - q.close() - - -def predict_cases(model, list_of_lists, output_filenames, folds, save_npz, num_threads_preprocessing, - num_threads_nifti_save, segs_from_prev_stage=None, do_tta=True, mixed_precision=True, - overwrite_existing=False, - all_in_gpu=False, step_size=0.5, checkpoint_name="model_final_checkpoint", - segmentation_export_kwargs: dict = None, disable_postprocessing: bool = False): - """ - :param segmentation_export_kwargs: - :param model: folder where the model is saved, must contain fold_x subfolders - :param list_of_lists: [[case0_0000.nii.gz, case0_0001.nii.gz], [case1_0000.nii.gz, case1_0001.nii.gz], ...] - :param output_filenames: [output_file_case0.nii.gz, output_file_case1.nii.gz, ...] - :param folds: default: (0, 1, 2, 3, 4) (but can also be 'all' or a subset of the five folds, for example use (0, ) - for using only fold_0 - :param save_npz: default: False - :param num_threads_preprocessing: - :param num_threads_nifti_save: - :param segs_from_prev_stage: - :param do_tta: default: True, can be set to False for a 8x speedup at the cost of a reduced segmentation quality - :param overwrite_existing: default: True - :param mixed_precision: if None then we take no action. 
If True/False we overwrite what the model has in its init - :return: - """ - assert len(list_of_lists) == len(output_filenames) - if segs_from_prev_stage is not None: assert len(segs_from_prev_stage) == len(output_filenames) - - pool = Pool(num_threads_nifti_save) - results = [] - - cleaned_output_files = [] - for o in output_filenames: - dr, f = os.path.split(o) - if len(dr) > 0: - maybe_mkdir_p(dr) - if not f.endswith(".nii.gz"): - f, _ = os.path.splitext(f) - f = f + ".nii.gz" - cleaned_output_files.append(join(dr, f)) - - if not overwrite_existing: - print("number of cases:", len(list_of_lists)) - # if save_npz=True then we should also check for missing npz files - not_done_idx = [i for i, j in enumerate(cleaned_output_files) if (not isfile(j)) or (save_npz and not isfile(j[:-7] + '.npz'))] - - cleaned_output_files = [cleaned_output_files[i] for i in not_done_idx] - list_of_lists = [list_of_lists[i] for i in not_done_idx] - if segs_from_prev_stage is not None: - segs_from_prev_stage = [segs_from_prev_stage[i] for i in not_done_idx] - - print("number of cases that still need to be predicted:", len(cleaned_output_files)) - - print("emptying cuda cache") - torch.cuda.empty_cache() - - print("loading parameters for folds,", folds) - trainer, params = load_model_and_checkpoint_files(model, folds, mixed_precision=mixed_precision, - checkpoint_name=checkpoint_name) - - if segmentation_export_kwargs is None: - if 'segmentation_export_params' in trainer.plans.keys(): - force_separate_z = trainer.plans['segmentation_export_params']['force_separate_z'] - interpolation_order = trainer.plans['segmentation_export_params']['interpolation_order'] - interpolation_order_z = trainer.plans['segmentation_export_params']['interpolation_order_z'] - else: - force_separate_z = None - interpolation_order = 1 - interpolation_order_z = 0 - else: - force_separate_z = segmentation_export_kwargs['force_separate_z'] - interpolation_order = segmentation_export_kwargs['interpolation_order'] - interpolation_order_z = segmentation_export_kwargs['interpolation_order_z'] - - print("starting preprocessing generator") - - preprocessing = preprocess_multithreaded(trainer, list_of_lists, cleaned_output_files, num_threads_preprocessing, - segs_from_prev_stage) - print("starting prediction...") - all_output_files = [] - for preprocessed in preprocessing: - output_filename, (d, dct) = preprocessed - all_output_files.append(output_filename) - if isinstance(d, str): - data = np.load(d) - os.remove(d) - d = data - - print("predicting", output_filename) - trainer.load_checkpoint_ram(params[0], False) - softmax = trainer.predict_preprocessed_data_return_seg_and_softmax( - d, do_mirroring=do_tta, mirror_axes=trainer.data_aug_params['mirror_axes'], use_sliding_window=True, - step_size=step_size, use_gaussian=True, all_in_gpu=all_in_gpu, - mixed_precision=mixed_precision)[1] - - for p in params[1:]: - trainer.load_checkpoint_ram(p, False) - softmax += trainer.predict_preprocessed_data_return_seg_and_softmax( - d, do_mirroring=do_tta, mirror_axes=trainer.data_aug_params['mirror_axes'], use_sliding_window=True, - step_size=step_size, use_gaussian=True, all_in_gpu=all_in_gpu, - mixed_precision=mixed_precision)[1] - - if len(params) > 1: - softmax /= len(params) - - transpose_forward = trainer.plans.get('transpose_forward') - if transpose_forward is not None: - transpose_backward = trainer.plans.get('transpose_backward') - softmax = softmax.transpose([0] + [i + 1 for i in transpose_backward]) - - if save_npz: - npz_file = 
output_filename[:-7] + ".npz" - else: - npz_file = None - - if hasattr(trainer, 'regions_class_order'): - region_class_order = trainer.regions_class_order - else: - region_class_order = None - - """There is a problem with python process communication that prevents us from communicating obejcts - larger than 2 GB between processes (basically when the length of the pickle string that will be sent is - communicated by the multiprocessing.Pipe object then the placeholder (\%i I think) does not allow for long - enough strings (lol). This could be fixed by changing i to l (for long) but that would require manually - patching system python code. We circumvent that problem here by saving softmax_pred to a npy file that will - then be read (and finally deleted) by the Process. save_segmentation_nifti_from_softmax can take either - filename or np.ndarray and will handle this automatically""" - bytes_per_voxel = 4 - if all_in_gpu: - bytes_per_voxel = 2 # if all_in_gpu then the return value is half (float16) - if np.prod(softmax.shape) > (2e9 / bytes_per_voxel * 0.85): # * 0.85 just to be save - print( - "This output is too large for python process-process communication. Saving output temporarily to disk") - np.save(output_filename[:-7] + ".npy", softmax) - softmax = output_filename[:-7] + ".npy" - - """ - save_segmentation_nifti_from_softmax(softmax, output_filename, dct, interpolation_order, region_class_order, - None, None, - npz_file, None, force_separate_z, interpolation_order_z) - """ - results.append(pool.starmap_async(save_segmentation_nifti_from_softmax, - ((softmax, output_filename, dct, interpolation_order, region_class_order, - None, None, - npz_file, None, force_separate_z, interpolation_order_z),) - )) - - - print("inference done. Now waiting for the segmentation export to finish...") - _ = [i.get() for i in results] - # now apply postprocessing - # first load the postprocessing properties if they are present. Else raise a well visible warning - if not disable_postprocessing: - results = [] - pp_file = join(model, "postprocessing.json") - if isfile(pp_file): - print("postprocessing...") - shutil.copy(pp_file, os.path.abspath(os.path.dirname(output_filenames[0]))) - # for_which_classes stores for which of the classes everything but the largest connected component needs to be - # removed - for_which_classes, min_valid_obj_size = load_postprocessing(pp_file) - results.append(pool.starmap_async(load_remove_save, - zip(output_filenames, output_filenames, - [for_which_classes] * len(output_filenames), - [min_valid_obj_size] * len(output_filenames)))) - _ = [i.get() for i in results] - else: - print("WARNING! Cannot run postprocessing because the postprocessing file is missing. 
Make sure to run " - "consolidate_folds in the output folder of the model first!\nThe folder you need to run this in is " - "%s" % model) - - pool.close() - pool.join() - - -def predict_cases_fast(model, list_of_lists, output_filenames, folds, num_threads_preprocessing, - num_threads_nifti_save, segs_from_prev_stage=None, do_tta=True, mixed_precision=True, - overwrite_existing=False, - all_in_gpu=False, step_size=0.5, checkpoint_name="model_final_checkpoint", - segmentation_export_kwargs: dict = None, disable_postprocessing: bool = False): - assert len(list_of_lists) == len(output_filenames) - if segs_from_prev_stage is not None: assert len(segs_from_prev_stage) == len(output_filenames) - - pool = Pool(num_threads_nifti_save) - results = [] - - cleaned_output_files = [] - for o in output_filenames: - dr, f = os.path.split(o) - if len(dr) > 0: - maybe_mkdir_p(dr) - if not f.endswith(".nii.gz"): - f, _ = os.path.splitext(f) - f = f + ".nii.gz" - cleaned_output_files.append(join(dr, f)) - - if not overwrite_existing: - print("number of cases:", len(list_of_lists)) - not_done_idx = [i for i, j in enumerate(cleaned_output_files) if not isfile(j)] - - cleaned_output_files = [cleaned_output_files[i] for i in not_done_idx] - list_of_lists = [list_of_lists[i] for i in not_done_idx] - if segs_from_prev_stage is not None: - segs_from_prev_stage = [segs_from_prev_stage[i] for i in not_done_idx] - - print("number of cases that still need to be predicted:", len(cleaned_output_files)) - - print("emptying cuda cache") - torch.cuda.empty_cache() - - print("loading parameters for folds,", folds) - trainer, params = load_model_and_checkpoint_files(model, folds, mixed_precision=mixed_precision, - checkpoint_name=checkpoint_name) - - if segmentation_export_kwargs is None: - if 'segmentation_export_params' in trainer.plans.keys(): - force_separate_z = trainer.plans['segmentation_export_params']['force_separate_z'] - interpolation_order = trainer.plans['segmentation_export_params']['interpolation_order'] - interpolation_order_z = trainer.plans['segmentation_export_params']['interpolation_order_z'] - else: - force_separate_z = None - interpolation_order = 1 - interpolation_order_z = 0 - else: - force_separate_z = segmentation_export_kwargs['force_separate_z'] - interpolation_order = segmentation_export_kwargs['interpolation_order'] - interpolation_order_z = segmentation_export_kwargs['interpolation_order_z'] - - print("starting preprocessing generator") - preprocessing = preprocess_multithreaded(trainer, list_of_lists, cleaned_output_files, num_threads_preprocessing, - segs_from_prev_stage) - - print("starting prediction...") - for preprocessed in preprocessing: - print("getting data from preprocessor") - output_filename, (d, dct) = preprocessed - print("got something") - if isinstance(d, str): - print("what I got is a string, so I need to load a file") - data = np.load(d) - os.remove(d) - d = data - - # preallocate the output arrays - # same dtype as the return value in predict_preprocessed_data_return_seg_and_softmax (saves time) - softmax_aggr = None # np.zeros((trainer.num_classes, *d.shape[1:]), dtype=np.float16) - all_seg_outputs = np.zeros((len(params), *d.shape[1:]), dtype=int) - print("predicting", output_filename) - - for i, p in enumerate(params): - trainer.load_checkpoint_ram(p, False) - - res = trainer.predict_preprocessed_data_return_seg_and_softmax(d, do_mirroring=do_tta, - mirror_axes=trainer.data_aug_params['mirror_axes'], - use_sliding_window=True, - step_size=step_size, use_gaussian=True, - 
all_in_gpu=all_in_gpu, - mixed_precision=mixed_precision) - - if len(params) > 1: - # otherwise we dont need this and we can save ourselves the time it takes to copy that - print("aggregating softmax") - if softmax_aggr is None: - softmax_aggr = res[1] - else: - softmax_aggr += res[1] - all_seg_outputs[i] = res[0] - - print("obtaining segmentation map") - if len(params) > 1: - # we dont need to normalize the softmax by 1 / len(params) because this would not change the outcome of the argmax - seg = softmax_aggr.argmax(0) - else: - seg = all_seg_outputs[0] - - print("applying transpose_backward") - transpose_forward = trainer.plans.get('transpose_forward') - if transpose_forward is not None: - transpose_backward = trainer.plans.get('transpose_backward') - seg = seg.transpose([i for i in transpose_backward]) - - if hasattr(trainer, 'regions_class_order'): - region_class_order = trainer.regions_class_order - else: - region_class_order = None - assert region_class_order is None, "predict_cases_fast can only work with regular softmax predictions " \ - "and is therefore unable to handle trainer classes with region_class_order" - - print("initializing segmentation export") - results.append(pool.starmap_async(save_segmentation_nifti, - ((seg, output_filename, dct, interpolation_order, force_separate_z, - interpolation_order_z),) - )) - print("done") - - print("inference done. Now waiting for the segmentation export to finish...") - _ = [i.get() for i in results] - # now apply postprocessing - # first load the postprocessing properties if they are present. Else raise a well visible warning - - if not disable_postprocessing: - results = [] - pp_file = join(model, "postprocessing.json") - if isfile(pp_file): - print("postprocessing...") - shutil.copy(pp_file, os.path.dirname(output_filenames[0])) - # for_which_classes stores for which of the classes everything but the largest connected component needs to be - # removed - for_which_classes, min_valid_obj_size = load_postprocessing(pp_file) - results.append(pool.starmap_async(load_remove_save, - zip(output_filenames, output_filenames, - [for_which_classes] * len(output_filenames), - [min_valid_obj_size] * len(output_filenames)))) - _ = [i.get() for i in results] - else: - print("WARNING! Cannot run postprocessing because the postprocessing file is missing. 
Make sure to run " - "consolidate_folds in the output folder of the model first!\nThe folder you need to run this in is " - "%s" % model) - - pool.close() - pool.join() - - -def predict_cases_fastest(model, list_of_lists, output_filenames, folds, num_threads_preprocessing, - num_threads_nifti_save, segs_from_prev_stage=None, do_tta=True, mixed_precision=True, - overwrite_existing=False, all_in_gpu=False, step_size=0.5, - checkpoint_name="model_final_checkpoint", disable_postprocessing: bool = False): - assert len(list_of_lists) == len(output_filenames) - if segs_from_prev_stage is not None: assert len(segs_from_prev_stage) == len(output_filenames) - - pool = Pool(num_threads_nifti_save) - results = [] - - cleaned_output_files = [] - for o in output_filenames: - dr, f = os.path.split(o) - if len(dr) > 0: - maybe_mkdir_p(dr) - if not f.endswith(".nii.gz"): - f, _ = os.path.splitext(f) - f = f + ".nii.gz" - cleaned_output_files.append(join(dr, f)) - - if not overwrite_existing: - print("number of cases:", len(list_of_lists)) - not_done_idx = [i for i, j in enumerate(cleaned_output_files) if not isfile(j)] - - cleaned_output_files = [cleaned_output_files[i] for i in not_done_idx] - list_of_lists = [list_of_lists[i] for i in not_done_idx] - if segs_from_prev_stage is not None: - segs_from_prev_stage = [segs_from_prev_stage[i] for i in not_done_idx] - - print("number of cases that still need to be predicted:", len(cleaned_output_files)) - - print("emptying cuda cache") - torch.cuda.empty_cache() - - print("loading parameters for folds,", folds) - trainer, params = load_model_and_checkpoint_files(model, folds, mixed_precision=mixed_precision, - checkpoint_name=checkpoint_name) - - print("starting preprocessing generator") - preprocessing = preprocess_multithreaded(trainer, list_of_lists, cleaned_output_files, num_threads_preprocessing, - segs_from_prev_stage) - - print("starting prediction...") - for preprocessed in preprocessing: - print("getting data from preprocessor") - output_filename, (d, dct) = preprocessed - print("got something") - if isinstance(d, str): - print("what I got is a string, so I need to load a file") - data = np.load(d) - os.remove(d) - d = data - - # preallocate the output arrays - # same dtype as the return value in predict_preprocessed_data_return_seg_and_softmax (saves time) - all_softmax_outputs = np.zeros((len(params), trainer.num_classes, *d.shape[1:]), dtype=np.float16) - all_seg_outputs = np.zeros((len(params), *d.shape[1:]), dtype=int) - print("predicting", output_filename) - - for i, p in enumerate(params): - trainer.load_checkpoint_ram(p, False) - res = trainer.predict_preprocessed_data_return_seg_and_softmax(d, do_mirroring=do_tta, - mirror_axes=trainer.data_aug_params['mirror_axes'], - use_sliding_window=True, - step_size=step_size, use_gaussian=True, - all_in_gpu=all_in_gpu, - mixed_precision=mixed_precision) - if len(params) > 1: - # otherwise we dont need this and we can save ourselves the time it takes to copy that - all_softmax_outputs[i] = res[1] - all_seg_outputs[i] = res[0] - - if hasattr(trainer, 'regions_class_order'): - region_class_order = trainer.regions_class_order - else: - region_class_order = None - assert region_class_order is None, "predict_cases_fastest can only work with regular softmax predictions " \ - "and is therefore unable to handle trainer classes with region_class_order" - - print("aggregating predictions") - if len(params) > 1: - softmax_mean = np.mean(all_softmax_outputs, 0) - seg = softmax_mean.argmax(0) - else: - seg = 
all_seg_outputs[0] - - print("applying transpose_backward") - transpose_forward = trainer.plans.get('transpose_forward') - if transpose_forward is not None: - transpose_backward = trainer.plans.get('transpose_backward') - seg = seg.transpose([i for i in transpose_backward]) - - print("initializing segmentation export") - results.append(pool.starmap_async(save_segmentation_nifti, - ((seg, output_filename, dct, 0, None),) - )) - print("done") - - print("inference done. Now waiting for the segmentation export to finish...") - _ = [i.get() for i in results] - # now apply postprocessing - # first load the postprocessing properties if they are present. Else raise a well visible warning - if not disable_postprocessing: - results = [] - pp_file = join(model, "postprocessing.json") - if isfile(pp_file): - print("postprocessing...") - shutil.copy(pp_file, os.path.dirname(output_filenames[0])) - # for_which_classes stores for which of the classes everything but the largest connected component needs to be - # removed - for_which_classes, min_valid_obj_size = load_postprocessing(pp_file) - results.append(pool.starmap_async(load_remove_save, - zip(output_filenames, output_filenames, - [for_which_classes] * len(output_filenames), - [min_valid_obj_size] * len(output_filenames)))) - _ = [i.get() for i in results] - else: - print("WARNING! Cannot run postprocessing because the postprocessing file is missing. Make sure to run " - "consolidate_folds in the output folder of the model first!\nThe folder you need to run this in is " - "%s" % model) - - pool.close() - pool.join() - - -def check_input_folder_and_return_caseIDs(input_folder, expected_num_modalities): - print("This model expects %d input modalities for each image" % expected_num_modalities) - files = subfiles(input_folder, suffix=".nii.gz", join=False, sort=True) - - maybe_case_ids = np.unique([i[:-12] for i in files]) - - remaining = deepcopy(files) - missing = [] - - assert len(files) > 0, "input folder did not contain any images (expected to find .nii.gz file endings)" - - # now check if all required files are present and that no unexpected files are remaining - for c in maybe_case_ids: - for n in range(expected_num_modalities): - expected_output_file = c + "_%04.0d.nii.gz" % n - if not isfile(join(input_folder, expected_output_file)): - missing.append(expected_output_file) - else: - remaining.remove(expected_output_file) - - print("Found %d unique case ids, here are some examples:" % len(maybe_case_ids), - np.random.choice(maybe_case_ids, min(len(maybe_case_ids), 10))) - print("If they don't look right, make sure to double check your filenames. They must end with _0000.nii.gz etc") - - if len(remaining) > 0: - print("found %d unexpected remaining files in the folder. 
Here are some examples:" % len(remaining), - np.random.choice(remaining, min(len(remaining), 10))) - - if len(missing) > 0: - print("Some files are missing:") - print(missing) - raise RuntimeError("missing files in input_folder") - - return maybe_case_ids - - -def predict_from_folder(model: str, input_folder: str, output_folder: str, folds: Union[Tuple[int], List[int]], - save_npz: bool, num_threads_preprocessing: int, num_threads_nifti_save: int, - lowres_segmentations: Union[str, None], - part_id: int, num_parts: int, tta: bool, mixed_precision: bool = True, - overwrite_existing: bool = True, mode: str = 'normal', overwrite_all_in_gpu: bool = None, - step_size: float = 0.5, checkpoint_name: str = "model_final_checkpoint", - segmentation_export_kwargs: dict = None, disable_postprocessing: bool = False): - """ - here we use the standard naming scheme to generate list_of_lists and output_files needed by predict_cases - - :param model: - :param input_folder: - :param output_folder: - :param folds: - :param save_npz: - :param num_threads_preprocessing: - :param num_threads_nifti_save: - :param lowres_segmentations: - :param part_id: - :param num_parts: - :param tta: - :param mixed_precision: - :param overwrite_existing: if not None then it will be overwritten with whatever is in there. None is default (no overwrite) - :return: - """ - maybe_mkdir_p(output_folder) - shutil.copy(join(model, 'plans.pkl'), output_folder) - - assert isfile(join(model, "plans.pkl")), "Folder with saved model weights must contain a plans.pkl file" - expected_num_modalities = load_pickle(join(model, "plans.pkl"))['num_modalities'] - - # check input folder integrity - case_ids = check_input_folder_and_return_caseIDs(input_folder, expected_num_modalities) - - output_files = [join(output_folder, i + ".nii.gz") for i in case_ids] - all_files = subfiles(input_folder, suffix=".nii.gz", join=False, sort=True) - list_of_lists = [[join(input_folder, i) for i in all_files if i[:len(j)].startswith(j) and - len(i) == (len(j) + 12)] for j in case_ids] - - if lowres_segmentations is not None: - assert isdir(lowres_segmentations), "if lowres_segmentations is not None then it must point to a directory" - lowres_segmentations = [join(lowres_segmentations, i + ".nii.gz") for i in case_ids] - assert all([isfile(i) for i in lowres_segmentations]), "not all lowres_segmentations files are present. 
" \ - "(I was searching for case_id.nii.gz in that folder)" - lowres_segmentations = lowres_segmentations[part_id::num_parts] - else: - lowres_segmentations = None - - if mode == "normal": - if overwrite_all_in_gpu is None: - all_in_gpu = False - else: - all_in_gpu = overwrite_all_in_gpu - - return predict_cases(model, list_of_lists[part_id::num_parts], output_files[part_id::num_parts], folds, - save_npz, num_threads_preprocessing, num_threads_nifti_save, lowres_segmentations, tta, - mixed_precision=mixed_precision, overwrite_existing=overwrite_existing, - all_in_gpu=all_in_gpu, - step_size=step_size, checkpoint_name=checkpoint_name, - segmentation_export_kwargs=segmentation_export_kwargs, - disable_postprocessing=disable_postprocessing) - elif mode == "fast": - if overwrite_all_in_gpu is None: - all_in_gpu = False - else: - all_in_gpu = overwrite_all_in_gpu - - assert save_npz is False - return predict_cases_fast(model, list_of_lists[part_id::num_parts], output_files[part_id::num_parts], folds, - num_threads_preprocessing, num_threads_nifti_save, lowres_segmentations, - tta, mixed_precision=mixed_precision, overwrite_existing=overwrite_existing, - all_in_gpu=all_in_gpu, - step_size=step_size, checkpoint_name=checkpoint_name, - segmentation_export_kwargs=segmentation_export_kwargs, - disable_postprocessing=disable_postprocessing) - elif mode == "fastest": - if overwrite_all_in_gpu is None: - all_in_gpu = False - else: - all_in_gpu = overwrite_all_in_gpu - - assert save_npz is False - return predict_cases_fastest(model, list_of_lists[part_id::num_parts], output_files[part_id::num_parts], folds, - num_threads_preprocessing, num_threads_nifti_save, lowres_segmentations, - tta, mixed_precision=mixed_precision, overwrite_existing=overwrite_existing, - all_in_gpu=all_in_gpu, - step_size=step_size, checkpoint_name=checkpoint_name, - disable_postprocessing=disable_postprocessing) - else: - raise ValueError("unrecognized mode. Must be normal, fast or fastest") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("-i", '--input_folder', help="Must contain all modalities for each patient in the correct" - " order (same as training). Files must be named " - "CASENAME_XXXX.nii.gz where XXXX is the modality " - "identifier (0000, 0001, etc)", required=True) - parser.add_argument('-o', "--output_folder", required=True, help="folder for saving predictions") - parser.add_argument('-m', '--model_output_folder', - help='model output folder. Will automatically discover the folds ' - 'that were ' - 'run and use those as an ensemble', required=True) - parser.add_argument('-f', '--folds', nargs='+', default='None', help="folds to use for prediction. Default is None " - "which means that folds will be detected " - "automatically in the model output folder") - parser.add_argument('-z', '--save_npz', required=False, action='store_true', help="use this if you want to ensemble" - " these predictions with those of" - " other models. Softmax " - "probabilities will be saved as " - "compresed numpy arrays in " - "output_folder and can be merged " - "between output_folders with " - "merge_predictions.py") - parser.add_argument('-l', '--lowres_segmentations', required=False, default='None', help="if model is the highres " - "stage of the cascade then you need to use -l to specify where the segmentations of the " - "corresponding lowres unet are. 
Here they are required to do a prediction") - parser.add_argument("--part_id", type=int, required=False, default=0, help="Used to parallelize the prediction of " - "the folder over several GPUs. If you " - "want to use n GPUs to predict this " - "folder you need to run this command " - "n times with --part_id=0, ... n-1 and " - "--num_parts=n (each with a different " - "GPU (for example via " - "CUDA_VISIBLE_DEVICES=X)") - parser.add_argument("--num_parts", type=int, required=False, default=1, - help="Used to parallelize the prediction of " - "the folder over several GPUs. If you " - "want to use n GPUs to predict this " - "folder you need to run this command " - "n times with --part_id=0, ... n-1 and " - "--num_parts=n (each with a different " - "GPU (via " - "CUDA_VISIBLE_DEVICES=X)") - parser.add_argument("--num_threads_preprocessing", required=False, default=6, type=int, help= - "Determines many background processes will be used for data preprocessing. Reduce this if you " - "run into out of memory (RAM) problems. Default: 6") - parser.add_argument("--num_threads_nifti_save", required=False, default=2, type=int, help= - "Determines many background processes will be used for segmentation export. Reduce this if you " - "run into out of memory (RAM) problems. Default: 2") - parser.add_argument("--tta", required=False, type=int, default=1, help="Set to 0 to disable test time data " - "augmentation (speedup of factor " - "4(2D)/8(3D)), " - "lower quality segmentations") - parser.add_argument("--overwrite_existing", required=False, type=int, default=1, help="Set this to 0 if you need " - "to resume a previous " - "prediction. Default: 1 " - "(=existing segmentations " - "in output_folder will be " - "overwritten)") - parser.add_argument("--mode", type=str, default="normal", required=False) - parser.add_argument("--all_in_gpu", type=str, default="None", required=False, help="can be None, False or True") - parser.add_argument("--step_size", type=float, default=0.5, required=False, help="don't touch") - # parser.add_argument("--interp_order", required=False, default=3, type=int, - # help="order of interpolation for segmentations, has no effect if mode=fastest") - # parser.add_argument("--interp_order_z", required=False, default=0, type=int, - # help="order of interpolation along z is z is done differently") - # parser.add_argument("--force_separate_z", required=False, default="None", type=str, - # help="force_separate_z resampling. Can be None, True or False, has no effect if mode=fastest") - parser.add_argument('--disable_mixed_precision', default=False, action='store_true', required=False, - help='Predictions are done with mixed precision by default. This improves speed and reduces ' - 'the required vram. If you want to disable mixed precision you can set this flag. 
Note ' - 'that yhis is not recommended (mixed precision is ~2x faster!)') - - args = parser.parse_args() - input_folder = args.input_folder - output_folder = args.output_folder - part_id = args.part_id - num_parts = args.num_parts - model = args.model_output_folder - folds = args.folds - save_npz = args.save_npz - lowres_segmentations = args.lowres_segmentations - num_threads_preprocessing = args.num_threads_preprocessing - num_threads_nifti_save = args.num_threads_nifti_save - tta = args.tta - step_size = args.step_size - - # interp_order = args.interp_order - # interp_order_z = args.interp_order_z - # force_separate_z = args.force_separate_z - - # if force_separate_z == "None": - # force_separate_z = None - # elif force_separate_z == "False": - # force_separate_z = False - # elif force_separate_z == "True": - # force_separate_z = True - # else: - # raise ValueError("force_separate_z must be None, True or False. Given: %s" % force_separate_z) - - overwrite = args.overwrite_existing - mode = args.mode - all_in_gpu = args.all_in_gpu - - if lowres_segmentations == "None": - lowres_segmentations = None - - if isinstance(folds, list): - if folds[0] == 'all' and len(folds) == 1: - pass - else: - folds = [int(i) for i in folds] - elif folds == "None": - folds = None - else: - raise ValueError("Unexpected value for argument folds") - - if tta == 0: - tta = False - elif tta == 1: - tta = True - else: - raise ValueError("Unexpected value for tta, Use 1 or 0") - - if overwrite == 0: - overwrite = False - elif overwrite == 1: - overwrite = True - else: - raise ValueError("Unexpected value for overwrite, Use 1 or 0") - - assert all_in_gpu in ['None', 'False', 'True'] - if all_in_gpu == "None": - all_in_gpu = None - elif all_in_gpu == "True": - all_in_gpu = True - elif all_in_gpu == "False": - all_in_gpu = False - - predict_from_folder(model, input_folder, output_folder, folds, save_npz, num_threads_preprocessing, - num_threads_nifti_save, lowres_segmentations, part_id, num_parts, tta, - mixed_precision=not args.disable_mixed_precision, - overwrite_existing=overwrite, mode=mode, overwrite_all_in_gpu=all_in_gpu, step_size=step_size) diff --git a/spaces/htukor/NLLB-Translator/langs.py b/spaces/htukor/NLLB-Translator/langs.py deleted file mode 100644 index e5e849a4f5427f5b22e1e0bcfbe00102ac0eef10..0000000000000000000000000000000000000000 --- a/spaces/htukor/NLLB-Translator/langs.py +++ /dev/null @@ -1,204 +0,0 @@ -LANGS = [ - "ace_Arab", - "ace_Latn", - "acm_Arab", - "acq_Arab", - "aeb_Arab", - "afr_Latn", - "ajp_Arab", - "aka_Latn", - "amh_Ethi", - "apc_Arab", - "arb_Arab", - "ars_Arab", - "ary_Arab", - "arz_Arab", - "asm_Beng", - "ast_Latn", - "awa_Deva", - "ayr_Latn", - "azb_Arab", - "azj_Latn", - "bak_Cyrl", - "bam_Latn", - "ban_Latn", - "bel_Cyrl", - "bem_Latn", - "ben_Beng", - "bho_Deva", - "bjn_Arab", - "bjn_Latn", - "bod_Tibt", - "bos_Latn", - "bug_Latn", - "bul_Cyrl", - "cat_Latn", - "ceb_Latn", - "ces_Latn", - "cjk_Latn", - "ckb_Arab", - "crh_Latn", - "cym_Latn", - "dan_Latn", - "deu_Latn", - "dik_Latn", - "dyu_Latn", - "dzo_Tibt", - "ell_Grek", - "eng_Latn", - "epo_Latn", - "est_Latn", - "eus_Latn", - "ewe_Latn", - "fao_Latn", - "pes_Arab", - "fij_Latn", - "fin_Latn", - "fon_Latn", - "fra_Latn", - "fur_Latn", - "fuv_Latn", - "gla_Latn", - "gle_Latn", - "glg_Latn", - "grn_Latn", - "guj_Gujr", - "hat_Latn", - "hau_Latn", - "heb_Hebr", - "hin_Deva", - "hne_Deva", - "hrv_Latn", - "hun_Latn", - "hye_Armn", - "ibo_Latn", - "ilo_Latn", - "ind_Latn", - "isl_Latn", - "ita_Latn", - "jav_Latn", - 
"jpn_Jpan", - "kab_Latn", - "kac_Latn", - "kam_Latn", - "kan_Knda", - "kas_Arab", - "kas_Deva", - "kat_Geor", - "knc_Arab", - "knc_Latn", - "kaz_Cyrl", - "kbp_Latn", - "kea_Latn", - "khm_Khmr", - "kik_Latn", - "kin_Latn", - "kir_Cyrl", - "kmb_Latn", - "kon_Latn", - "kor_Hang", - "kmr_Latn", - "lao_Laoo", - "lvs_Latn", - "lij_Latn", - "lim_Latn", - "lin_Latn", - "lit_Latn", - "lmo_Latn", - "ltg_Latn", - "ltz_Latn", - "lua_Latn", - "lug_Latn", - "luo_Latn", - "lus_Latn", - "mag_Deva", - "mai_Deva", - "mal_Mlym", - "mar_Deva", - "min_Latn", - "mkd_Cyrl", - "plt_Latn", - "mlt_Latn", - "mni_Beng", - "khk_Cyrl", - "mos_Latn", - "mri_Latn", - "zsm_Latn", - "mya_Mymr", - "nld_Latn", - "nno_Latn", - "nob_Latn", - "npi_Deva", - "nso_Latn", - "nus_Latn", - "nya_Latn", - "oci_Latn", - "gaz_Latn", - "ory_Orya", - "pag_Latn", - "pan_Guru", - "pap_Latn", - "pol_Latn", - "por_Latn", - "prs_Arab", - "pbt_Arab", - "quy_Latn", - "ron_Latn", - "run_Latn", - "rus_Cyrl", - "sag_Latn", - "san_Deva", - "sat_Beng", - "scn_Latn", - "shn_Mymr", - "sin_Sinh", - "slk_Latn", - "slv_Latn", - "smo_Latn", - "sna_Latn", - "snd_Arab", - "som_Latn", - "sot_Latn", - "spa_Latn", - "als_Latn", - "srd_Latn", - "srp_Cyrl", - "ssw_Latn", - "sun_Latn", - "swe_Latn", - "swh_Latn", - "szl_Latn", - "tam_Taml", - "tat_Cyrl", - "tel_Telu", - "tgk_Cyrl", - "tgl_Latn", - "tha_Thai", - "tir_Ethi", - "taq_Latn", - "taq_Tfng", - "tpi_Latn", - "tsn_Latn", - "tso_Latn", - "tuk_Latn", - "tum_Latn", - "tur_Latn", - "twi_Latn", - "tzm_Tfng", - "uig_Arab", - "ukr_Cyrl", - "umb_Latn", - "urd_Arab", - "uzn_Latn", - "vec_Latn", - "vie_Latn", - "war_Latn", - "wol_Latn", - "xho_Latn", - "ydd_Hebr", - "yor_Latn", - "yue_Hant", - "zho_Hans", - "zho_Hant", - "zul_Latn" -] diff --git a/spaces/huaiji3y/bingo-Public/src/lib/isomorphic/index.ts b/spaces/huaiji3y/bingo-Public/src/lib/isomorphic/index.ts deleted file mode 100644 index d4ebae951004bc8ec388f82548f4204a6c2a0a50..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,8 +0,0 @@ -'use client' - -import Debug from 'debug' -export * from 'ifw' - -export const debug = typeof document === 'undefined' ? Debug('bingo') - : process.env.NEXT_PUBLIC_DEBUG ? 
console.info.bind(console) - : () => {} diff --git a/spaces/huggingface-projects/magic-diffusion/app.py b/spaces/huggingface-projects/magic-diffusion/app.py deleted file mode 100644 index c5d5180bf525be5cfc13c069ea6c60dee0af4cde..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/magic-diffusion/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import gradio as gr -import os -from share_btn import community_icon_html, loading_icon_html, share_js - -text_gen = gr.Interface.load(name="spaces/Gustavosta/MagicPrompt-Stable-Diffusion") -stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5") - -def get_images(prompt): - gallery_dir = stable_diffusion(prompt, fn_index=2) - sd_output = [os.path.join(gallery_dir, image) for image in os.listdir(gallery_dir)] - return sd_output, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -def get_prompts(prompt_text): - return text_gen(prompt_text) - -css = ''' -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -a {text-decoration-line: underline;} -''' - -with gr.Blocks(css=css) as demo: - gr.HTML("""
        -
        -

        - Magic Diffusion 🪄 -

        -
        -

        - This Space prettifies your prompt using MagicPrompt - and then runs it through Stable Diffusion to create aesthetically pleasing images. Simply enter a few concepts and let it improve your prompt. You can then diffuse the prompt. -

        -
        """) - - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Short text prompt", - lines=4, elem_id="input-text") - with gr.Row(): - see_prompts = gr.Button("Feed in your text!") - - with gr.Column(): - text_output = gr.Textbox( - label="Prettified text prompt", - lines=4, - elem_id="translated" - ) - with gr.Row(): - diffuse_btn = gr.Button(value="Diffuse the Prompt!") - with gr.Column(elem_id="generated-gallery"): - sd_output = gr.Gallery().style(grid=2, height="auto") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - see_prompts.click(get_prompts, - inputs = [input_text], - outputs = [ - text_output - ]) - diffuse_btn.click(get_images, - inputs = [ - text_output - ], - outputs = [sd_output, community_icon, loading_icon, share_button] - ) - share_button.click(None, [], [], _js=share_js) - - - -demo.launch(debug=True) \ No newline at end of file diff --git "a/spaces/huggingface/Model_Cards_Writing_Tool/pages/6_\360\237\224\254_Model_Evaluation.py" "b/spaces/huggingface/Model_Cards_Writing_Tool/pages/6_\360\237\224\254_Model_Evaluation.py" deleted file mode 100644 index e3c4926a814f9f34980b77b5d8dc4277fd272d7e..0000000000000000000000000000000000000000 --- "a/spaces/huggingface/Model_Cards_Writing_Tool/pages/6_\360\237\224\254_Model_Evaluation.py" +++ /dev/null @@ -1,66 +0,0 @@ -import streamlit as st -from persist import persist, load_widget_state -from pathlib import Path - -from middleMan import apply_view,writingPrompt - -global variable_output - -def main(): - cs_body() - - -def cs_body(): - - #stateVariable = 'Model_Eval' - #help_text ='Detail the Evaluation Results for this model' - #col1.header('Model Evaluation') - st.markdown('# Evaluation') - st.text_area(" This section describes the evaluation protocols and provides the results. ",help="Detail the Evaluation Results for this model") - st.markdown('## Testing Data, Factors & Metrics:') - left, right = st.columns([2,4]) - - #st.markdown('### Model Description') - - - with left: - st.write("\n") - st.write("\n") - st.markdown('#### Testing Data:') - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - #st.write("\n") - st.markdown('#### Factors:') - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.markdown('#### Metrics:') - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.markdown('#### Results:') - - with right: - #soutput_jinja = parse_into_jinja_markdown() - st.text_area("", help="Ideally this links to a Dataset Card.",key=persist("Testing_Data")) - #st.write("\n") - st.text_area("",help="What are the foreseeable characteristics that will influence how the model behaves? 
This includes domain and context, as well as population subgroups.",key=persist("Factors")) - st.text_area("", help="What metrics will be used for evaluation in light of tradeoffs between different errors?", key=persist("Metrics")) - st.text_area("", key=persist("Model_Results")) - - - - - -if __name__ == '__main__': - load_widget_state() - main() \ No newline at end of file diff --git a/spaces/hysts/StyleSwin/style.css b/spaces/hysts/StyleSwin/style.css deleted file mode 100644 index 8dd6cf3081735167994093f71d1d0c80d1a7d144..0000000000000000000000000000000000000000 --- a/spaces/hysts/StyleSwin/style.css +++ /dev/null @@ -1,11 +0,0 @@ -h1 { - text-align: center; -} -div#result { - max-width: 600px; - max-height: 600px; -} -img#visitor-badge { - display: block; - margin: auto; -} diff --git a/spaces/hysts/space-that-creates-model-demo-space/assets/template.py b/spaces/hysts/space-that-creates-model-demo-space/assets/template.py deleted file mode 100644 index 52517926681b84a9cb2af15c5bd6e3dfb7b52614..0000000000000000000000000000000000000000 --- a/spaces/hysts/space-that-creates-model-demo-space/assets/template.py +++ /dev/null @@ -1,36 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import gradio as gr - - -def read_info(file_name: str) -> str: - with open(file_name) as f: - content = f.read() - return content - - -def load_model(model_name: str) -> gr.Interface: - iface = gr.Interface.load(model_name, src='models') - for component in iface.output_components: - component.label = f'{component.label} ({model_name})' - return iface - - -def load_models(model_names: list[str]) -> list[gr.Interface]: - return [load_model(name) for name in model_names] - - -title = read_info('TITLE') -description = read_info('DESCRIPTION') -article = read_info('ARTICLE') -model_names = read_info('MODEL_NAMES').split('\n') - -interfaces = load_models(model_names) -gr.Parallel( - *interfaces, - title=title, - description=description, - article=article, -).launch() diff --git a/spaces/igashov/DiffLinker/src/lightning.py b/spaces/igashov/DiffLinker/src/lightning.py deleted file mode 100644 index 326429ba104cd517edaea8f0f057bbc1f89fdbad..0000000000000000000000000000000000000000 --- a/spaces/igashov/DiffLinker/src/lightning.py +++ /dev/null @@ -1,476 +0,0 @@ -import numpy as np -import os -import pytorch_lightning as pl -import torch -import wandb - -from src import metrics, utils, delinker -from src.const import LINKER_SIZE_DIST -from src.egnn import Dynamics, DynamicsWithPockets -from src.edm import EDM, InpaintingEDM -from src.datasets import ( - ZincDataset, MOADDataset, create_templates_for_linker_generation, get_dataloader, collate -) -from src.linker_size import DistributionNodes -from src.molecule_builder import build_molecules -from src.visualizer import save_xyz_file, visualize_chain -from typing import Dict, List, Optional -from tqdm import tqdm - -from pdb import set_trace - - -def get_activation(activation): - if activation == 'silu': - return torch.nn.SiLU() - else: - raise Exception("activation fn not supported yet. 
Add it here.") - - -class DDPM(pl.LightningModule): - train_dataset = None - val_dataset = None - test_dataset = None - starting_epoch = None - metrics: Dict[str, List[float]] = {} - - FRAMES = 100 - - def __init__( - self, - in_node_nf, n_dims, context_node_nf, hidden_nf, activation, tanh, n_layers, attention, norm_constant, - inv_sublayers, sin_embedding, normalization_factor, aggregation_method, - diffusion_steps, diffusion_noise_schedule, diffusion_noise_precision, diffusion_loss_type, - normalize_factors, include_charges, model, - data_path, train_data_prefix, val_data_prefix, batch_size, lr, torch_device, test_epochs, n_stability_samples, - normalization=None, log_iterations=None, samples_dir=None, data_augmentation=False, - center_of_mass='fragments', inpainting=False, anchors_context=True, - ): - super(DDPM, self).__init__() - - self.save_hyperparameters() - self.data_path = data_path - self.train_data_prefix = train_data_prefix - self.val_data_prefix = val_data_prefix - self.batch_size = batch_size - self.lr = lr - self.torch_device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.include_charges = include_charges - self.test_epochs = test_epochs - self.n_stability_samples = n_stability_samples - self.log_iterations = log_iterations - self.samples_dir = samples_dir - self.data_augmentation = data_augmentation - self.center_of_mass = center_of_mass - self.inpainting = inpainting - self.loss_type = diffusion_loss_type - - self.n_dims = n_dims - self.num_classes = in_node_nf - include_charges - self.include_charges = include_charges - self.anchors_context = anchors_context - - self.is_geom = ('geom' in self.train_data_prefix) or ('MOAD' in self.train_data_prefix) - - if type(activation) is str: - activation = get_activation(activation) - - dynamics_class = DynamicsWithPockets if '.' in train_data_prefix else Dynamics - dynamics = dynamics_class( - in_node_nf=in_node_nf, - n_dims=n_dims, - context_node_nf=context_node_nf, - device=self.torch_device, - hidden_nf=hidden_nf, - activation=activation, - n_layers=n_layers, - attention=attention, - tanh=tanh, - norm_constant=norm_constant, - inv_sublayers=inv_sublayers, - sin_embedding=sin_embedding, - normalization_factor=normalization_factor, - aggregation_method=aggregation_method, - model=model, - normalization=normalization, - centering=inpainting, - ) - edm_class = InpaintingEDM if inpainting else EDM - self.edm = edm_class( - dynamics=dynamics, - in_node_nf=in_node_nf, - n_dims=n_dims, - timesteps=diffusion_steps, - noise_schedule=diffusion_noise_schedule, - noise_precision=diffusion_noise_precision, - loss_type=diffusion_loss_type, - norm_values=normalize_factors, - ) - self.linker_size_sampler = DistributionNodes(LINKER_SIZE_DIST) - - def setup(self, stage: Optional[str] = None): - dataset_type = MOADDataset if '.' 
in self.train_data_prefix else ZincDataset - if stage == 'fit': - self.is_geom = ('geom' in self.train_data_prefix) or ('MOAD' in self.train_data_prefix) - self.train_dataset = dataset_type( - data_path=self.data_path, - prefix=self.train_data_prefix, - device=self.torch_device - ) - self.val_dataset = dataset_type( - data_path=self.data_path, - prefix=self.val_data_prefix, - device=self.torch_device - ) - elif stage == 'val': - self.is_geom = ('geom' in self.val_data_prefix) or ('MOAD' in self.val_data_prefix) - self.val_dataset = dataset_type( - data_path=self.data_path, - prefix=self.val_data_prefix, - device=self.torch_device - ) - else: - raise NotImplementedError - - def train_dataloader(self, collate_fn=collate): - return get_dataloader(self.train_dataset, self.batch_size, collate_fn=collate_fn, shuffle=True) - - def val_dataloader(self, collate_fn=collate): - return get_dataloader(self.val_dataset, self.batch_size, collate_fn=collate_fn) - - def test_dataloader(self, collate_fn=collate): - return get_dataloader(self.test_dataset, self.batch_size, collate_fn=collate_fn) - - def forward(self, data, training): - x = data['positions'] - h = data['one_hot'] - node_mask = data['atom_mask'] - edge_mask = data['edge_mask'] - anchors = data['anchors'] - fragment_mask = data['fragment_mask'] - linker_mask = data['linker_mask'] - - # Anchors and fragments labels are used as context - if self.anchors_context: - context = torch.cat([anchors, fragment_mask], dim=-1) - else: - context = fragment_mask - - # Add information about pocket to the context - if isinstance(self.train_dataset, MOADDataset): - fragment_pocket_mask = fragment_mask - fragment_only_mask = data['fragment_only_mask'] - pocket_only_mask = fragment_pocket_mask - fragment_only_mask - if self.anchors_context: - context = torch.cat([anchors, fragment_only_mask, pocket_only_mask], dim=-1) - else: - context = torch.cat([fragment_only_mask, pocket_only_mask], dim=-1) - - # Removing COM of fragment from the atom coordinates - if self.inpainting: - center_of_mass_mask = node_mask - elif isinstance(self.train_dataset, MOADDataset) and self.center_of_mass == 'fragments': - center_of_mass_mask = data['fragment_only_mask'] - elif self.center_of_mass == 'fragments': - center_of_mass_mask = fragment_mask - elif self.center_of_mass == 'anchors': - center_of_mass_mask = anchors - else: - raise NotImplementedError(self.center_of_mass) - x = utils.remove_partial_mean_with_mask(x, node_mask, center_of_mass_mask) - utils.assert_partial_mean_zero_with_mask(x, node_mask, center_of_mass_mask) - - # Applying random rotation - if training and self.data_augmentation: - x = utils.random_rotation(x) - - return self.edm.forward( - x=x, - h=h, - node_mask=node_mask, - fragment_mask=fragment_mask, - linker_mask=linker_mask, - edge_mask=edge_mask, - context=context - ) - - def training_step(self, data, *args): - delta_log_px, kl_prior, loss_term_t, loss_term_0, l2_loss, noise_t, noise_0 = self.forward(data, training=True) - vlb_loss = kl_prior + loss_term_t + loss_term_0 - delta_log_px - if self.loss_type == 'l2': - loss = l2_loss - elif self.loss_type == 'vlb': - loss = vlb_loss - else: - raise NotImplementedError(self.loss_type) - - training_metrics = { - 'loss': loss, - 'delta_log_px': delta_log_px, - 'kl_prior': kl_prior, - 'loss_term_t': loss_term_t, - 'loss_term_0': loss_term_0, - 'l2_loss': l2_loss, - 'vlb_loss': vlb_loss, - 'noise_t': noise_t, - 'noise_0': noise_0 - } - if self.log_iterations is not None and self.global_step % self.log_iterations == 0: 
- for metric_name, metric in training_metrics.items(): - self.metrics.setdefault(f'{metric_name}/train', []).append(metric) - self.log(f'{metric_name}/train', metric, prog_bar=True) - return training_metrics - - def validation_step(self, data, *args): - delta_log_px, kl_prior, loss_term_t, loss_term_0, l2_loss, noise_t, noise_0 = self.forward(data, training=False) - vlb_loss = kl_prior + loss_term_t + loss_term_0 - delta_log_px - if self.loss_type == 'l2': - loss = l2_loss - elif self.loss_type == 'vlb': - loss = vlb_loss - else: - raise NotImplementedError(self.loss_type) - return { - 'loss': loss, - 'delta_log_px': delta_log_px, - 'kl_prior': kl_prior, - 'loss_term_t': loss_term_t, - 'loss_term_0': loss_term_0, - 'l2_loss': l2_loss, - 'vlb_loss': vlb_loss, - 'noise_t': noise_t, - 'noise_0': noise_0 - } - - def test_step(self, data, *args): - delta_log_px, kl_prior, loss_term_t, loss_term_0, l2_loss, noise_t, noise_0 = self.forward(data, training=False) - vlb_loss = kl_prior + loss_term_t + loss_term_0 - delta_log_px - if self.loss_type == 'l2': - loss = l2_loss - elif self.loss_type == 'vlb': - loss = vlb_loss - else: - raise NotImplementedError(self.loss_type) - return { - 'loss': loss, - 'delta_log_px': delta_log_px, - 'kl_prior': kl_prior, - 'loss_term_t': loss_term_t, - 'loss_term_0': loss_term_0, - 'l2_loss': l2_loss, - 'vlb_loss': vlb_loss, - 'noise_t': noise_t, - 'noise_0': noise_0 - } - - def training_epoch_end(self, training_step_outputs): - for metric in training_step_outputs[0].keys(): - avg_metric = self.aggregate_metric(training_step_outputs, metric) - self.metrics.setdefault(f'{metric}/train', []).append(avg_metric) - self.log(f'{metric}/train', avg_metric, prog_bar=True) - - def validation_epoch_end(self, validation_step_outputs): - for metric in validation_step_outputs[0].keys(): - avg_metric = self.aggregate_metric(validation_step_outputs, metric) - self.metrics.setdefault(f'{metric}/val', []).append(avg_metric) - self.log(f'{metric}/val', avg_metric, prog_bar=True) - - if (self.current_epoch + 1) % self.test_epochs == 0: - sampling_results = self.sample_and_analyze(self.val_dataloader()) - for metric_name, metric_value in sampling_results.items(): - self.log(f'{metric_name}/val', metric_value, prog_bar=True) - self.metrics.setdefault(f'{metric_name}/val', []).append(metric_value) - - # Logging the results corresponding to the best validation_and_connectivity - best_metrics, best_epoch = self.compute_best_validation_metrics() - self.log('best_epoch', int(best_epoch), prog_bar=True, batch_size=self.batch_size) - for metric, value in best_metrics.items(): - self.log(f'best_{metric}', value, prog_bar=True, batch_size=self.batch_size) - - def test_epoch_end(self, test_step_outputs): - for metric in test_step_outputs[0].keys(): - avg_metric = self.aggregate_metric(test_step_outputs, metric) - self.metrics.setdefault(f'{metric}/test', []).append(avg_metric) - self.log(f'{metric}/test', avg_metric, prog_bar=True) - - if (self.current_epoch + 1) % self.test_epochs == 0: - sampling_results = self.sample_and_analyze(self.test_dataloader()) - for metric_name, metric_value in sampling_results.items(): - self.log(f'{metric_name}/test', metric_value, prog_bar=True) - self.metrics.setdefault(f'{metric_name}/test', []).append(metric_value) - - def generate_animation(self, chain_batch, node_mask, batch_i): - batch_indices, mol_indices = utils.get_batch_idx_for_animation(self.batch_size, batch_i) - for bi, mi in zip(batch_indices, mol_indices): - chain = chain_batch[:, bi, :, :] - name = 
f'mol_{mi}' - chain_output = os.path.join(self.samples_dir, f'epoch_{self.current_epoch}', name) - os.makedirs(chain_output, exist_ok=True) - - one_hot = chain[:, :, 3:-1] if self.include_charges else chain[:, :, 3:] - positions = chain[:, :, :3] - chain_node_mask = torch.cat([node_mask[bi].unsqueeze(0) for _ in range(self.FRAMES)], dim=0) - names = [f'{name}_{j}' for j in range(self.FRAMES)] - - save_xyz_file(chain_output, one_hot, positions, chain_node_mask, names=names, is_geom=self.is_geom) - visualize_chain(chain_output, wandb=wandb, mode=name, is_geom=self.is_geom) - - def sample_and_analyze(self, dataloader): - pred_molecules = [] - true_molecules = [] - true_fragments = [] - - for b, data in tqdm(enumerate(dataloader), total=len(dataloader), desc='Sampling'): - atom_mask = data['atom_mask'] - fragment_mask = data['fragment_mask'] - - # Save molecules without pockets - if '.' in self.train_data_prefix: - atom_mask = data['atom_mask'] - data['pocket_mask'] - fragment_mask = data['fragment_only_mask'] - - true_molecules_batch = build_molecules( - data['one_hot'], - data['positions'], - atom_mask, - is_geom=self.is_geom, - ) - true_fragments_batch = build_molecules( - data['one_hot'], - data['positions'], - fragment_mask, - is_geom=self.is_geom, - ) - - for sample_idx in tqdm(range(self.n_stability_samples)): - try: - chain_batch, node_mask = self.sample_chain(data, keep_frames=self.FRAMES) - except utils.FoundNaNException as e: - for idx in e.x_h_nan_idx: - smiles = data['name'][idx] - print(f'FoundNaNException: [xh], e={self.current_epoch}, b={b}, i={idx}: {smiles}') - for idx in e.only_x_nan_idx: - smiles = data['name'][idx] - print(f'FoundNaNException: [x ], e={self.current_epoch}, b={b}, i={idx}: {smiles}') - for idx in e.only_h_nan_idx: - smiles = data['name'][idx] - print(f'FoundNaNException: [ h], e={self.current_epoch}, b={b}, i={idx}: {smiles}') - continue - - # Get final molecules from chains – for computing metrics - x, h = utils.split_features( - z=chain_batch[0], - n_dims=self.n_dims, - num_classes=self.num_classes, - include_charges=self.include_charges, - ) - - # Save molecules without pockets - if '.' 
in self.train_data_prefix: - node_mask = node_mask - data['pocket_mask'] - - one_hot = h['categorical'] - pred_molecules_batch = build_molecules(one_hot, x, node_mask, is_geom=self.is_geom) - - # Adding only results for valid ground truth molecules - for pred_mol, true_mol, frag in zip(pred_molecules_batch, true_molecules_batch, true_fragments_batch): - if metrics.is_valid(true_mol): - pred_molecules.append(pred_mol) - true_molecules.append(true_mol) - true_fragments.append(frag) - - # Generate animation – will always do it for molecules with idx 0, 110 and 360 - if self.samples_dir is not None and sample_idx == 0: - self.generate_animation(chain_batch=chain_batch, node_mask=node_mask, batch_i=b) - - # Our own & DeLinker metrics - our_metrics = metrics.compute_metrics( - pred_molecules=pred_molecules, - true_molecules=true_molecules - ) - delinker_metrics = delinker.get_delinker_metrics( - pred_molecules=pred_molecules, - true_molecules=true_molecules, - true_fragments=true_fragments - ) - return { - **our_metrics, - **delinker_metrics - } - - def sample_chain(self, data, sample_fn=None, keep_frames=None): - if sample_fn is None: - linker_sizes = data['linker_mask'].sum(1).view(-1).int() - else: - linker_sizes = sample_fn(data) - - if self.inpainting: - template_data = data - else: - template_data = create_templates_for_linker_generation(data, linker_sizes) - - x = template_data['positions'] - node_mask = template_data['atom_mask'] - edge_mask = template_data['edge_mask'] - h = template_data['one_hot'] - anchors = template_data['anchors'] - fragment_mask = template_data['fragment_mask'] - linker_mask = template_data['linker_mask'] - - # Anchors and fragments labels are used as context - if self.anchors_context: - context = torch.cat([anchors, fragment_mask], dim=-1) - else: - context = fragment_mask - - # Add information about pocket to the context - if isinstance(self.val_dataset, MOADDataset): - fragment_pocket_mask = fragment_mask - fragment_only_mask = template_data['fragment_only_mask'] - pocket_only_mask = fragment_pocket_mask - fragment_only_mask - if self.anchors_context: - context = torch.cat([anchors, fragment_only_mask, pocket_only_mask], dim=-1) - else: - context = torch.cat([fragment_only_mask, pocket_only_mask], dim=-1) - - # Removing COM of fragment from the atom coordinates - if self.inpainting: - center_of_mass_mask = node_mask - elif isinstance(self.val_dataset, MOADDataset) and self.center_of_mass == 'fragments': - center_of_mass_mask = template_data['fragment_only_mask'] - elif self.center_of_mass == 'fragments': - center_of_mass_mask = fragment_mask - elif self.center_of_mass == 'anchors': - center_of_mass_mask = anchors - else: - raise NotImplementedError(self.center_of_mass) - x = utils.remove_partial_mean_with_mask(x, node_mask, center_of_mass_mask) - - chain = self.edm.sample_chain( - x=x, - h=h, - node_mask=node_mask, - edge_mask=edge_mask, - fragment_mask=fragment_mask, - linker_mask=linker_mask, - context=context, - keep_frames=keep_frames, - ) - return chain, node_mask - - def configure_optimizers(self): - return torch.optim.AdamW(self.edm.parameters(), lr=self.lr, amsgrad=True, weight_decay=1e-12) - - def compute_best_validation_metrics(self): - loss = self.metrics[f'validity_and_connectivity/val'] - best_epoch = np.argmax(loss) - best_metrics = { - metric_name: metric_values[best_epoch] - for metric_name, metric_values in self.metrics.items() - if metric_name.endswith('/val') - } - return best_metrics, best_epoch - - @staticmethod - def 
aggregate_metric(step_outputs, metric): - return torch.tensor([out[metric] for out in step_outputs]).mean() diff --git a/spaces/inamXcontru/PoeticTTS/Broforce Free Download Mac Why You Should Play This Game Right Now on Your Mac.md b/spaces/inamXcontru/PoeticTTS/Broforce Free Download Mac Why You Should Play This Game Right Now on Your Mac.md deleted file mode 100644 index c82cd699f8c2847788ba6a4100b88a2b32998418..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Broforce Free Download Mac Why You Should Play This Game Right Now on Your Mac.md +++ /dev/null @@ -1,13 +0,0 @@ - -

        People love free steam games, no doubt. But what many people hate is downloading so many parts and trying to install them on their own. This is why we are the only site that pre-installs every game for you. We have many categories like shooters, action, racing, simulators and even VR games! We strive to satisfy our users and ask for nothing in return. We revolutionized the downloading scene and will continue being your #1 site for free games.

        -

        Broforce Free Download Mac


        Download Zip: https://gohhs.com/2uz4Cc



        -

        The first time we covered Broforce I noted that it's "what The Expendables game should have been." Well, someone at Lionsgate had a similar idea and decided to collaborate with Broforce developer Free Lives to create a free promotional spin-off of the testosterone-mad shooter that's a crossover with the upcoming The Expendables 3.

        -

        Compare prices with GG.deals to find the cheapest CD key for Broforce PC. Head over to one of the trusted game stores from our price comparison and buy the CD key at the best price. Use the indicated client to activate the key, then download and play your game.

        -

        All shops featured on GG.deals will deliver your game immediately after the payment has been approved. This will be either in the form of a direct download or a PC key, depending on the store of your choice. After you activate the key on the corresponding platform, you will be able to download and play your game for free. If you don't know how to activate the key, check out the tutorials section at the bottom of the page.

        -

        Broforce is available now on Steam, GOG, and Humble as well as PlayStation 4. Follow future updates to Broforce on Twitter @Free_Lives and @DevolverDigital and visit broforcegame.com if you love websites about Broforce.

        -

        -

        Broforce is a consistently positive experience for platformer fans. The setting wins you over with its humor, and the story is just fucking MANLY! The player is thrown into the combat zone with up to three additional players. There, they (and any companions they bring along) are meant to spread the American dream of freedom to the countries that need it (oO) through the use of blue beans (bullets) and certain other things (I am talking about vitamins). The playable characters are drawn from various film classics, and anyone with a little general imagination will enjoy a few moments of recognition. (Don't worry, it's not shameful to miss a few of them; Wikipedia is your friend :P and won't let you down on questions like that. In this way, the game even fulfils a kind of educational mission.) The depiction of violence leaves nothing to be desired, and the variety of opponents encourages you to keep playing. I haven't found anything to fault in the controls so far. Therefore, the following conclusion: don't stand around gawking, take some steroids... I'm getting my vitamins. Free the world from all evil the American way, and be more than just the average citizen.

        -

        Yes, all deals featured on GameGator will always allow you to get your game right away once the payment has been approved. This will be in the form of either a Steam, GoG, BattleNet, ... key or a direct download link for DRM-free games. If you require assistance during your purchase, please contact our support team via email or our social media channels. We will be happy to assist you and take care of any worries.

        -

        Devolver Digital has brought us some of the gnarliest, raddest 2-D indie games to date. Well today marks the launch of their epic, chaotic shooter known as Broforce! I have been playing the early beta for over a year now, and love the crap out of this game. For those not in the know about Broforce, it is a super crazy side-scrolling shooter where you play as stylized versions of famous "bros" from '80's pop culture. Characters like Rambro and Robro Cop just to name a couple. You must battle a slew of bad guys and spread freedom and justice across the universe, all while taking down the devil himself.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Filemaker Pro 12 Serial Keygen Crack LINK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Filemaker Pro 12 Serial Keygen Crack LINK.md deleted file mode 100644 index 5241ed7a13aae4f63346071cc0f7e26cfbd76571..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Filemaker Pro 12 Serial Keygen Crack LINK.md +++ /dev/null @@ -1,7 +0,0 @@ -
        -

        Powerful analysis resources and robust evaluation abilities. The application provides 30 varied, distinctive, skillfully designed templates that help you organize your work. Within a matter of moments, users can produce a personalized database that is geared to their everyday needs. FileMaker Pro Keygen 2022 is the world's greatest application for building apps. You can create apps for your Mac, iOS, Linux, Windows, iPad, and iPhone just like an expert. The application works by creating documents from data, which can be filled in with form fields.

        -

        FileMaker Pro 19.5.3.300 Crack includes every one of the functions of FileMaker Pro License Key 2022, plus a collection of advanced customization and development tools to create, manage, and share databases. It is a useful database application that helps with task administration, enabling users to complete their work more quickly. This powerful task-manager application is available on Windows. FileMaker Pro 19 License Key is an effective and easy-to-use cross-platform database program with a graphical user interface (GUI) and advanced security features. It offers advanced design and development tools to build custom apps faster and more easily using a number of templates.

        -

        filemaker pro 12 serial keygen crack


        DOWNLOAD ⚹⚹⚹ https://urlin.us/2uEwb4



        -

        Claris FileMaker Pro Registration Key is an integrated data management platform that helps business professionals manage data of all kinds. Moreover, it helps you accelerate your business, unleash the creative potential of your team, and demand higher results. In this case, your program must be compatible with Mac, PC, and the internet. In addition, it has everything you need to DIY or pair up with skilled developers for high-level expertise. That's the power of FileMaker. Explore the marketplace for inspiration, from templates and components to fully built vertical-market applications: files, schedules, contacts, and much more. Claris FileMaker Pro Crack is a low-code solution that allows troubleshooters to create, distribute, and integrate bespoke apps. This is made possible with the FileMaker Developer Edition. Create your own apps, start editing existing apps, and pair up with advanced developers. All of these features are made possible with a low-code development environment. Claris FileMaker Pro Registration Key is a powerful app development platform. Furthermore, it helps you accelerate your business, unleash your team's creativity, and improve outcomes. Moreover, for a low cost, Claris FileMaker Pro Serial Key gives you everything you need to DIY or pair up with skilled developers for high-level expertise. That's the power of FileMaker.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kitchendraw 4.5 Keygen 56.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kitchendraw 4.5 Keygen 56.md deleted file mode 100644 index 5fff7fcdf2988d0785bcf47f82446b299b816b0e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kitchendraw 4.5 Keygen 56.md +++ /dev/null @@ -1,10 +0,0 @@ -

        kitchendraw 4.5 keygen 56


        DOWNLOAD ⚹⚹⚹ https://urlin.us/2uEyzI



        -
        -.3 MB download No roleplay is required. These are just a couple of our many sites, you can sign up to 100+ to get full access to all of our various sites. This website provides adult references and information about adult dating, adult personals and strip poker. Category: Strip poker games - My Free Cams. Strip Poker Games. Now you can get the best of both worlds with single-player and multiplayer free play mode. - -Poker king - -Free Slots Games. No download, no registration, just play online slot games for free. May 13, 2018nbsp;0183;32;Top Drawer Entertainment is a revolutionary game company that believes it can help you become a better player by providing the best … Michael Jordan- The Original Basketball Icon. When Michael Jordan signed with the Chicago Bulls as a teenager, no one knew what to expect. Jordan reinvented the game, writing a new chapter in the history of basketball. In this video we review and compare two of the latest methods to unblock a website that is being blocked by your internet service provider: The Easiest Unblock Method is The fast-freeze method. With The Freeze Method, you need to download the game (an emulator) that you want to download and then you simply search for the game. There are … Chess. com is the online chess portal featuring free chess, chess variants, chess lessons and chess books. play for free and play real-time. Get the latest news, updates and blog posts on the RCA tech, cars, gadgets, and more. Keep up to date with the latest on Audi, Porsche, Land Rover, Bentley and more at RCA. 2. Find out why a number of people refer to a good global citizen as a good global citizen. Name game is an extremely straightforward and incredibly simple karush casino ableton 9. 9 online strategy game. A player chooses a casino near barrie ontario name which is displayed at the top of the board. Below the name, slot machine top 10 bonus symbols are displayed. Jan 30, 2018nbsp;0183;32;FREE WORLD WIDE DELIVERY FOR USA AND AUSTRALIA YOUR NAME IN THE GRAPHIC AND YOUR NAME YOUR CORPORATION As a leading global market-research agency, TNS can help you find caino access to a market or niche that fits your business. App Casino gratuit pour les h244;teurs, t233;l233;chargements et jeux grat 4fefd39f24
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Activation Crack For Corel Draw X4 16 VERIFIED.md b/spaces/inreVtussa/clothingai/Examples/Activation Crack For Corel Draw X4 16 VERIFIED.md deleted file mode 100644 index bf34bb6e45d8683969478895ce7f98de58677b8e..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Activation Crack For Corel Draw X4 16 VERIFIED.md +++ /dev/null @@ -1,26 +0,0 @@ -
        -

        How to Activate Corel Draw X4 16 for Free

        -

        Corel Draw X4 16 is a popular software for creating graphic designs based on vector images. It has many features and tools that can help you design logos, flyers, posters, banners, and more. However, to use this software, you need to activate it with a serial number and an activation code. If you don't have a valid license, you might be looking for a way to crack Corel Draw X4 16 and use it for free. In this article, we will show you how to do that.

        -

        activation crack for corel draw x4 16


        Download Zip > https://tiurll.com/2uCizW



        -

        Disclaimer

        -

        Before we proceed, we want to make it clear that we do not condone or encourage any illegal or unethical use of software. Cracking software is a violation of the terms and conditions of the software developer and may result in legal consequences. We are providing this information for educational purposes only and we are not responsible for any damages or losses that may arise from using cracked software. We strongly advise you to purchase a legitimate license from the official website of Corel Draw if you want to use this software.

        -

        Method 1: Using a Keygen

        -

        A keygen is a program that can generate serial numbers and activation codes for various software products. There are many keygens available on the internet that claim to work for Corel Draw X4 16, but not all of them are reliable or safe. Some of them may contain viruses or malware that can harm your computer or steal your personal information. Therefore, you should be careful when downloading and using any keygen.

        -

        One of the most trusted and widely used keygens for Corel Draw products is the Corel Products Keygen by X-Force Crack Team[^2^]. This keygen supports dozens of Corel products, including Corel Draw X4 16, and can generate valid serial numbers and activation codes for them. Here are the steps to use this keygen:

        -
          -
        1. Download the Corel Products Keygen by X-Force Crack Team from the link provided in the reference section below[^2^]. Make sure you download the latest version that supports Corel Draw X4 16.
        2. Extract the downloaded file using a program like WinRAR or 7-Zip. You will get a folder named "XFORCE" that contains the keygen executable file.
        3. Run the keygen as administrator by right-clicking on it and choosing "Run as administrator". You will see the "Corel Products Keygen by X-Force Crack Team" window.
        4. Select "CorelDRAW Graphics Suite X4" from the "Select a product" drop-down menu, then click on the "Generate Serial Number" button. You will get a serial number like this: DR14B98-NMPMSL9-MM7WT66-8D884F8-TBVHS
        5. Copy the serial number and paste it into the installation window of Corel Draw X4 16 when prompted. Click on the "Phone Corel" button to get an installation code.
        6. Copy the installation code and paste it into the keygen window where it says "Enter your installation code here". Then click on the "Generate Activation Code" button. You will get an activation code like this: 6DC3-3E8D-7587-CE1C-A236
        7. Copy the activation code and paste it into the activation window of Corel Draw X4 16 where it says "Enter your activation code here". Click on the "Continue" button to finish the activation process.
        8. Congratulations! You have successfully activated Corel Draw X4 16 using a keygen.
        -

        Method 2: Using a Crack

        -

        A crack is a modified version of a software file that can bypass or remove the protection mechanism of the original software. By replacing or patching some files of Corel Draw X4 16 with cracked files, you can use it without needing a serial number or an activation code. However, cracking software is also risky and illegal, as it may damage your system files or expose you to malware infections.

        -

        -

        One of the most popular and working cracks

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Agustin Campos Arenas Pensamiento Critico.pdf.md b/spaces/inreVtussa/clothingai/Examples/Agustin Campos Arenas Pensamiento Critico.pdf.md deleted file mode 100644 index e9ee630d325c72ad51696beea0c160fcde8c9659..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Agustin Campos Arenas Pensamiento Critico.pdf.md +++ /dev/null @@ -1,9 +0,0 @@ -

        Agustin Campos Arenas Pensamiento Critico.pdf


        Download: https://tiurll.com/2uCkQx



        - -... Download 21 dual-monitor-wallpaper-5120x1440 20best-Of -Dual-Monitor-Wallpaper-5120x1440-Wallpapers-.jpg Agustin Campos Arenas Pensamiento Critico.pdf ... Download and install app for iPhone, iPad, iPod touch. -Download VKontakte for iPhone using the link below. -Download and install the application for iPhone, iPad, iPod Touch. -Download VKontakte for iPhone using the link below. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/ivntl/MMS/vits/models.py b/spaces/ivntl/MMS/vits/models.py deleted file mode 100644 index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000 --- a/spaces/ivntl/MMS/vits/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in 
flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - 
self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, 
x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, 
inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/README.md b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/README.md deleted file mode 100644 index 1b24e6efdb04cb1460e4fe3257d2303677c5a0e1..0000000000000000000000000000000000000000 --- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multilingual Anime TTS -emoji: 🎙🐴 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.7 -app_file: app.py -pinned: false -duplicated_from: Plachta/VITS-Umamusume-voice-synthesizer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbilcke-hf/MusicGen/tests/utils/__init__.py b/spaces/jbilcke-hf/MusicGen/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/MusicGen/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/jbilcke-hf/VideoQuest/src/components/ui/collapsible.tsx b/spaces/jbilcke-hf/VideoQuest/src/components/ui/collapsible.tsx deleted file mode 100644 index 9fa48946afd1eb56bd932377fd888e3986304676..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/components/ui/collapsible.tsx +++ /dev/null @@ -1,11 +0,0 @@ -"use client" - -import * as CollapsiblePrimitive from "@radix-ui/react-collapsible" - -const Collapsible = CollapsiblePrimitive.Root - -const CollapsibleTrigger = CollapsiblePrimitive.CollapsibleTrigger - -const CollapsibleContent = CollapsiblePrimitive.CollapsibleContent - -export { Collapsible, CollapsibleTrigger, CollapsibleContent } diff --git a/spaces/jhwen/bingo/src/components/ui/badge.tsx b/spaces/jhwen/bingo/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/jhwen/bingo/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
        - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/jiejiejie0420/bingo/src/lib/hooks/chat-history.ts b/spaces/jiejiejie0420/bingo/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/jimmmyjoy56723/test/README.md b/spaces/jimmmyjoy56723/test/README.md deleted file mode 100644 index 79d33f75549af5187c2fe0ac24773bd0a3167193..0000000000000000000000000000000000000000 --- a/spaces/jimmmyjoy56723/test/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Test -emoji: 📚 -colorFrom: indigo -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/cantonese.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an 
-import opencc - - -converter = opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/jinonet/digital-agency-website/README.md b/spaces/jinonet/digital-agency-website/README.md deleted file mode 100644 index 4d94e12eb88473cadde7f693451104be3cf51d65..0000000000000000000000000000000000000000 --- a/spaces/jinonet/digital-agency-website/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Digital Agency Website -emoji: 🏢 -colorFrom: pink -colorTo: blue -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jmesikto/whisper-webui/src/whisper/whisperFactory.py b/spaces/jmesikto/whisper-webui/src/whisper/whisperFactory.py deleted file mode 100644 index 58fc840b7e60947fec4a98b2833ff03e7ad7b7de..0000000000000000000000000000000000000000 --- a/spaces/jmesikto/whisper-webui/src/whisper/whisperFactory.py +++ /dev/null @@ -1,19 +0,0 @@ -from typing import List -from src import modelCache -from src.config import ModelConfig -from src.whisper.abstractWhisperContainer import AbstractWhisperContainer - -def create_whisper_container(whisper_implementation: str, - model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: modelCache = None, models: List[ModelConfig] = []) -> AbstractWhisperContainer: - print("Creating whisper container for " + whisper_implementation) - - if (whisper_implementation == "whisper"): - from src.whisper.whisperContainer import WhisperContainer - return WhisperContainer(model_name=model_name, device=device, compute_type=compute_type, download_root=download_root, cache=cache, models=models) - elif (whisper_implementation == "faster-whisper" or whisper_implementation == "faster_whisper"): - from src.whisper.fasterWhisperContainer import FasterWhisperContainer - return FasterWhisperContainer(model_name=model_name, device=device, compute_type=compute_type, download_root=download_root, cache=cache, models=models) - else: - raise ValueError("Unknown Whisper implementation: " + whisper_implementation) \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/bin/activate_this.py b/spaces/joaopereirajp/livvieChatBot/venv/bin/activate_this.py deleted file mode 100644 
index acf029e07e9a336be74a4b81511b64acf98381bf..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/bin/activate_this.py +++ /dev/null @@ -1,31 +0,0 @@ -"""Activate virtualenv for current interpreter: - -Use exec(open(this_file).read(), {'__file__': this_file}). - -This can be used when you must use an existing Python interpreter, not the virtualenv bin/python. -""" -import os -import site -import sys - -try: - abs_file = os.path.abspath(__file__) -except NameError: - raise AssertionError("You must use exec(open(this_file).read(), {'__file__': this_file}))") - -bin_dir = os.path.dirname(abs_file) -base = bin_dir[: -len("bin") - 1] # strip away the bin part from the __file__, plus the path separator - -# prepend bin to PATH (this file is inside the bin directory) -os.environ["PATH"] = os.pathsep.join([bin_dir] + os.environ.get("PATH", "").split(os.pathsep)) -os.environ["VIRTUAL_ENV"] = base # virtual env is right above bin directory - -# add the virtual environments libraries to the host python import mechanism -prev_length = len(sys.path) -for lib in "../lib/python3.9/site-packages".split(os.pathsep): - path = os.path.realpath(os.path.join(bin_dir, lib)) - site.addsitedir(path.decode("utf-8") if "" else path) -sys.path[:] = sys.path[prev_length:] + sys.path[0:prev_length] - -sys.real_prefix = sys.prefix -sys.prefix = base diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Util/number.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Util/number.py deleted file mode 100644 index 82652b26e183809e5927a6c4777310dfcc36a20d..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Util/number.py +++ /dev/null @@ -1,1525 +0,0 @@ -# -# number.py : Number-theoretic functions -# -# Part of the Python Cryptography Toolkit -# -# Written by Andrew M. Kuchling, Barry A. Warsaw, and others -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== -# - -import math -import sys -import struct -from Crypto import Random -from Crypto.Util.py3compat import iter_range - -# Backward compatibility -_fastmath = None - - -def ceil_div(n, d): - """Return ceil(n/d), that is, the smallest integer r such that r*d >= n""" - - if d == 0: - raise ZeroDivisionError() - if (n < 0) or (d < 0): - raise ValueError("Non positive values") - r, q = divmod(n, d) - if (n != 0) and (q != 0): - r += 1 - return r - - -def size (N): - """Returns the size of the number N in bits.""" - - if N < 0: - raise ValueError("Size in bits only available for non-negative numbers") - return N.bit_length() - - -def getRandomInteger(N, randfunc=None): - """Return a random number at most N bits long. - - If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used. - - .. deprecated:: 3.0 - This function is for internal use only and may be renamed or removed in - the future. Use :func:`Crypto.Random.random.getrandbits` instead. - """ - - if randfunc is None: - randfunc = Random.get_random_bytes - - S = randfunc(N>>3) - odd_bits = N % 8 - if odd_bits != 0: - rand_bits = ord(randfunc(1)) >> (8-odd_bits) - S = struct.pack('B', rand_bits) + S - value = bytes_to_long(S) - return value - -def getRandomRange(a, b, randfunc=None): - """Return a random number *n* so that *a <= n < b*. - - If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used. - - .. deprecated:: 3.0 - This function is for internal use only and may be renamed or removed in - the future. Use :func:`Crypto.Random.random.randrange` instead. - """ - - range_ = b - a - 1 - bits = size(range_) - value = getRandomInteger(bits, randfunc) - while value > range_: - value = getRandomInteger(bits, randfunc) - return a + value - -def getRandomNBitInteger(N, randfunc=None): - """Return a random number with exactly N-bits, - i.e. a random number between 2**(N-1) and (2**N)-1. - - If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used. - - .. deprecated:: 3.0 - This function is for internal use only and may be renamed or removed in - the future. - """ - - value = getRandomInteger (N-1, randfunc) - value |= 2 ** (N-1) # Ensure high bit is set - assert size(value) >= N - return value - - -if sys.version_info[:2] >= (3, 5): - - GCD = math.gcd - -else: - - def GCD(x,y): - """Greatest Common Denominator of :data:`x` and :data:`y`. - """ - - x = abs(x) ; y = abs(y) - while x > 0: - x, y = y % x, x - return y - - -if sys.version_info[:2] >= (3, 8): - - def inverse(u, v): - """The inverse of :data:`u` *mod* :data:`v`.""" - - if v == 0: - raise ZeroDivisionError("Modulus cannot be zero") - if v < 0: - raise ValueError("Modulus cannot be negative") - - return pow(u, -1, v) - -else: - - def inverse(u, v): - """The inverse of :data:`u` *mod* :data:`v`.""" - - if v == 0: - raise ZeroDivisionError("Modulus cannot be zero") - if v < 0: - raise ValueError("Modulus cannot be negative") - - u3, v3 = u, v - u1, v1 = 1, 0 - while v3 > 0: - q = u3 // v3 - u1, v1 = v1, u1 - v1*q - u3, v3 = v3, u3 - v3*q - if u3 != 1: - raise ValueError("No inverse value can be computed") - while u1<0: - u1 = u1 + v - return u1 - -# Given a number of bits to generate and a random generation function, -# find a prime number of the appropriate size. - -def getPrime(N, randfunc=None): - """Return a random N-bit prime number. - - N must be an integer larger than 1. - If randfunc is omitted, then :meth:`Random.get_random_bytes` is used. 
- """ - if randfunc is None: - randfunc = Random.get_random_bytes - - if N < 2: - raise ValueError("N must be larger than 1") - - while True: - number = getRandomNBitInteger(N, randfunc) | 1 - if isPrime(number, randfunc=randfunc): - break - return number - - -def _rabinMillerTest(n, rounds, randfunc=None): - """_rabinMillerTest(n:long, rounds:int, randfunc:callable):int - Tests if n is prime. - Returns 0 when n is definitely composite. - Returns 1 when n is probably prime. - Returns 2 when n is definitely prime. - - If randfunc is omitted, then Random.new().read is used. - - This function is for internal use only and may be renamed or removed in - the future. - """ - # check special cases (n==2, n even, n < 2) - if n < 3 or (n & 1) == 0: - return n == 2 - # n might be very large so it might be beneficial to precalculate n-1 - n_1 = n - 1 - # determine m and b so that 2**b * m = n - 1 and b maximal - b = 0 - m = n_1 - while (m & 1) == 0: - b += 1 - m >>= 1 - - tested = [] - # we need to do at most n-2 rounds. - for i in iter_range (min (rounds, n-2)): - # randomly choose a < n and make sure it hasn't been tested yet - a = getRandomRange (2, n, randfunc) - while a in tested: - a = getRandomRange (2, n, randfunc) - tested.append (a) - # do the rabin-miller test - z = pow (a, m, n) # (a**m) % n - if z == 1 or z == n_1: - continue - composite = 1 - for r in iter_range(b): - z = (z * z) % n - if z == 1: - return 0 - elif z == n_1: - composite = 0 - break - if composite: - return 0 - return 1 - -def getStrongPrime(N, e=0, false_positive_prob=1e-6, randfunc=None): - r""" - Return a random strong *N*-bit prime number. - In this context, *p* is a strong prime if *p-1* and *p+1* have at - least one large prime factor. - - Args: - N (integer): the exact length of the strong prime. - It must be a multiple of 128 and > 512. - e (integer): if provided, the returned prime (minus 1) - will be coprime to *e* and thus suitable for RSA where - *e* is the public exponent. - false_positive_prob (float): - The statistical probability for the result not to be actually a - prime. It defaults to 10\ :sup:`-6`. - Note that the real probability of a false-positive is far less. This is - just the mathematically provable limit. - randfunc (callable): - A function that takes a parameter *N* and that returns - a random byte string of such length. - If omitted, :func:`Crypto.Random.get_random_bytes` is used. - Return: - The new strong prime. - - .. deprecated:: 3.0 - This function is for internal use only and may be renamed or removed in - the future. - """ - - # This function was implemented following the - # instructions found in the paper: - # "FAST GENERATION OF RANDOM, STRONG RSA PRIMES" - # by Robert D. 
Silverman - # RSA Laboratories - # May 17, 1997 - # which by the time of writing could be freely downloaded here: - # http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.2713&rep=rep1&type=pdf - - if randfunc is None: - randfunc = Random.get_random_bytes - - # Use the accelerator if available - if _fastmath is not None: - return _fastmath.getStrongPrime(long(N), long(e), false_positive_prob, - randfunc) - - if (N < 512) or ((N % 128) != 0): - raise ValueError ("bits must be multiple of 128 and > 512") - - rabin_miller_rounds = int(math.ceil(-math.log(false_positive_prob)/math.log(4))) - - # calculate range for X - # lower_bound = sqrt(2) * 2^{511 + 128*x} - # upper_bound = 2^{512 + 128*x} - 1 - x = (N - 512) >> 7 - # We need to approximate the sqrt(2) in the lower_bound by an integer - # expression because floating point math overflows with these numbers - lower_bound = (14142135623730950489 * (2 ** (511 + 128*x))) // 10000000000000000000 - upper_bound = (1 << (512 + 128*x)) - 1 - # Randomly choose X in calculated range - X = getRandomRange (lower_bound, upper_bound, randfunc) - - # generate p1 and p2 - p = [0, 0] - for i in (0, 1): - # randomly choose 101-bit y - y = getRandomNBitInteger (101, randfunc) - # initialize the field for sieving - field = [0] * 5 * len (sieve_base) - # sieve the field - for prime in sieve_base: - offset = y % prime - for j in iter_range((prime - offset) % prime, len (field), prime): - field[j] = 1 - - # look for suitable p[i] starting at y - result = 0 - for j in range(len(field)): - composite = field[j] - # look for next canidate - if composite: - continue - tmp = y + j - result = _rabinMillerTest (tmp, rabin_miller_rounds) - if result > 0: - p[i] = tmp - break - if result == 0: - raise RuntimeError ("Couln't find prime in field. " - "Developer: Increase field_size") - - # Calculate R - # R = (p2^{-1} mod p1) * p2 - (p1^{-1} mod p2) * p1 - tmp1 = inverse (p[1], p[0]) * p[1] # (p2^-1 mod p1)*p2 - tmp2 = inverse (p[0], p[1]) * p[0] # (p1^-1 mod p2)*p1 - R = tmp1 - tmp2 # (p2^-1 mod p1)*p2 - (p1^-1 mod p2)*p1 - - # search for final prime number starting by Y0 - # Y0 = X + (R - X mod p1p2) - increment = p[0] * p[1] - X = X + (R - (X % increment)) - while 1: - is_possible_prime = 1 - # first check candidate against sieve_base - for prime in sieve_base: - if (X % prime) == 0: - is_possible_prime = 0 - break - # if e is given make sure that e and X-1 are coprime - # this is not necessarily a strong prime criterion but useful when - # creating them for RSA where the p-1 and q-1 should be coprime to - # the public exponent e - if e and is_possible_prime: - if e & 1: - if GCD(e, X-1) != 1: - is_possible_prime = 0 - else: - if GCD(e, (X-1) // 2) != 1: - is_possible_prime = 0 - - # do some Rabin-Miller-Tests - if is_possible_prime: - result = _rabinMillerTest (X, rabin_miller_rounds) - if result > 0: - break - X += increment - # abort when X has more bits than requested - # TODO: maybe we shouldn't abort but rather start over. - if X >= 1 << N: - raise RuntimeError ("Couln't find prime in field. " - "Developer: Increase field_size") - return X - -def isPrime(N, false_positive_prob=1e-6, randfunc=None): - r"""Test if a number *N* is a prime. - - Args: - false_positive_prob (float): - The statistical probability for the result not to be actually a - prime. It defaults to 10\ :sup:`-6`. - Note that the real probability of a false-positive is far less. - This is just the mathematically provable limit. 
- randfunc (callable): - A function that takes a parameter *N* and that returns - a random byte string of such length. - If omitted, :func:`Crypto.Random.get_random_bytes` is used. - - Return: - `True` is the input is indeed prime. - """ - - if randfunc is None: - randfunc = Random.get_random_bytes - - if _fastmath is not None: - return _fastmath.isPrime(long(N), false_positive_prob, randfunc) - - if N < 3 or N & 1 == 0: - return N == 2 - for p in sieve_base: - if N == p: - return True - if N % p == 0: - return False - - rounds = int(math.ceil(-math.log(false_positive_prob)/math.log(4))) - return bool(_rabinMillerTest(N, rounds, randfunc)) - - -# Improved conversion functions contributed by Barry Warsaw, after -# careful benchmarking - -import struct - -def long_to_bytes(n, blocksize=0): - """Convert a positive integer to a byte string using big endian encoding. - - If :data:`blocksize` is absent or zero, the byte string will - be of minimal length. - - Otherwise, the length of the byte string is guaranteed to be a multiple - of :data:`blocksize`. If necessary, zeroes (``\\x00``) are added at the left. - - .. note:: - In Python 3, if you are sure that :data:`n` can fit into - :data:`blocksize` bytes, you can simply use the native method instead:: - - >>> n.to_bytes(blocksize, 'big') - - For instance:: - - >>> n = 80 - >>> n.to_bytes(2, 'big') - b'\\x00P' - - However, and unlike this ``long_to_bytes()`` function, - an ``OverflowError`` exception is raised if :data:`n` does not fit. - """ - - if n < 0 or blocksize < 0: - raise ValueError("Values must be non-negative") - - result = [] - pack = struct.pack - - # Fill the first block independently from the value of n - bsr = blocksize - while bsr >= 8: - result.insert(0, pack('>Q', n & 0xFFFFFFFFFFFFFFFF)) - n = n >> 64 - bsr -= 8 - - while bsr >= 4: - result.insert(0, pack('>I', n & 0xFFFFFFFF)) - n = n >> 32 - bsr -= 4 - - while bsr > 0: - result.insert(0, pack('>B', n & 0xFF)) - n = n >> 8 - bsr -= 1 - - if n == 0: - if len(result) == 0: - bresult = b'\x00' - else: - bresult = b''.join(result) - else: - # The encoded number exceeds the block size - while n > 0: - result.insert(0, pack('>Q', n & 0xFFFFFFFFFFFFFFFF)) - n = n >> 64 - result[0] = result[0].lstrip(b'\x00') - bresult = b''.join(result) - # bresult has minimum length here - if blocksize > 0: - target_len = ((len(bresult) - 1) // blocksize + 1) * blocksize - bresult = b'\x00' * (target_len - len(bresult)) + bresult - - return bresult - - -def bytes_to_long(s): - """Convert a byte string to a long integer (big endian). - - In Python 3.2+, use the native method instead:: - - >>> int.from_bytes(s, 'big') - - For instance:: - - >>> int.from_bytes(b'\x00P', 'big') - 80 - - This is (essentially) the inverse of :func:`long_to_bytes`. - """ - acc = 0 - - unpack = struct.unpack - - # Up to Python 2.7.4, struct.unpack can't work with bytearrays nor - # memoryviews - if sys.version_info[0:3] < (2, 7, 4): - if isinstance(s, bytearray): - s = bytes(s) - elif isinstance(s, memoryview): - s = s.tobytes() - - length = len(s) - if length % 4: - extra = (4 - length % 4) - s = b'\x00' * extra + s - length = length + extra - for i in range(0, length, 4): - acc = (acc << 32) + unpack('>I', s[i:i+4])[0] - return acc - - -# For backwards compatibility... 
-import warnings -def long2str(n, blocksize=0): - warnings.warn("long2str() has been replaced by long_to_bytes()") - return long_to_bytes(n, blocksize) -def str2long(s): - warnings.warn("str2long() has been replaced by bytes_to_long()") - return bytes_to_long(s) - - -# The first 10000 primes used for checking primality. -# This should be enough to eliminate most of the odd -# numbers before needing to do a Rabin-Miller test at all. -sieve_base = ( - 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, - 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, - 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, - 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, - 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, - 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, - 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, - 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, - 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, - 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, - 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, - 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, - 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, - 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, - 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, - 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, - 947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, - 1019, 1021, 1031, 1033, 1039, 1049, 1051, 1061, 1063, 1069, - 1087, 1091, 1093, 1097, 1103, 1109, 1117, 1123, 1129, 1151, - 1153, 1163, 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223, - 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283, 1289, 1291, - 1297, 1301, 1303, 1307, 1319, 1321, 1327, 1361, 1367, 1373, - 1381, 1399, 1409, 1423, 1427, 1429, 1433, 1439, 1447, 1451, - 1453, 1459, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511, - 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571, 1579, 1583, - 1597, 1601, 1607, 1609, 1613, 1619, 1621, 1627, 1637, 1657, - 1663, 1667, 1669, 1693, 1697, 1699, 1709, 1721, 1723, 1733, - 1741, 1747, 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811, - 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877, 1879, 1889, - 1901, 1907, 1913, 1931, 1933, 1949, 1951, 1973, 1979, 1987, - 1993, 1997, 1999, 2003, 2011, 2017, 2027, 2029, 2039, 2053, - 2063, 2069, 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129, - 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203, 2207, 2213, - 2221, 2237, 2239, 2243, 2251, 2267, 2269, 2273, 2281, 2287, - 2293, 2297, 2309, 2311, 2333, 2339, 2341, 2347, 2351, 2357, - 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423, - 2437, 2441, 2447, 2459, 2467, 2473, 2477, 2503, 2521, 2531, - 2539, 2543, 2549, 2551, 2557, 2579, 2591, 2593, 2609, 2617, - 2621, 2633, 2647, 2657, 2659, 2663, 2671, 2677, 2683, 2687, - 2689, 2693, 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741, - 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801, 2803, 2819, - 2833, 2837, 2843, 2851, 2857, 2861, 2879, 2887, 2897, 2903, - 2909, 2917, 2927, 2939, 2953, 2957, 2963, 2969, 2971, 2999, - 3001, 3011, 3019, 3023, 3037, 3041, 3049, 3061, 3067, 3079, - 3083, 3089, 3109, 3119, 3121, 3137, 3163, 3167, 3169, 3181, - 3187, 3191, 3203, 3209, 3217, 3221, 3229, 3251, 3253, 3257, - 3259, 3271, 3299, 3301, 3307, 3313, 3319, 3323, 3329, 3331, - 3343, 3347, 3359, 3361, 3371, 3373, 3389, 3391, 3407, 3413, - 3433, 3449, 3457, 3461, 3463, 3467, 3469, 3491, 3499, 3511, - 3517, 3527, 3529, 3533, 3539, 3541, 3547, 3557, 3559, 3571, - 3581, 3583, 3593, 3607, 3613, 3617, 3623, 3631, 3637, 3643, - 3659, 3671, 3673, 3677, 3691, 3697, 3701, 3709, 3719, 3727, - 3733, 3739, 3761, 3767, 3769, 3779, 3793, 3797, 3803, 3821, - 3823, 3833, 
3847, 3851, 3853, 3863, 3877, 3881, 3889, 3907, - 3911, 3917, 3919, 3923, 3929, 3931, 3943, 3947, 3967, 3989, - 4001, 4003, 4007, 4013, 4019, 4021, 4027, 4049, 4051, 4057, - 4073, 4079, 4091, 4093, 4099, 4111, 4127, 4129, 4133, 4139, - 4153, 4157, 4159, 4177, 4201, 4211, 4217, 4219, 4229, 4231, - 4241, 4243, 4253, 4259, 4261, 4271, 4273, 4283, 4289, 4297, - 4327, 4337, 4339, 4349, 4357, 4363, 4373, 4391, 4397, 4409, - 4421, 4423, 4441, 4447, 4451, 4457, 4463, 4481, 4483, 4493, - 4507, 4513, 4517, 4519, 4523, 4547, 4549, 4561, 4567, 4583, - 4591, 4597, 4603, 4621, 4637, 4639, 4643, 4649, 4651, 4657, - 4663, 4673, 4679, 4691, 4703, 4721, 4723, 4729, 4733, 4751, - 4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813, 4817, 4831, - 4861, 4871, 4877, 4889, 4903, 4909, 4919, 4931, 4933, 4937, - 4943, 4951, 4957, 4967, 4969, 4973, 4987, 4993, 4999, 5003, - 5009, 5011, 5021, 5023, 5039, 5051, 5059, 5077, 5081, 5087, - 5099, 5101, 5107, 5113, 5119, 5147, 5153, 5167, 5171, 5179, - 5189, 5197, 5209, 5227, 5231, 5233, 5237, 5261, 5273, 5279, - 5281, 5297, 5303, 5309, 5323, 5333, 5347, 5351, 5381, 5387, - 5393, 5399, 5407, 5413, 5417, 5419, 5431, 5437, 5441, 5443, - 5449, 5471, 5477, 5479, 5483, 5501, 5503, 5507, 5519, 5521, - 5527, 5531, 5557, 5563, 5569, 5573, 5581, 5591, 5623, 5639, - 5641, 5647, 5651, 5653, 5657, 5659, 5669, 5683, 5689, 5693, - 5701, 5711, 5717, 5737, 5741, 5743, 5749, 5779, 5783, 5791, - 5801, 5807, 5813, 5821, 5827, 5839, 5843, 5849, 5851, 5857, - 5861, 5867, 5869, 5879, 5881, 5897, 5903, 5923, 5927, 5939, - 5953, 5981, 5987, 6007, 6011, 6029, 6037, 6043, 6047, 6053, - 6067, 6073, 6079, 6089, 6091, 6101, 6113, 6121, 6131, 6133, - 6143, 6151, 6163, 6173, 6197, 6199, 6203, 6211, 6217, 6221, - 6229, 6247, 6257, 6263, 6269, 6271, 6277, 6287, 6299, 6301, - 6311, 6317, 6323, 6329, 6337, 6343, 6353, 6359, 6361, 6367, - 6373, 6379, 6389, 6397, 6421, 6427, 6449, 6451, 6469, 6473, - 6481, 6491, 6521, 6529, 6547, 6551, 6553, 6563, 6569, 6571, - 6577, 6581, 6599, 6607, 6619, 6637, 6653, 6659, 6661, 6673, - 6679, 6689, 6691, 6701, 6703, 6709, 6719, 6733, 6737, 6761, - 6763, 6779, 6781, 6791, 6793, 6803, 6823, 6827, 6829, 6833, - 6841, 6857, 6863, 6869, 6871, 6883, 6899, 6907, 6911, 6917, - 6947, 6949, 6959, 6961, 6967, 6971, 6977, 6983, 6991, 6997, - 7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, - 7109, 7121, 7127, 7129, 7151, 7159, 7177, 7187, 7193, 7207, - 7211, 7213, 7219, 7229, 7237, 7243, 7247, 7253, 7283, 7297, - 7307, 7309, 7321, 7331, 7333, 7349, 7351, 7369, 7393, 7411, - 7417, 7433, 7451, 7457, 7459, 7477, 7481, 7487, 7489, 7499, - 7507, 7517, 7523, 7529, 7537, 7541, 7547, 7549, 7559, 7561, - 7573, 7577, 7583, 7589, 7591, 7603, 7607, 7621, 7639, 7643, - 7649, 7669, 7673, 7681, 7687, 7691, 7699, 7703, 7717, 7723, - 7727, 7741, 7753, 7757, 7759, 7789, 7793, 7817, 7823, 7829, - 7841, 7853, 7867, 7873, 7877, 7879, 7883, 7901, 7907, 7919, - 7927, 7933, 7937, 7949, 7951, 7963, 7993, 8009, 8011, 8017, - 8039, 8053, 8059, 8069, 8081, 8087, 8089, 8093, 8101, 8111, - 8117, 8123, 8147, 8161, 8167, 8171, 8179, 8191, 8209, 8219, - 8221, 8231, 8233, 8237, 8243, 8263, 8269, 8273, 8287, 8291, - 8293, 8297, 8311, 8317, 8329, 8353, 8363, 8369, 8377, 8387, - 8389, 8419, 8423, 8429, 8431, 8443, 8447, 8461, 8467, 8501, - 8513, 8521, 8527, 8537, 8539, 8543, 8563, 8573, 8581, 8597, - 8599, 8609, 8623, 8627, 8629, 8641, 8647, 8663, 8669, 8677, - 8681, 8689, 8693, 8699, 8707, 8713, 8719, 8731, 8737, 8741, - 8747, 8753, 8761, 8779, 8783, 8803, 8807, 8819, 8821, 8831, - 8837, 8839, 8849, 8861, 8863, 
8867, 8887, 8893, 8923, 8929, - 8933, 8941, 8951, 8963, 8969, 8971, 8999, 9001, 9007, 9011, - 9013, 9029, 9041, 9043, 9049, 9059, 9067, 9091, 9103, 9109, - 9127, 9133, 9137, 9151, 9157, 9161, 9173, 9181, 9187, 9199, - 9203, 9209, 9221, 9227, 9239, 9241, 9257, 9277, 9281, 9283, - 9293, 9311, 9319, 9323, 9337, 9341, 9343, 9349, 9371, 9377, - 9391, 9397, 9403, 9413, 9419, 9421, 9431, 9433, 9437, 9439, - 9461, 9463, 9467, 9473, 9479, 9491, 9497, 9511, 9521, 9533, - 9539, 9547, 9551, 9587, 9601, 9613, 9619, 9623, 9629, 9631, - 9643, 9649, 9661, 9677, 9679, 9689, 9697, 9719, 9721, 9733, - 9739, 9743, 9749, 9767, 9769, 9781, 9787, 9791, 9803, 9811, - 9817, 9829, 9833, 9839, 9851, 9857, 9859, 9871, 9883, 9887, - 9901, 9907, 9923, 9929, 9931, 9941, 9949, 9967, 9973, 10007, - 10009, 10037, 10039, 10061, 10067, 10069, 10079, 10091, 10093, 10099, - 10103, 10111, 10133, 10139, 10141, 10151, 10159, 10163, 10169, 10177, - 10181, 10193, 10211, 10223, 10243, 10247, 10253, 10259, 10267, 10271, - 10273, 10289, 10301, 10303, 10313, 10321, 10331, 10333, 10337, 10343, - 10357, 10369, 10391, 10399, 10427, 10429, 10433, 10453, 10457, 10459, - 10463, 10477, 10487, 10499, 10501, 10513, 10529, 10531, 10559, 10567, - 10589, 10597, 10601, 10607, 10613, 10627, 10631, 10639, 10651, 10657, - 10663, 10667, 10687, 10691, 10709, 10711, 10723, 10729, 10733, 10739, - 10753, 10771, 10781, 10789, 10799, 10831, 10837, 10847, 10853, 10859, - 10861, 10867, 10883, 10889, 10891, 10903, 10909, 10937, 10939, 10949, - 10957, 10973, 10979, 10987, 10993, 11003, 11027, 11047, 11057, 11059, - 11069, 11071, 11083, 11087, 11093, 11113, 11117, 11119, 11131, 11149, - 11159, 11161, 11171, 11173, 11177, 11197, 11213, 11239, 11243, 11251, - 11257, 11261, 11273, 11279, 11287, 11299, 11311, 11317, 11321, 11329, - 11351, 11353, 11369, 11383, 11393, 11399, 11411, 11423, 11437, 11443, - 11447, 11467, 11471, 11483, 11489, 11491, 11497, 11503, 11519, 11527, - 11549, 11551, 11579, 11587, 11593, 11597, 11617, 11621, 11633, 11657, - 11677, 11681, 11689, 11699, 11701, 11717, 11719, 11731, 11743, 11777, - 11779, 11783, 11789, 11801, 11807, 11813, 11821, 11827, 11831, 11833, - 11839, 11863, 11867, 11887, 11897, 11903, 11909, 11923, 11927, 11933, - 11939, 11941, 11953, 11959, 11969, 11971, 11981, 11987, 12007, 12011, - 12037, 12041, 12043, 12049, 12071, 12073, 12097, 12101, 12107, 12109, - 12113, 12119, 12143, 12149, 12157, 12161, 12163, 12197, 12203, 12211, - 12227, 12239, 12241, 12251, 12253, 12263, 12269, 12277, 12281, 12289, - 12301, 12323, 12329, 12343, 12347, 12373, 12377, 12379, 12391, 12401, - 12409, 12413, 12421, 12433, 12437, 12451, 12457, 12473, 12479, 12487, - 12491, 12497, 12503, 12511, 12517, 12527, 12539, 12541, 12547, 12553, - 12569, 12577, 12583, 12589, 12601, 12611, 12613, 12619, 12637, 12641, - 12647, 12653, 12659, 12671, 12689, 12697, 12703, 12713, 12721, 12739, - 12743, 12757, 12763, 12781, 12791, 12799, 12809, 12821, 12823, 12829, - 12841, 12853, 12889, 12893, 12899, 12907, 12911, 12917, 12919, 12923, - 12941, 12953, 12959, 12967, 12973, 12979, 12983, 13001, 13003, 13007, - 13009, 13033, 13037, 13043, 13049, 13063, 13093, 13099, 13103, 13109, - 13121, 13127, 13147, 13151, 13159, 13163, 13171, 13177, 13183, 13187, - 13217, 13219, 13229, 13241, 13249, 13259, 13267, 13291, 13297, 13309, - 13313, 13327, 13331, 13337, 13339, 13367, 13381, 13397, 13399, 13411, - 13417, 13421, 13441, 13451, 13457, 13463, 13469, 13477, 13487, 13499, - 13513, 13523, 13537, 13553, 13567, 13577, 13591, 13597, 13613, 13619, - 13627, 13633, 13649, 13669, 13679, 13681, 
13687, 13691, 13693, 13697, - 13709, 13711, 13721, 13723, 13729, 13751, 13757, 13759, 13763, 13781, - 13789, 13799, 13807, 13829, 13831, 13841, 13859, 13873, 13877, 13879, - 13883, 13901, 13903, 13907, 13913, 13921, 13931, 13933, 13963, 13967, - 13997, 13999, 14009, 14011, 14029, 14033, 14051, 14057, 14071, 14081, - 14083, 14087, 14107, 14143, 14149, 14153, 14159, 14173, 14177, 14197, - 14207, 14221, 14243, 14249, 14251, 14281, 14293, 14303, 14321, 14323, - 14327, 14341, 14347, 14369, 14387, 14389, 14401, 14407, 14411, 14419, - 14423, 14431, 14437, 14447, 14449, 14461, 14479, 14489, 14503, 14519, - 14533, 14537, 14543, 14549, 14551, 14557, 14561, 14563, 14591, 14593, - 14621, 14627, 14629, 14633, 14639, 14653, 14657, 14669, 14683, 14699, - 14713, 14717, 14723, 14731, 14737, 14741, 14747, 14753, 14759, 14767, - 14771, 14779, 14783, 14797, 14813, 14821, 14827, 14831, 14843, 14851, - 14867, 14869, 14879, 14887, 14891, 14897, 14923, 14929, 14939, 14947, - 14951, 14957, 14969, 14983, 15013, 15017, 15031, 15053, 15061, 15073, - 15077, 15083, 15091, 15101, 15107, 15121, 15131, 15137, 15139, 15149, - 15161, 15173, 15187, 15193, 15199, 15217, 15227, 15233, 15241, 15259, - 15263, 15269, 15271, 15277, 15287, 15289, 15299, 15307, 15313, 15319, - 15329, 15331, 15349, 15359, 15361, 15373, 15377, 15383, 15391, 15401, - 15413, 15427, 15439, 15443, 15451, 15461, 15467, 15473, 15493, 15497, - 15511, 15527, 15541, 15551, 15559, 15569, 15581, 15583, 15601, 15607, - 15619, 15629, 15641, 15643, 15647, 15649, 15661, 15667, 15671, 15679, - 15683, 15727, 15731, 15733, 15737, 15739, 15749, 15761, 15767, 15773, - 15787, 15791, 15797, 15803, 15809, 15817, 15823, 15859, 15877, 15881, - 15887, 15889, 15901, 15907, 15913, 15919, 15923, 15937, 15959, 15971, - 15973, 15991, 16001, 16007, 16033, 16057, 16061, 16063, 16067, 16069, - 16073, 16087, 16091, 16097, 16103, 16111, 16127, 16139, 16141, 16183, - 16187, 16189, 16193, 16217, 16223, 16229, 16231, 16249, 16253, 16267, - 16273, 16301, 16319, 16333, 16339, 16349, 16361, 16363, 16369, 16381, - 16411, 16417, 16421, 16427, 16433, 16447, 16451, 16453, 16477, 16481, - 16487, 16493, 16519, 16529, 16547, 16553, 16561, 16567, 16573, 16603, - 16607, 16619, 16631, 16633, 16649, 16651, 16657, 16661, 16673, 16691, - 16693, 16699, 16703, 16729, 16741, 16747, 16759, 16763, 16787, 16811, - 16823, 16829, 16831, 16843, 16871, 16879, 16883, 16889, 16901, 16903, - 16921, 16927, 16931, 16937, 16943, 16963, 16979, 16981, 16987, 16993, - 17011, 17021, 17027, 17029, 17033, 17041, 17047, 17053, 17077, 17093, - 17099, 17107, 17117, 17123, 17137, 17159, 17167, 17183, 17189, 17191, - 17203, 17207, 17209, 17231, 17239, 17257, 17291, 17293, 17299, 17317, - 17321, 17327, 17333, 17341, 17351, 17359, 17377, 17383, 17387, 17389, - 17393, 17401, 17417, 17419, 17431, 17443, 17449, 17467, 17471, 17477, - 17483, 17489, 17491, 17497, 17509, 17519, 17539, 17551, 17569, 17573, - 17579, 17581, 17597, 17599, 17609, 17623, 17627, 17657, 17659, 17669, - 17681, 17683, 17707, 17713, 17729, 17737, 17747, 17749, 17761, 17783, - 17789, 17791, 17807, 17827, 17837, 17839, 17851, 17863, 17881, 17891, - 17903, 17909, 17911, 17921, 17923, 17929, 17939, 17957, 17959, 17971, - 17977, 17981, 17987, 17989, 18013, 18041, 18043, 18047, 18049, 18059, - 18061, 18077, 18089, 18097, 18119, 18121, 18127, 18131, 18133, 18143, - 18149, 18169, 18181, 18191, 18199, 18211, 18217, 18223, 18229, 18233, - 18251, 18253, 18257, 18269, 18287, 18289, 18301, 18307, 18311, 18313, - 18329, 18341, 18353, 18367, 18371, 18379, 18397, 18401, 18413, 
18427, - 18433, 18439, 18443, 18451, 18457, 18461, 18481, 18493, 18503, 18517, - 18521, 18523, 18539, 18541, 18553, 18583, 18587, 18593, 18617, 18637, - 18661, 18671, 18679, 18691, 18701, 18713, 18719, 18731, 18743, 18749, - 18757, 18773, 18787, 18793, 18797, 18803, 18839, 18859, 18869, 18899, - 18911, 18913, 18917, 18919, 18947, 18959, 18973, 18979, 19001, 19009, - 19013, 19031, 19037, 19051, 19069, 19073, 19079, 19081, 19087, 19121, - 19139, 19141, 19157, 19163, 19181, 19183, 19207, 19211, 19213, 19219, - 19231, 19237, 19249, 19259, 19267, 19273, 19289, 19301, 19309, 19319, - 19333, 19373, 19379, 19381, 19387, 19391, 19403, 19417, 19421, 19423, - 19427, 19429, 19433, 19441, 19447, 19457, 19463, 19469, 19471, 19477, - 19483, 19489, 19501, 19507, 19531, 19541, 19543, 19553, 19559, 19571, - 19577, 19583, 19597, 19603, 19609, 19661, 19681, 19687, 19697, 19699, - 19709, 19717, 19727, 19739, 19751, 19753, 19759, 19763, 19777, 19793, - 19801, 19813, 19819, 19841, 19843, 19853, 19861, 19867, 19889, 19891, - 19913, 19919, 19927, 19937, 19949, 19961, 19963, 19973, 19979, 19991, - 19993, 19997, 20011, 20021, 20023, 20029, 20047, 20051, 20063, 20071, - 20089, 20101, 20107, 20113, 20117, 20123, 20129, 20143, 20147, 20149, - 20161, 20173, 20177, 20183, 20201, 20219, 20231, 20233, 20249, 20261, - 20269, 20287, 20297, 20323, 20327, 20333, 20341, 20347, 20353, 20357, - 20359, 20369, 20389, 20393, 20399, 20407, 20411, 20431, 20441, 20443, - 20477, 20479, 20483, 20507, 20509, 20521, 20533, 20543, 20549, 20551, - 20563, 20593, 20599, 20611, 20627, 20639, 20641, 20663, 20681, 20693, - 20707, 20717, 20719, 20731, 20743, 20747, 20749, 20753, 20759, 20771, - 20773, 20789, 20807, 20809, 20849, 20857, 20873, 20879, 20887, 20897, - 20899, 20903, 20921, 20929, 20939, 20947, 20959, 20963, 20981, 20983, - 21001, 21011, 21013, 21017, 21019, 21023, 21031, 21059, 21061, 21067, - 21089, 21101, 21107, 21121, 21139, 21143, 21149, 21157, 21163, 21169, - 21179, 21187, 21191, 21193, 21211, 21221, 21227, 21247, 21269, 21277, - 21283, 21313, 21317, 21319, 21323, 21341, 21347, 21377, 21379, 21383, - 21391, 21397, 21401, 21407, 21419, 21433, 21467, 21481, 21487, 21491, - 21493, 21499, 21503, 21517, 21521, 21523, 21529, 21557, 21559, 21563, - 21569, 21577, 21587, 21589, 21599, 21601, 21611, 21613, 21617, 21647, - 21649, 21661, 21673, 21683, 21701, 21713, 21727, 21737, 21739, 21751, - 21757, 21767, 21773, 21787, 21799, 21803, 21817, 21821, 21839, 21841, - 21851, 21859, 21863, 21871, 21881, 21893, 21911, 21929, 21937, 21943, - 21961, 21977, 21991, 21997, 22003, 22013, 22027, 22031, 22037, 22039, - 22051, 22063, 22067, 22073, 22079, 22091, 22093, 22109, 22111, 22123, - 22129, 22133, 22147, 22153, 22157, 22159, 22171, 22189, 22193, 22229, - 22247, 22259, 22271, 22273, 22277, 22279, 22283, 22291, 22303, 22307, - 22343, 22349, 22367, 22369, 22381, 22391, 22397, 22409, 22433, 22441, - 22447, 22453, 22469, 22481, 22483, 22501, 22511, 22531, 22541, 22543, - 22549, 22567, 22571, 22573, 22613, 22619, 22621, 22637, 22639, 22643, - 22651, 22669, 22679, 22691, 22697, 22699, 22709, 22717, 22721, 22727, - 22739, 22741, 22751, 22769, 22777, 22783, 22787, 22807, 22811, 22817, - 22853, 22859, 22861, 22871, 22877, 22901, 22907, 22921, 22937, 22943, - 22961, 22963, 22973, 22993, 23003, 23011, 23017, 23021, 23027, 23029, - 23039, 23041, 23053, 23057, 23059, 23063, 23071, 23081, 23087, 23099, - 23117, 23131, 23143, 23159, 23167, 23173, 23189, 23197, 23201, 23203, - 23209, 23227, 23251, 23269, 23279, 23291, 23293, 23297, 23311, 23321, - 23327, 23333, 
23339, 23357, 23369, 23371, 23399, 23417, 23431, 23447, - 23459, 23473, 23497, 23509, 23531, 23537, 23539, 23549, 23557, 23561, - 23563, 23567, 23581, 23593, 23599, 23603, 23609, 23623, 23627, 23629, - 23633, 23663, 23669, 23671, 23677, 23687, 23689, 23719, 23741, 23743, - 23747, 23753, 23761, 23767, 23773, 23789, 23801, 23813, 23819, 23827, - 23831, 23833, 23857, 23869, 23873, 23879, 23887, 23893, 23899, 23909, - 23911, 23917, 23929, 23957, 23971, 23977, 23981, 23993, 24001, 24007, - 24019, 24023, 24029, 24043, 24049, 24061, 24071, 24077, 24083, 24091, - 24097, 24103, 24107, 24109, 24113, 24121, 24133, 24137, 24151, 24169, - 24179, 24181, 24197, 24203, 24223, 24229, 24239, 24247, 24251, 24281, - 24317, 24329, 24337, 24359, 24371, 24373, 24379, 24391, 24407, 24413, - 24419, 24421, 24439, 24443, 24469, 24473, 24481, 24499, 24509, 24517, - 24527, 24533, 24547, 24551, 24571, 24593, 24611, 24623, 24631, 24659, - 24671, 24677, 24683, 24691, 24697, 24709, 24733, 24749, 24763, 24767, - 24781, 24793, 24799, 24809, 24821, 24841, 24847, 24851, 24859, 24877, - 24889, 24907, 24917, 24919, 24923, 24943, 24953, 24967, 24971, 24977, - 24979, 24989, 25013, 25031, 25033, 25037, 25057, 25073, 25087, 25097, - 25111, 25117, 25121, 25127, 25147, 25153, 25163, 25169, 25171, 25183, - 25189, 25219, 25229, 25237, 25243, 25247, 25253, 25261, 25301, 25303, - 25307, 25309, 25321, 25339, 25343, 25349, 25357, 25367, 25373, 25391, - 25409, 25411, 25423, 25439, 25447, 25453, 25457, 25463, 25469, 25471, - 25523, 25537, 25541, 25561, 25577, 25579, 25583, 25589, 25601, 25603, - 25609, 25621, 25633, 25639, 25643, 25657, 25667, 25673, 25679, 25693, - 25703, 25717, 25733, 25741, 25747, 25759, 25763, 25771, 25793, 25799, - 25801, 25819, 25841, 25847, 25849, 25867, 25873, 25889, 25903, 25913, - 25919, 25931, 25933, 25939, 25943, 25951, 25969, 25981, 25997, 25999, - 26003, 26017, 26021, 26029, 26041, 26053, 26083, 26099, 26107, 26111, - 26113, 26119, 26141, 26153, 26161, 26171, 26177, 26183, 26189, 26203, - 26209, 26227, 26237, 26249, 26251, 26261, 26263, 26267, 26293, 26297, - 26309, 26317, 26321, 26339, 26347, 26357, 26371, 26387, 26393, 26399, - 26407, 26417, 26423, 26431, 26437, 26449, 26459, 26479, 26489, 26497, - 26501, 26513, 26539, 26557, 26561, 26573, 26591, 26597, 26627, 26633, - 26641, 26647, 26669, 26681, 26683, 26687, 26693, 26699, 26701, 26711, - 26713, 26717, 26723, 26729, 26731, 26737, 26759, 26777, 26783, 26801, - 26813, 26821, 26833, 26839, 26849, 26861, 26863, 26879, 26881, 26891, - 26893, 26903, 26921, 26927, 26947, 26951, 26953, 26959, 26981, 26987, - 26993, 27011, 27017, 27031, 27043, 27059, 27061, 27067, 27073, 27077, - 27091, 27103, 27107, 27109, 27127, 27143, 27179, 27191, 27197, 27211, - 27239, 27241, 27253, 27259, 27271, 27277, 27281, 27283, 27299, 27329, - 27337, 27361, 27367, 27397, 27407, 27409, 27427, 27431, 27437, 27449, - 27457, 27479, 27481, 27487, 27509, 27527, 27529, 27539, 27541, 27551, - 27581, 27583, 27611, 27617, 27631, 27647, 27653, 27673, 27689, 27691, - 27697, 27701, 27733, 27737, 27739, 27743, 27749, 27751, 27763, 27767, - 27773, 27779, 27791, 27793, 27799, 27803, 27809, 27817, 27823, 27827, - 27847, 27851, 27883, 27893, 27901, 27917, 27919, 27941, 27943, 27947, - 27953, 27961, 27967, 27983, 27997, 28001, 28019, 28027, 28031, 28051, - 28057, 28069, 28081, 28087, 28097, 28099, 28109, 28111, 28123, 28151, - 28163, 28181, 28183, 28201, 28211, 28219, 28229, 28277, 28279, 28283, - 28289, 28297, 28307, 28309, 28319, 28349, 28351, 28387, 28393, 28403, - 28409, 28411, 28429, 28433, 28439, 
28447, 28463, 28477, 28493, 28499, - 28513, 28517, 28537, 28541, 28547, 28549, 28559, 28571, 28573, 28579, - 28591, 28597, 28603, 28607, 28619, 28621, 28627, 28631, 28643, 28649, - 28657, 28661, 28663, 28669, 28687, 28697, 28703, 28711, 28723, 28729, - 28751, 28753, 28759, 28771, 28789, 28793, 28807, 28813, 28817, 28837, - 28843, 28859, 28867, 28871, 28879, 28901, 28909, 28921, 28927, 28933, - 28949, 28961, 28979, 29009, 29017, 29021, 29023, 29027, 29033, 29059, - 29063, 29077, 29101, 29123, 29129, 29131, 29137, 29147, 29153, 29167, - 29173, 29179, 29191, 29201, 29207, 29209, 29221, 29231, 29243, 29251, - 29269, 29287, 29297, 29303, 29311, 29327, 29333, 29339, 29347, 29363, - 29383, 29387, 29389, 29399, 29401, 29411, 29423, 29429, 29437, 29443, - 29453, 29473, 29483, 29501, 29527, 29531, 29537, 29567, 29569, 29573, - 29581, 29587, 29599, 29611, 29629, 29633, 29641, 29663, 29669, 29671, - 29683, 29717, 29723, 29741, 29753, 29759, 29761, 29789, 29803, 29819, - 29833, 29837, 29851, 29863, 29867, 29873, 29879, 29881, 29917, 29921, - 29927, 29947, 29959, 29983, 29989, 30011, 30013, 30029, 30047, 30059, - 30071, 30089, 30091, 30097, 30103, 30109, 30113, 30119, 30133, 30137, - 30139, 30161, 30169, 30181, 30187, 30197, 30203, 30211, 30223, 30241, - 30253, 30259, 30269, 30271, 30293, 30307, 30313, 30319, 30323, 30341, - 30347, 30367, 30389, 30391, 30403, 30427, 30431, 30449, 30467, 30469, - 30491, 30493, 30497, 30509, 30517, 30529, 30539, 30553, 30557, 30559, - 30577, 30593, 30631, 30637, 30643, 30649, 30661, 30671, 30677, 30689, - 30697, 30703, 30707, 30713, 30727, 30757, 30763, 30773, 30781, 30803, - 30809, 30817, 30829, 30839, 30841, 30851, 30853, 30859, 30869, 30871, - 30881, 30893, 30911, 30931, 30937, 30941, 30949, 30971, 30977, 30983, - 31013, 31019, 31033, 31039, 31051, 31063, 31069, 31079, 31081, 31091, - 31121, 31123, 31139, 31147, 31151, 31153, 31159, 31177, 31181, 31183, - 31189, 31193, 31219, 31223, 31231, 31237, 31247, 31249, 31253, 31259, - 31267, 31271, 31277, 31307, 31319, 31321, 31327, 31333, 31337, 31357, - 31379, 31387, 31391, 31393, 31397, 31469, 31477, 31481, 31489, 31511, - 31513, 31517, 31531, 31541, 31543, 31547, 31567, 31573, 31583, 31601, - 31607, 31627, 31643, 31649, 31657, 31663, 31667, 31687, 31699, 31721, - 31723, 31727, 31729, 31741, 31751, 31769, 31771, 31793, 31799, 31817, - 31847, 31849, 31859, 31873, 31883, 31891, 31907, 31957, 31963, 31973, - 31981, 31991, 32003, 32009, 32027, 32029, 32051, 32057, 32059, 32063, - 32069, 32077, 32083, 32089, 32099, 32117, 32119, 32141, 32143, 32159, - 32173, 32183, 32189, 32191, 32203, 32213, 32233, 32237, 32251, 32257, - 32261, 32297, 32299, 32303, 32309, 32321, 32323, 32327, 32341, 32353, - 32359, 32363, 32369, 32371, 32377, 32381, 32401, 32411, 32413, 32423, - 32429, 32441, 32443, 32467, 32479, 32491, 32497, 32503, 32507, 32531, - 32533, 32537, 32561, 32563, 32569, 32573, 32579, 32587, 32603, 32609, - 32611, 32621, 32633, 32647, 32653, 32687, 32693, 32707, 32713, 32717, - 32719, 32749, 32771, 32779, 32783, 32789, 32797, 32801, 32803, 32831, - 32833, 32839, 32843, 32869, 32887, 32909, 32911, 32917, 32933, 32939, - 32941, 32957, 32969, 32971, 32983, 32987, 32993, 32999, 33013, 33023, - 33029, 33037, 33049, 33053, 33071, 33073, 33083, 33091, 33107, 33113, - 33119, 33149, 33151, 33161, 33179, 33181, 33191, 33199, 33203, 33211, - 33223, 33247, 33287, 33289, 33301, 33311, 33317, 33329, 33331, 33343, - 33347, 33349, 33353, 33359, 33377, 33391, 33403, 33409, 33413, 33427, - 33457, 33461, 33469, 33479, 33487, 33493, 33503, 33521, 
33529, 33533, - 33547, 33563, 33569, 33577, 33581, 33587, 33589, 33599, 33601, 33613, - 33617, 33619, 33623, 33629, 33637, 33641, 33647, 33679, 33703, 33713, - 33721, 33739, 33749, 33751, 33757, 33767, 33769, 33773, 33791, 33797, - 33809, 33811, 33827, 33829, 33851, 33857, 33863, 33871, 33889, 33893, - 33911, 33923, 33931, 33937, 33941, 33961, 33967, 33997, 34019, 34031, - 34033, 34039, 34057, 34061, 34123, 34127, 34129, 34141, 34147, 34157, - 34159, 34171, 34183, 34211, 34213, 34217, 34231, 34253, 34259, 34261, - 34267, 34273, 34283, 34297, 34301, 34303, 34313, 34319, 34327, 34337, - 34351, 34361, 34367, 34369, 34381, 34403, 34421, 34429, 34439, 34457, - 34469, 34471, 34483, 34487, 34499, 34501, 34511, 34513, 34519, 34537, - 34543, 34549, 34583, 34589, 34591, 34603, 34607, 34613, 34631, 34649, - 34651, 34667, 34673, 34679, 34687, 34693, 34703, 34721, 34729, 34739, - 34747, 34757, 34759, 34763, 34781, 34807, 34819, 34841, 34843, 34847, - 34849, 34871, 34877, 34883, 34897, 34913, 34919, 34939, 34949, 34961, - 34963, 34981, 35023, 35027, 35051, 35053, 35059, 35069, 35081, 35083, - 35089, 35099, 35107, 35111, 35117, 35129, 35141, 35149, 35153, 35159, - 35171, 35201, 35221, 35227, 35251, 35257, 35267, 35279, 35281, 35291, - 35311, 35317, 35323, 35327, 35339, 35353, 35363, 35381, 35393, 35401, - 35407, 35419, 35423, 35437, 35447, 35449, 35461, 35491, 35507, 35509, - 35521, 35527, 35531, 35533, 35537, 35543, 35569, 35573, 35591, 35593, - 35597, 35603, 35617, 35671, 35677, 35729, 35731, 35747, 35753, 35759, - 35771, 35797, 35801, 35803, 35809, 35831, 35837, 35839, 35851, 35863, - 35869, 35879, 35897, 35899, 35911, 35923, 35933, 35951, 35963, 35969, - 35977, 35983, 35993, 35999, 36007, 36011, 36013, 36017, 36037, 36061, - 36067, 36073, 36083, 36097, 36107, 36109, 36131, 36137, 36151, 36161, - 36187, 36191, 36209, 36217, 36229, 36241, 36251, 36263, 36269, 36277, - 36293, 36299, 36307, 36313, 36319, 36341, 36343, 36353, 36373, 36383, - 36389, 36433, 36451, 36457, 36467, 36469, 36473, 36479, 36493, 36497, - 36523, 36527, 36529, 36541, 36551, 36559, 36563, 36571, 36583, 36587, - 36599, 36607, 36629, 36637, 36643, 36653, 36671, 36677, 36683, 36691, - 36697, 36709, 36713, 36721, 36739, 36749, 36761, 36767, 36779, 36781, - 36787, 36791, 36793, 36809, 36821, 36833, 36847, 36857, 36871, 36877, - 36887, 36899, 36901, 36913, 36919, 36923, 36929, 36931, 36943, 36947, - 36973, 36979, 36997, 37003, 37013, 37019, 37021, 37039, 37049, 37057, - 37061, 37087, 37097, 37117, 37123, 37139, 37159, 37171, 37181, 37189, - 37199, 37201, 37217, 37223, 37243, 37253, 37273, 37277, 37307, 37309, - 37313, 37321, 37337, 37339, 37357, 37361, 37363, 37369, 37379, 37397, - 37409, 37423, 37441, 37447, 37463, 37483, 37489, 37493, 37501, 37507, - 37511, 37517, 37529, 37537, 37547, 37549, 37561, 37567, 37571, 37573, - 37579, 37589, 37591, 37607, 37619, 37633, 37643, 37649, 37657, 37663, - 37691, 37693, 37699, 37717, 37747, 37781, 37783, 37799, 37811, 37813, - 37831, 37847, 37853, 37861, 37871, 37879, 37889, 37897, 37907, 37951, - 37957, 37963, 37967, 37987, 37991, 37993, 37997, 38011, 38039, 38047, - 38053, 38069, 38083, 38113, 38119, 38149, 38153, 38167, 38177, 38183, - 38189, 38197, 38201, 38219, 38231, 38237, 38239, 38261, 38273, 38281, - 38287, 38299, 38303, 38317, 38321, 38327, 38329, 38333, 38351, 38371, - 38377, 38393, 38431, 38447, 38449, 38453, 38459, 38461, 38501, 38543, - 38557, 38561, 38567, 38569, 38593, 38603, 38609, 38611, 38629, 38639, - 38651, 38653, 38669, 38671, 38677, 38693, 38699, 38707, 38711, 38713, - 38723, 
38729, 38737, 38747, 38749, 38767, 38783, 38791, 38803, 38821, - 38833, 38839, 38851, 38861, 38867, 38873, 38891, 38903, 38917, 38921, - 38923, 38933, 38953, 38959, 38971, 38977, 38993, 39019, 39023, 39041, - 39043, 39047, 39079, 39089, 39097, 39103, 39107, 39113, 39119, 39133, - 39139, 39157, 39161, 39163, 39181, 39191, 39199, 39209, 39217, 39227, - 39229, 39233, 39239, 39241, 39251, 39293, 39301, 39313, 39317, 39323, - 39341, 39343, 39359, 39367, 39371, 39373, 39383, 39397, 39409, 39419, - 39439, 39443, 39451, 39461, 39499, 39503, 39509, 39511, 39521, 39541, - 39551, 39563, 39569, 39581, 39607, 39619, 39623, 39631, 39659, 39667, - 39671, 39679, 39703, 39709, 39719, 39727, 39733, 39749, 39761, 39769, - 39779, 39791, 39799, 39821, 39827, 39829, 39839, 39841, 39847, 39857, - 39863, 39869, 39877, 39883, 39887, 39901, 39929, 39937, 39953, 39971, - 39979, 39983, 39989, 40009, 40013, 40031, 40037, 40039, 40063, 40087, - 40093, 40099, 40111, 40123, 40127, 40129, 40151, 40153, 40163, 40169, - 40177, 40189, 40193, 40213, 40231, 40237, 40241, 40253, 40277, 40283, - 40289, 40343, 40351, 40357, 40361, 40387, 40423, 40427, 40429, 40433, - 40459, 40471, 40483, 40487, 40493, 40499, 40507, 40519, 40529, 40531, - 40543, 40559, 40577, 40583, 40591, 40597, 40609, 40627, 40637, 40639, - 40693, 40697, 40699, 40709, 40739, 40751, 40759, 40763, 40771, 40787, - 40801, 40813, 40819, 40823, 40829, 40841, 40847, 40849, 40853, 40867, - 40879, 40883, 40897, 40903, 40927, 40933, 40939, 40949, 40961, 40973, - 40993, 41011, 41017, 41023, 41039, 41047, 41051, 41057, 41077, 41081, - 41113, 41117, 41131, 41141, 41143, 41149, 41161, 41177, 41179, 41183, - 41189, 41201, 41203, 41213, 41221, 41227, 41231, 41233, 41243, 41257, - 41263, 41269, 41281, 41299, 41333, 41341, 41351, 41357, 41381, 41387, - 41389, 41399, 41411, 41413, 41443, 41453, 41467, 41479, 41491, 41507, - 41513, 41519, 41521, 41539, 41543, 41549, 41579, 41593, 41597, 41603, - 41609, 41611, 41617, 41621, 41627, 41641, 41647, 41651, 41659, 41669, - 41681, 41687, 41719, 41729, 41737, 41759, 41761, 41771, 41777, 41801, - 41809, 41813, 41843, 41849, 41851, 41863, 41879, 41887, 41893, 41897, - 41903, 41911, 41927, 41941, 41947, 41953, 41957, 41959, 41969, 41981, - 41983, 41999, 42013, 42017, 42019, 42023, 42043, 42061, 42071, 42073, - 42083, 42089, 42101, 42131, 42139, 42157, 42169, 42179, 42181, 42187, - 42193, 42197, 42209, 42221, 42223, 42227, 42239, 42257, 42281, 42283, - 42293, 42299, 42307, 42323, 42331, 42337, 42349, 42359, 42373, 42379, - 42391, 42397, 42403, 42407, 42409, 42433, 42437, 42443, 42451, 42457, - 42461, 42463, 42467, 42473, 42487, 42491, 42499, 42509, 42533, 42557, - 42569, 42571, 42577, 42589, 42611, 42641, 42643, 42649, 42667, 42677, - 42683, 42689, 42697, 42701, 42703, 42709, 42719, 42727, 42737, 42743, - 42751, 42767, 42773, 42787, 42793, 42797, 42821, 42829, 42839, 42841, - 42853, 42859, 42863, 42899, 42901, 42923, 42929, 42937, 42943, 42953, - 42961, 42967, 42979, 42989, 43003, 43013, 43019, 43037, 43049, 43051, - 43063, 43067, 43093, 43103, 43117, 43133, 43151, 43159, 43177, 43189, - 43201, 43207, 43223, 43237, 43261, 43271, 43283, 43291, 43313, 43319, - 43321, 43331, 43391, 43397, 43399, 43403, 43411, 43427, 43441, 43451, - 43457, 43481, 43487, 43499, 43517, 43541, 43543, 43573, 43577, 43579, - 43591, 43597, 43607, 43609, 43613, 43627, 43633, 43649, 43651, 43661, - 43669, 43691, 43711, 43717, 43721, 43753, 43759, 43777, 43781, 43783, - 43787, 43789, 43793, 43801, 43853, 43867, 43889, 43891, 43913, 43933, - 43943, 43951, 43961, 43963, 
43969, 43973, 43987, 43991, 43997, 44017, - 44021, 44027, 44029, 44041, 44053, 44059, 44071, 44087, 44089, 44101, - 44111, 44119, 44123, 44129, 44131, 44159, 44171, 44179, 44189, 44201, - 44203, 44207, 44221, 44249, 44257, 44263, 44267, 44269, 44273, 44279, - 44281, 44293, 44351, 44357, 44371, 44381, 44383, 44389, 44417, 44449, - 44453, 44483, 44491, 44497, 44501, 44507, 44519, 44531, 44533, 44537, - 44543, 44549, 44563, 44579, 44587, 44617, 44621, 44623, 44633, 44641, - 44647, 44651, 44657, 44683, 44687, 44699, 44701, 44711, 44729, 44741, - 44753, 44771, 44773, 44777, 44789, 44797, 44809, 44819, 44839, 44843, - 44851, 44867, 44879, 44887, 44893, 44909, 44917, 44927, 44939, 44953, - 44959, 44963, 44971, 44983, 44987, 45007, 45013, 45053, 45061, 45077, - 45083, 45119, 45121, 45127, 45131, 45137, 45139, 45161, 45179, 45181, - 45191, 45197, 45233, 45247, 45259, 45263, 45281, 45289, 45293, 45307, - 45317, 45319, 45329, 45337, 45341, 45343, 45361, 45377, 45389, 45403, - 45413, 45427, 45433, 45439, 45481, 45491, 45497, 45503, 45523, 45533, - 45541, 45553, 45557, 45569, 45587, 45589, 45599, 45613, 45631, 45641, - 45659, 45667, 45673, 45677, 45691, 45697, 45707, 45737, 45751, 45757, - 45763, 45767, 45779, 45817, 45821, 45823, 45827, 45833, 45841, 45853, - 45863, 45869, 45887, 45893, 45943, 45949, 45953, 45959, 45971, 45979, - 45989, 46021, 46027, 46049, 46051, 46061, 46073, 46091, 46093, 46099, - 46103, 46133, 46141, 46147, 46153, 46171, 46181, 46183, 46187, 46199, - 46219, 46229, 46237, 46261, 46271, 46273, 46279, 46301, 46307, 46309, - 46327, 46337, 46349, 46351, 46381, 46399, 46411, 46439, 46441, 46447, - 46451, 46457, 46471, 46477, 46489, 46499, 46507, 46511, 46523, 46549, - 46559, 46567, 46573, 46589, 46591, 46601, 46619, 46633, 46639, 46643, - 46649, 46663, 46679, 46681, 46687, 46691, 46703, 46723, 46727, 46747, - 46751, 46757, 46769, 46771, 46807, 46811, 46817, 46819, 46829, 46831, - 46853, 46861, 46867, 46877, 46889, 46901, 46919, 46933, 46957, 46993, - 46997, 47017, 47041, 47051, 47057, 47059, 47087, 47093, 47111, 47119, - 47123, 47129, 47137, 47143, 47147, 47149, 47161, 47189, 47207, 47221, - 47237, 47251, 47269, 47279, 47287, 47293, 47297, 47303, 47309, 47317, - 47339, 47351, 47353, 47363, 47381, 47387, 47389, 47407, 47417, 47419, - 47431, 47441, 47459, 47491, 47497, 47501, 47507, 47513, 47521, 47527, - 47533, 47543, 47563, 47569, 47581, 47591, 47599, 47609, 47623, 47629, - 47639, 47653, 47657, 47659, 47681, 47699, 47701, 47711, 47713, 47717, - 47737, 47741, 47743, 47777, 47779, 47791, 47797, 47807, 47809, 47819, - 47837, 47843, 47857, 47869, 47881, 47903, 47911, 47917, 47933, 47939, - 47947, 47951, 47963, 47969, 47977, 47981, 48017, 48023, 48029, 48049, - 48073, 48079, 48091, 48109, 48119, 48121, 48131, 48157, 48163, 48179, - 48187, 48193, 48197, 48221, 48239, 48247, 48259, 48271, 48281, 48299, - 48311, 48313, 48337, 48341, 48353, 48371, 48383, 48397, 48407, 48409, - 48413, 48437, 48449, 48463, 48473, 48479, 48481, 48487, 48491, 48497, - 48523, 48527, 48533, 48539, 48541, 48563, 48571, 48589, 48593, 48611, - 48619, 48623, 48647, 48649, 48661, 48673, 48677, 48679, 48731, 48733, - 48751, 48757, 48761, 48767, 48779, 48781, 48787, 48799, 48809, 48817, - 48821, 48823, 48847, 48857, 48859, 48869, 48871, 48883, 48889, 48907, - 48947, 48953, 48973, 48989, 48991, 49003, 49009, 49019, 49031, 49033, - 49037, 49043, 49057, 49069, 49081, 49103, 49109, 49117, 49121, 49123, - 49139, 49157, 49169, 49171, 49177, 49193, 49199, 49201, 49207, 49211, - 49223, 49253, 49261, 49277, 49279, 49297, 49307, 
49331, 49333, 49339, - 49363, 49367, 49369, 49391, 49393, 49409, 49411, 49417, 49429, 49433, - 49451, 49459, 49463, 49477, 49481, 49499, 49523, 49529, 49531, 49537, - 49547, 49549, 49559, 49597, 49603, 49613, 49627, 49633, 49639, 49663, - 49667, 49669, 49681, 49697, 49711, 49727, 49739, 49741, 49747, 49757, - 49783, 49787, 49789, 49801, 49807, 49811, 49823, 49831, 49843, 49853, - 49871, 49877, 49891, 49919, 49921, 49927, 49937, 49939, 49943, 49957, - 49991, 49993, 49999, 50021, 50023, 50033, 50047, 50051, 50053, 50069, - 50077, 50087, 50093, 50101, 50111, 50119, 50123, 50129, 50131, 50147, - 50153, 50159, 50177, 50207, 50221, 50227, 50231, 50261, 50263, 50273, - 50287, 50291, 50311, 50321, 50329, 50333, 50341, 50359, 50363, 50377, - 50383, 50387, 50411, 50417, 50423, 50441, 50459, 50461, 50497, 50503, - 50513, 50527, 50539, 50543, 50549, 50551, 50581, 50587, 50591, 50593, - 50599, 50627, 50647, 50651, 50671, 50683, 50707, 50723, 50741, 50753, - 50767, 50773, 50777, 50789, 50821, 50833, 50839, 50849, 50857, 50867, - 50873, 50891, 50893, 50909, 50923, 50929, 50951, 50957, 50969, 50971, - 50989, 50993, 51001, 51031, 51043, 51047, 51059, 51061, 51071, 51109, - 51131, 51133, 51137, 51151, 51157, 51169, 51193, 51197, 51199, 51203, - 51217, 51229, 51239, 51241, 51257, 51263, 51283, 51287, 51307, 51329, - 51341, 51343, 51347, 51349, 51361, 51383, 51407, 51413, 51419, 51421, - 51427, 51431, 51437, 51439, 51449, 51461, 51473, 51479, 51481, 51487, - 51503, 51511, 51517, 51521, 51539, 51551, 51563, 51577, 51581, 51593, - 51599, 51607, 51613, 51631, 51637, 51647, 51659, 51673, 51679, 51683, - 51691, 51713, 51719, 51721, 51749, 51767, 51769, 51787, 51797, 51803, - 51817, 51827, 51829, 51839, 51853, 51859, 51869, 51871, 51893, 51899, - 51907, 51913, 51929, 51941, 51949, 51971, 51973, 51977, 51991, 52009, - 52021, 52027, 52051, 52057, 52067, 52069, 52081, 52103, 52121, 52127, - 52147, 52153, 52163, 52177, 52181, 52183, 52189, 52201, 52223, 52237, - 52249, 52253, 52259, 52267, 52289, 52291, 52301, 52313, 52321, 52361, - 52363, 52369, 52379, 52387, 52391, 52433, 52453, 52457, 52489, 52501, - 52511, 52517, 52529, 52541, 52543, 52553, 52561, 52567, 52571, 52579, - 52583, 52609, 52627, 52631, 52639, 52667, 52673, 52691, 52697, 52709, - 52711, 52721, 52727, 52733, 52747, 52757, 52769, 52783, 52807, 52813, - 52817, 52837, 52859, 52861, 52879, 52883, 52889, 52901, 52903, 52919, - 52937, 52951, 52957, 52963, 52967, 52973, 52981, 52999, 53003, 53017, - 53047, 53051, 53069, 53077, 53087, 53089, 53093, 53101, 53113, 53117, - 53129, 53147, 53149, 53161, 53171, 53173, 53189, 53197, 53201, 53231, - 53233, 53239, 53267, 53269, 53279, 53281, 53299, 53309, 53323, 53327, - 53353, 53359, 53377, 53381, 53401, 53407, 53411, 53419, 53437, 53441, - 53453, 53479, 53503, 53507, 53527, 53549, 53551, 53569, 53591, 53593, - 53597, 53609, 53611, 53617, 53623, 53629, 53633, 53639, 53653, 53657, - 53681, 53693, 53699, 53717, 53719, 53731, 53759, 53773, 53777, 53783, - 53791, 53813, 53819, 53831, 53849, 53857, 53861, 53881, 53887, 53891, - 53897, 53899, 53917, 53923, 53927, 53939, 53951, 53959, 53987, 53993, - 54001, 54011, 54013, 54037, 54049, 54059, 54083, 54091, 54101, 54121, - 54133, 54139, 54151, 54163, 54167, 54181, 54193, 54217, 54251, 54269, - 54277, 54287, 54293, 54311, 54319, 54323, 54331, 54347, 54361, 54367, - 54371, 54377, 54401, 54403, 54409, 54413, 54419, 54421, 54437, 54443, - 54449, 54469, 54493, 54497, 54499, 54503, 54517, 54521, 54539, 54541, - 54547, 54559, 54563, 54577, 54581, 54583, 54601, 54617, 54623, 54629, - 
54631, 54647, 54667, 54673, 54679, 54709, 54713, 54721, 54727, 54751, - 54767, 54773, 54779, 54787, 54799, 54829, 54833, 54851, 54869, 54877, - 54881, 54907, 54917, 54919, 54941, 54949, 54959, 54973, 54979, 54983, - 55001, 55009, 55021, 55049, 55051, 55057, 55061, 55073, 55079, 55103, - 55109, 55117, 55127, 55147, 55163, 55171, 55201, 55207, 55213, 55217, - 55219, 55229, 55243, 55249, 55259, 55291, 55313, 55331, 55333, 55337, - 55339, 55343, 55351, 55373, 55381, 55399, 55411, 55439, 55441, 55457, - 55469, 55487, 55501, 55511, 55529, 55541, 55547, 55579, 55589, 55603, - 55609, 55619, 55621, 55631, 55633, 55639, 55661, 55663, 55667, 55673, - 55681, 55691, 55697, 55711, 55717, 55721, 55733, 55763, 55787, 55793, - 55799, 55807, 55813, 55817, 55819, 55823, 55829, 55837, 55843, 55849, - 55871, 55889, 55897, 55901, 55903, 55921, 55927, 55931, 55933, 55949, - 55967, 55987, 55997, 56003, 56009, 56039, 56041, 56053, 56081, 56087, - 56093, 56099, 56101, 56113, 56123, 56131, 56149, 56167, 56171, 56179, - 56197, 56207, 56209, 56237, 56239, 56249, 56263, 56267, 56269, 56299, - 56311, 56333, 56359, 56369, 56377, 56383, 56393, 56401, 56417, 56431, - 56437, 56443, 56453, 56467, 56473, 56477, 56479, 56489, 56501, 56503, - 56509, 56519, 56527, 56531, 56533, 56543, 56569, 56591, 56597, 56599, - 56611, 56629, 56633, 56659, 56663, 56671, 56681, 56687, 56701, 56711, - 56713, 56731, 56737, 56747, 56767, 56773, 56779, 56783, 56807, 56809, - 56813, 56821, 56827, 56843, 56857, 56873, 56891, 56893, 56897, 56909, - 56911, 56921, 56923, 56929, 56941, 56951, 56957, 56963, 56983, 56989, - 56993, 56999, 57037, 57041, 57047, 57059, 57073, 57077, 57089, 57097, - 57107, 57119, 57131, 57139, 57143, 57149, 57163, 57173, 57179, 57191, - 57193, 57203, 57221, 57223, 57241, 57251, 57259, 57269, 57271, 57283, - 57287, 57301, 57329, 57331, 57347, 57349, 57367, 57373, 57383, 57389, - 57397, 57413, 57427, 57457, 57467, 57487, 57493, 57503, 57527, 57529, - 57557, 57559, 57571, 57587, 57593, 57601, 57637, 57641, 57649, 57653, - 57667, 57679, 57689, 57697, 57709, 57713, 57719, 57727, 57731, 57737, - 57751, 57773, 57781, 57787, 57791, 57793, 57803, 57809, 57829, 57839, - 57847, 57853, 57859, 57881, 57899, 57901, 57917, 57923, 57943, 57947, - 57973, 57977, 57991, 58013, 58027, 58031, 58043, 58049, 58057, 58061, - 58067, 58073, 58099, 58109, 58111, 58129, 58147, 58151, 58153, 58169, - 58171, 58189, 58193, 58199, 58207, 58211, 58217, 58229, 58231, 58237, - 58243, 58271, 58309, 58313, 58321, 58337, 58363, 58367, 58369, 58379, - 58391, 58393, 58403, 58411, 58417, 58427, 58439, 58441, 58451, 58453, - 58477, 58481, 58511, 58537, 58543, 58549, 58567, 58573, 58579, 58601, - 58603, 58613, 58631, 58657, 58661, 58679, 58687, 58693, 58699, 58711, - 58727, 58733, 58741, 58757, 58763, 58771, 58787, 58789, 58831, 58889, - 58897, 58901, 58907, 58909, 58913, 58921, 58937, 58943, 58963, 58967, - 58979, 58991, 58997, 59009, 59011, 59021, 59023, 59029, 59051, 59053, - 59063, 59069, 59077, 59083, 59093, 59107, 59113, 59119, 59123, 59141, - 59149, 59159, 59167, 59183, 59197, 59207, 59209, 59219, 59221, 59233, - 59239, 59243, 59263, 59273, 59281, 59333, 59341, 59351, 59357, 59359, - 59369, 59377, 59387, 59393, 59399, 59407, 59417, 59419, 59441, 59443, - 59447, 59453, 59467, 59471, 59473, 59497, 59509, 59513, 59539, 59557, - 59561, 59567, 59581, 59611, 59617, 59621, 59627, 59629, 59651, 59659, - 59663, 59669, 59671, 59693, 59699, 59707, 59723, 59729, 59743, 59747, - 59753, 59771, 59779, 59791, 59797, 59809, 59833, 59863, 59879, 59887, - 59921, 59929, 59951, 
59957, 59971, 59981, 59999, 60013, 60017, 60029, - 60037, 60041, 60077, 60083, 60089, 60091, 60101, 60103, 60107, 60127, - 60133, 60139, 60149, 60161, 60167, 60169, 60209, 60217, 60223, 60251, - 60257, 60259, 60271, 60289, 60293, 60317, 60331, 60337, 60343, 60353, - 60373, 60383, 60397, 60413, 60427, 60443, 60449, 60457, 60493, 60497, - 60509, 60521, 60527, 60539, 60589, 60601, 60607, 60611, 60617, 60623, - 60631, 60637, 60647, 60649, 60659, 60661, 60679, 60689, 60703, 60719, - 60727, 60733, 60737, 60757, 60761, 60763, 60773, 60779, 60793, 60811, - 60821, 60859, 60869, 60887, 60889, 60899, 60901, 60913, 60917, 60919, - 60923, 60937, 60943, 60953, 60961, 61001, 61007, 61027, 61031, 61043, - 61051, 61057, 61091, 61099, 61121, 61129, 61141, 61151, 61153, 61169, - 61211, 61223, 61231, 61253, 61261, 61283, 61291, 61297, 61331, 61333, - 61339, 61343, 61357, 61363, 61379, 61381, 61403, 61409, 61417, 61441, - 61463, 61469, 61471, 61483, 61487, 61493, 61507, 61511, 61519, 61543, - 61547, 61553, 61559, 61561, 61583, 61603, 61609, 61613, 61627, 61631, - 61637, 61643, 61651, 61657, 61667, 61673, 61681, 61687, 61703, 61717, - 61723, 61729, 61751, 61757, 61781, 61813, 61819, 61837, 61843, 61861, - 61871, 61879, 61909, 61927, 61933, 61949, 61961, 61967, 61979, 61981, - 61987, 61991, 62003, 62011, 62017, 62039, 62047, 62053, 62057, 62071, - 62081, 62099, 62119, 62129, 62131, 62137, 62141, 62143, 62171, 62189, - 62191, 62201, 62207, 62213, 62219, 62233, 62273, 62297, 62299, 62303, - 62311, 62323, 62327, 62347, 62351, 62383, 62401, 62417, 62423, 62459, - 62467, 62473, 62477, 62483, 62497, 62501, 62507, 62533, 62539, 62549, - 62563, 62581, 62591, 62597, 62603, 62617, 62627, 62633, 62639, 62653, - 62659, 62683, 62687, 62701, 62723, 62731, 62743, 62753, 62761, 62773, - 62791, 62801, 62819, 62827, 62851, 62861, 62869, 62873, 62897, 62903, - 62921, 62927, 62929, 62939, 62969, 62971, 62981, 62983, 62987, 62989, - 63029, 63031, 63059, 63067, 63073, 63079, 63097, 63103, 63113, 63127, - 63131, 63149, 63179, 63197, 63199, 63211, 63241, 63247, 63277, 63281, - 63299, 63311, 63313, 63317, 63331, 63337, 63347, 63353, 63361, 63367, - 63377, 63389, 63391, 63397, 63409, 63419, 63421, 63439, 63443, 63463, - 63467, 63473, 63487, 63493, 63499, 63521, 63527, 63533, 63541, 63559, - 63577, 63587, 63589, 63599, 63601, 63607, 63611, 63617, 63629, 63647, - 63649, 63659, 63667, 63671, 63689, 63691, 63697, 63703, 63709, 63719, - 63727, 63737, 63743, 63761, 63773, 63781, 63793, 63799, 63803, 63809, - 63823, 63839, 63841, 63853, 63857, 63863, 63901, 63907, 63913, 63929, - 63949, 63977, 63997, 64007, 64013, 64019, 64033, 64037, 64063, 64067, - 64081, 64091, 64109, 64123, 64151, 64153, 64157, 64171, 64187, 64189, - 64217, 64223, 64231, 64237, 64271, 64279, 64283, 64301, 64303, 64319, - 64327, 64333, 64373, 64381, 64399, 64403, 64433, 64439, 64451, 64453, - 64483, 64489, 64499, 64513, 64553, 64567, 64577, 64579, 64591, 64601, - 64609, 64613, 64621, 64627, 64633, 64661, 64663, 64667, 64679, 64693, - 64709, 64717, 64747, 64763, 64781, 64783, 64793, 64811, 64817, 64849, - 64853, 64871, 64877, 64879, 64891, 64901, 64919, 64921, 64927, 64937, - 64951, 64969, 64997, 65003, 65011, 65027, 65029, 65033, 65053, 65063, - 65071, 65089, 65099, 65101, 65111, 65119, 65123, 65129, 65141, 65147, - 65167, 65171, 65173, 65179, 65183, 65203, 65213, 65239, 65257, 65267, - 65269, 65287, 65293, 65309, 65323, 65327, 65353, 65357, 65371, 65381, - 65393, 65407, 65413, 65419, 65423, 65437, 65447, 65449, 65479, 65497, - 65519, 65521, 65537, 65539, 65543, 65551, 
65557, 65563, 65579, 65581, - 65587, 65599, 65609, 65617, 65629, 65633, 65647, 65651, 65657, 65677, - 65687, 65699, 65701, 65707, 65713, 65717, 65719, 65729, 65731, 65761, - 65777, 65789, 65809, 65827, 65831, 65837, 65839, 65843, 65851, 65867, - 65881, 65899, 65921, 65927, 65929, 65951, 65957, 65963, 65981, 65983, - 65993, 66029, 66037, 66041, 66047, 66067, 66071, 66083, 66089, 66103, - 66107, 66109, 66137, 66161, 66169, 66173, 66179, 66191, 66221, 66239, - 66271, 66293, 66301, 66337, 66343, 66347, 66359, 66361, 66373, 66377, - 66383, 66403, 66413, 66431, 66449, 66457, 66463, 66467, 66491, 66499, - 66509, 66523, 66529, 66533, 66541, 66553, 66569, 66571, 66587, 66593, - 66601, 66617, 66629, 66643, 66653, 66683, 66697, 66701, 66713, 66721, - 66733, 66739, 66749, 66751, 66763, 66791, 66797, 66809, 66821, 66841, - 66851, 66853, 66863, 66877, 66883, 66889, 66919, 66923, 66931, 66943, - 66947, 66949, 66959, 66973, 66977, 67003, 67021, 67033, 67043, 67049, - 67057, 67061, 67073, 67079, 67103, 67121, 67129, 67139, 67141, 67153, - 67157, 67169, 67181, 67187, 67189, 67211, 67213, 67217, 67219, 67231, - 67247, 67261, 67271, 67273, 67289, 67307, 67339, 67343, 67349, 67369, - 67391, 67399, 67409, 67411, 67421, 67427, 67429, 67433, 67447, 67453, - 67477, 67481, 67489, 67493, 67499, 67511, 67523, 67531, 67537, 67547, - 67559, 67567, 67577, 67579, 67589, 67601, 67607, 67619, 67631, 67651, - 67679, 67699, 67709, 67723, 67733, 67741, 67751, 67757, 67759, 67763, - 67777, 67783, 67789, 67801, 67807, 67819, 67829, 67843, 67853, 67867, - 67883, 67891, 67901, 67927, 67931, 67933, 67939, 67943, 67957, 67961, - 67967, 67979, 67987, 67993, 68023, 68041, 68053, 68059, 68071, 68087, - 68099, 68111, 68113, 68141, 68147, 68161, 68171, 68207, 68209, 68213, - 68219, 68227, 68239, 68261, 68279, 68281, 68311, 68329, 68351, 68371, - 68389, 68399, 68437, 68443, 68447, 68449, 68473, 68477, 68483, 68489, - 68491, 68501, 68507, 68521, 68531, 68539, 68543, 68567, 68581, 68597, - 68611, 68633, 68639, 68659, 68669, 68683, 68687, 68699, 68711, 68713, - 68729, 68737, 68743, 68749, 68767, 68771, 68777, 68791, 68813, 68819, - 68821, 68863, 68879, 68881, 68891, 68897, 68899, 68903, 68909, 68917, - 68927, 68947, 68963, 68993, 69001, 69011, 69019, 69029, 69031, 69061, - 69067, 69073, 69109, 69119, 69127, 69143, 69149, 69151, 69163, 69191, - 69193, 69197, 69203, 69221, 69233, 69239, 69247, 69257, 69259, 69263, - 69313, 69317, 69337, 69341, 69371, 69379, 69383, 69389, 69401, 69403, - 69427, 69431, 69439, 69457, 69463, 69467, 69473, 69481, 69491, 69493, - 69497, 69499, 69539, 69557, 69593, 69623, 69653, 69661, 69677, 69691, - 69697, 69709, 69737, 69739, 69761, 69763, 69767, 69779, 69809, 69821, - 69827, 69829, 69833, 69847, 69857, 69859, 69877, 69899, 69911, 69929, - 69931, 69941, 69959, 69991, 69997, 70001, 70003, 70009, 70019, 70039, - 70051, 70061, 70067, 70079, 70099, 70111, 70117, 70121, 70123, 70139, - 70141, 70157, 70163, 70177, 70181, 70183, 70199, 70201, 70207, 70223, - 70229, 70237, 70241, 70249, 70271, 70289, 70297, 70309, 70313, 70321, - 70327, 70351, 70373, 70379, 70381, 70393, 70423, 70429, 70439, 70451, - 70457, 70459, 70481, 70487, 70489, 70501, 70507, 70529, 70537, 70549, - 70571, 70573, 70583, 70589, 70607, 70619, 70621, 70627, 70639, 70657, - 70663, 70667, 70687, 70709, 70717, 70729, 70753, 70769, 70783, 70793, - 70823, 70841, 70843, 70849, 70853, 70867, 70877, 70879, 70891, 70901, - 70913, 70919, 70921, 70937, 70949, 70951, 70957, 70969, 70979, 70981, - 70991, 70997, 70999, 71011, 71023, 71039, 71059, 71069, 71081, 
71089, - 71119, 71129, 71143, 71147, 71153, 71161, 71167, 71171, 71191, 71209, - 71233, 71237, 71249, 71257, 71261, 71263, 71287, 71293, 71317, 71327, - 71329, 71333, 71339, 71341, 71347, 71353, 71359, 71363, 71387, 71389, - 71399, 71411, 71413, 71419, 71429, 71437, 71443, 71453, 71471, 71473, - 71479, 71483, 71503, 71527, 71537, 71549, 71551, 71563, 71569, 71593, - 71597, 71633, 71647, 71663, 71671, 71693, 71699, 71707, 71711, 71713, - 71719, 71741, 71761, 71777, 71789, 71807, 71809, 71821, 71837, 71843, - 71849, 71861, 71867, 71879, 71881, 71887, 71899, 71909, 71917, 71933, - 71941, 71947, 71963, 71971, 71983, 71987, 71993, 71999, 72019, 72031, - 72043, 72047, 72053, 72073, 72077, 72089, 72091, 72101, 72103, 72109, - 72139, 72161, 72167, 72169, 72173, 72211, 72221, 72223, 72227, 72229, - 72251, 72253, 72269, 72271, 72277, 72287, 72307, 72313, 72337, 72341, - 72353, 72367, 72379, 72383, 72421, 72431, 72461, 72467, 72469, 72481, - 72493, 72497, 72503, 72533, 72547, 72551, 72559, 72577, 72613, 72617, - 72623, 72643, 72647, 72649, 72661, 72671, 72673, 72679, 72689, 72701, - 72707, 72719, 72727, 72733, 72739, 72763, 72767, 72797, 72817, 72823, - 72859, 72869, 72871, 72883, 72889, 72893, 72901, 72907, 72911, 72923, - 72931, 72937, 72949, 72953, 72959, 72973, 72977, 72997, 73009, 73013, - 73019, 73037, 73039, 73043, 73061, 73063, 73079, 73091, 73121, 73127, - 73133, 73141, 73181, 73189, 73237, 73243, 73259, 73277, 73291, 73303, - 73309, 73327, 73331, 73351, 73361, 73363, 73369, 73379, 73387, 73417, - 73421, 73433, 73453, 73459, 73471, 73477, 73483, 73517, 73523, 73529, - 73547, 73553, 73561, 73571, 73583, 73589, 73597, 73607, 73609, 73613, - 73637, 73643, 73651, 73673, 73679, 73681, 73693, 73699, 73709, 73721, - 73727, 73751, 73757, 73771, 73783, 73819, 73823, 73847, 73849, 73859, - 73867, 73877, 73883, 73897, 73907, 73939, 73943, 73951, 73961, 73973, - 73999, 74017, 74021, 74027, 74047, 74051, 74071, 74077, 74093, 74099, - 74101, 74131, 74143, 74149, 74159, 74161, 74167, 74177, 74189, 74197, - 74201, 74203, 74209, 74219, 74231, 74257, 74279, 74287, 74293, 74297, - 74311, 74317, 74323, 74353, 74357, 74363, 74377, 74381, 74383, 74411, - 74413, 74419, 74441, 74449, 74453, 74471, 74489, 74507, 74509, 74521, - 74527, 74531, 74551, 74561, 74567, 74573, 74587, 74597, 74609, 74611, - 74623, 74653, 74687, 74699, 74707, 74713, 74717, 74719, 74729, 74731, - 74747, 74759, 74761, 74771, 74779, 74797, 74821, 74827, 74831, 74843, - 74857, 74861, 74869, 74873, 74887, 74891, 74897, 74903, 74923, 74929, - 74933, 74941, 74959, 75011, 75013, 75017, 75029, 75037, 75041, 75079, - 75083, 75109, 75133, 75149, 75161, 75167, 75169, 75181, 75193, 75209, - 75211, 75217, 75223, 75227, 75239, 75253, 75269, 75277, 75289, 75307, - 75323, 75329, 75337, 75347, 75353, 75367, 75377, 75389, 75391, 75401, - 75403, 75407, 75431, 75437, 75479, 75503, 75511, 75521, 75527, 75533, - 75539, 75541, 75553, 75557, 75571, 75577, 75583, 75611, 75617, 75619, - 75629, 75641, 75653, 75659, 75679, 75683, 75689, 75703, 75707, 75709, - 75721, 75731, 75743, 75767, 75773, 75781, 75787, 75793, 75797, 75821, - 75833, 75853, 75869, 75883, 75913, 75931, 75937, 75941, 75967, 75979, - 75983, 75989, 75991, 75997, 76001, 76003, 76031, 76039, 76079, 76081, - 76091, 76099, 76103, 76123, 76129, 76147, 76157, 76159, 76163, 76207, - 76213, 76231, 76243, 76249, 76253, 76259, 76261, 76283, 76289, 76303, - 76333, 76343, 76367, 76369, 76379, 76387, 76403, 76421, 76423, 76441, - 76463, 76471, 76481, 76487, 76493, 76507, 76511, 76519, 76537, 76541, - 76543, 76561, 
76579, 76597, 76603, 76607, 76631, 76649, 76651, 76667, - 76673, 76679, 76697, 76717, 76733, 76753, 76757, 76771, 76777, 76781, - 76801, 76819, 76829, 76831, 76837, 76847, 76871, 76873, 76883, 76907, - 76913, 76919, 76943, 76949, 76961, 76963, 76991, 77003, 77017, 77023, - 77029, 77041, 77047, 77069, 77081, 77093, 77101, 77137, 77141, 77153, - 77167, 77171, 77191, 77201, 77213, 77237, 77239, 77243, 77249, 77261, - 77263, 77267, 77269, 77279, 77291, 77317, 77323, 77339, 77347, 77351, - 77359, 77369, 77377, 77383, 77417, 77419, 77431, 77447, 77471, 77477, - 77479, 77489, 77491, 77509, 77513, 77521, 77527, 77543, 77549, 77551, - 77557, 77563, 77569, 77573, 77587, 77591, 77611, 77617, 77621, 77641, - 77647, 77659, 77681, 77687, 77689, 77699, 77711, 77713, 77719, 77723, - 77731, 77743, 77747, 77761, 77773, 77783, 77797, 77801, 77813, 77839, - 77849, 77863, 77867, 77893, 77899, 77929, 77933, 77951, 77969, 77977, - 77983, 77999, 78007, 78017, 78031, 78041, 78049, 78059, 78079, 78101, - 78121, 78137, 78139, 78157, 78163, 78167, 78173, 78179, 78191, 78193, - 78203, 78229, 78233, 78241, 78259, 78277, 78283, 78301, 78307, 78311, - 78317, 78341, 78347, 78367, 78401, 78427, 78437, 78439, 78467, 78479, - 78487, 78497, 78509, 78511, 78517, 78539, 78541, 78553, 78569, 78571, - 78577, 78583, 78593, 78607, 78623, 78643, 78649, 78653, 78691, 78697, - 78707, 78713, 78721, 78737, 78779, 78781, 78787, 78791, 78797, 78803, - 78809, 78823, 78839, 78853, 78857, 78877, 78887, 78889, 78893, 78901, - 78919, 78929, 78941, 78977, 78979, 78989, 79031, 79039, 79043, 79063, - 79087, 79103, 79111, 79133, 79139, 79147, 79151, 79153, 79159, 79181, - 79187, 79193, 79201, 79229, 79231, 79241, 79259, 79273, 79279, 79283, - 79301, 79309, 79319, 79333, 79337, 79349, 79357, 79367, 79379, 79393, - 79397, 79399, 79411, 79423, 79427, 79433, 79451, 79481, 79493, 79531, - 79537, 79549, 79559, 79561, 79579, 79589, 79601, 79609, 79613, 79621, - 79627, 79631, 79633, 79657, 79669, 79687, 79691, 79693, 79697, 79699, - 79757, 79769, 79777, 79801, 79811, 79813, 79817, 79823, 79829, 79841, - 79843, 79847, 79861, 79867, 79873, 79889, 79901, 79903, 79907, 79939, - 79943, 79967, 79973, 79979, 79987, 79997, 79999, 80021, 80039, 80051, - 80071, 80077, 80107, 80111, 80141, 80147, 80149, 80153, 80167, 80173, - 80177, 80191, 80207, 80209, 80221, 80231, 80233, 80239, 80251, 80263, - 80273, 80279, 80287, 80309, 80317, 80329, 80341, 80347, 80363, 80369, - 80387, 80407, 80429, 80447, 80449, 80471, 80473, 80489, 80491, 80513, - 80527, 80537, 80557, 80567, 80599, 80603, 80611, 80621, 80627, 80629, - 80651, 80657, 80669, 80671, 80677, 80681, 80683, 80687, 80701, 80713, - 80737, 80747, 80749, 80761, 80777, 80779, 80783, 80789, 80803, 80809, - 80819, 80831, 80833, 80849, 80863, 80897, 80909, 80911, 80917, 80923, - 80929, 80933, 80953, 80963, 80989, 81001, 81013, 81017, 81019, 81023, - 81031, 81041, 81043, 81047, 81049, 81071, 81077, 81083, 81097, 81101, - 81119, 81131, 81157, 81163, 81173, 81181, 81197, 81199, 81203, 81223, - 81233, 81239, 81281, 81283, 81293, 81299, 81307, 81331, 81343, 81349, - 81353, 81359, 81371, 81373, 81401, 81409, 81421, 81439, 81457, 81463, - 81509, 81517, 81527, 81533, 81547, 81551, 81553, 81559, 81563, 81569, - 81611, 81619, 81629, 81637, 81647, 81649, 81667, 81671, 81677, 81689, - 81701, 81703, 81707, 81727, 81737, 81749, 81761, 81769, 81773, 81799, - 81817, 81839, 81847, 81853, 81869, 81883, 81899, 81901, 81919, 81929, - 81931, 81937, 81943, 81953, 81967, 81971, 81973, 82003, 82007, 82009, - 82013, 82021, 82031, 82037, 82039, 
82051, 82067, 82073, 82129, 82139, - 82141, 82153, 82163, 82171, 82183, 82189, 82193, 82207, 82217, 82219, - 82223, 82231, 82237, 82241, 82261, 82267, 82279, 82301, 82307, 82339, - 82349, 82351, 82361, 82373, 82387, 82393, 82421, 82457, 82463, 82469, - 82471, 82483, 82487, 82493, 82499, 82507, 82529, 82531, 82549, 82559, - 82561, 82567, 82571, 82591, 82601, 82609, 82613, 82619, 82633, 82651, - 82657, 82699, 82721, 82723, 82727, 82729, 82757, 82759, 82763, 82781, - 82787, 82793, 82799, 82811, 82813, 82837, 82847, 82883, 82889, 82891, - 82903, 82913, 82939, 82963, 82981, 82997, 83003, 83009, 83023, 83047, - 83059, 83063, 83071, 83077, 83089, 83093, 83101, 83117, 83137, 83177, - 83203, 83207, 83219, 83221, 83227, 83231, 83233, 83243, 83257, 83267, - 83269, 83273, 83299, 83311, 83339, 83341, 83357, 83383, 83389, 83399, - 83401, 83407, 83417, 83423, 83431, 83437, 83443, 83449, 83459, 83471, - 83477, 83497, 83537, 83557, 83561, 83563, 83579, 83591, 83597, 83609, - 83617, 83621, 83639, 83641, 83653, 83663, 83689, 83701, 83717, 83719, - 83737, 83761, 83773, 83777, 83791, 83813, 83833, 83843, 83857, 83869, - 83873, 83891, 83903, 83911, 83921, 83933, 83939, 83969, 83983, 83987, - 84011, 84017, 84047, 84053, 84059, 84061, 84067, 84089, 84121, 84127, - 84131, 84137, 84143, 84163, 84179, 84181, 84191, 84199, 84211, 84221, - 84223, 84229, 84239, 84247, 84263, 84299, 84307, 84313, 84317, 84319, - 84347, 84349, 84377, 84389, 84391, 84401, 84407, 84421, 84431, 84437, - 84443, 84449, 84457, 84463, 84467, 84481, 84499, 84503, 84509, 84521, - 84523, 84533, 84551, 84559, 84589, 84629, 84631, 84649, 84653, 84659, - 84673, 84691, 84697, 84701, 84713, 84719, 84731, 84737, 84751, 84761, - 84787, 84793, 84809, 84811, 84827, 84857, 84859, 84869, 84871, 84913, - 84919, 84947, 84961, 84967, 84977, 84979, 84991, 85009, 85021, 85027, - 85037, 85049, 85061, 85081, 85087, 85091, 85093, 85103, 85109, 85121, - 85133, 85147, 85159, 85193, 85199, 85201, 85213, 85223, 85229, 85237, - 85243, 85247, 85259, 85297, 85303, 85313, 85331, 85333, 85361, 85363, - 85369, 85381, 85411, 85427, 85429, 85439, 85447, 85451, 85453, 85469, - 85487, 85513, 85517, 85523, 85531, 85549, 85571, 85577, 85597, 85601, - 85607, 85619, 85621, 85627, 85639, 85643, 85661, 85667, 85669, 85691, - 85703, 85711, 85717, 85733, 85751, 85781, 85793, 85817, 85819, 85829, - 85831, 85837, 85843, 85847, 85853, 85889, 85903, 85909, 85931, 85933, - 85991, 85999, 86011, 86017, 86027, 86029, 86069, 86077, 86083, 86111, - 86113, 86117, 86131, 86137, 86143, 86161, 86171, 86179, 86183, 86197, - 86201, 86209, 86239, 86243, 86249, 86257, 86263, 86269, 86287, 86291, - 86293, 86297, 86311, 86323, 86341, 86351, 86353, 86357, 86369, 86371, - 86381, 86389, 86399, 86413, 86423, 86441, 86453, 86461, 86467, 86477, - 86491, 86501, 86509, 86531, 86533, 86539, 86561, 86573, 86579, 86587, - 86599, 86627, 86629, 86677, 86689, 86693, 86711, 86719, 86729, 86743, - 86753, 86767, 86771, 86783, 86813, 86837, 86843, 86851, 86857, 86861, - 86869, 86923, 86927, 86929, 86939, 86951, 86959, 86969, 86981, 86993, - 87011, 87013, 87037, 87041, 87049, 87071, 87083, 87103, 87107, 87119, - 87121, 87133, 87149, 87151, 87179, 87181, 87187, 87211, 87221, 87223, - 87251, 87253, 87257, 87277, 87281, 87293, 87299, 87313, 87317, 87323, - 87337, 87359, 87383, 87403, 87407, 87421, 87427, 87433, 87443, 87473, - 87481, 87491, 87509, 87511, 87517, 87523, 87539, 87541, 87547, 87553, - 87557, 87559, 87583, 87587, 87589, 87613, 87623, 87629, 87631, 87641, - 87643, 87649, 87671, 87679, 87683, 87691, 87697, 87701, 
87719, 87721, - 87739, 87743, 87751, 87767, 87793, 87797, 87803, 87811, 87833, 87853, - 87869, 87877, 87881, 87887, 87911, 87917, 87931, 87943, 87959, 87961, - 87973, 87977, 87991, 88001, 88003, 88007, 88019, 88037, 88069, 88079, - 88093, 88117, 88129, 88169, 88177, 88211, 88223, 88237, 88241, 88259, - 88261, 88289, 88301, 88321, 88327, 88337, 88339, 88379, 88397, 88411, - 88423, 88427, 88463, 88469, 88471, 88493, 88499, 88513, 88523, 88547, - 88589, 88591, 88607, 88609, 88643, 88651, 88657, 88661, 88663, 88667, - 88681, 88721, 88729, 88741, 88747, 88771, 88789, 88793, 88799, 88801, - 88807, 88811, 88813, 88817, 88819, 88843, 88853, 88861, 88867, 88873, - 88883, 88897, 88903, 88919, 88937, 88951, 88969, 88993, 88997, 89003, - 89009, 89017, 89021, 89041, 89051, 89057, 89069, 89071, 89083, 89087, - 89101, 89107, 89113, 89119, 89123, 89137, 89153, 89189, 89203, 89209, - 89213, 89227, 89231, 89237, 89261, 89269, 89273, 89293, 89303, 89317, - 89329, 89363, 89371, 89381, 89387, 89393, 89399, 89413, 89417, 89431, - 89443, 89449, 89459, 89477, 89491, 89501, 89513, 89519, 89521, 89527, - 89533, 89561, 89563, 89567, 89591, 89597, 89599, 89603, 89611, 89627, - 89633, 89653, 89657, 89659, 89669, 89671, 89681, 89689, 89753, 89759, - 89767, 89779, 89783, 89797, 89809, 89819, 89821, 89833, 89839, 89849, - 89867, 89891, 89897, 89899, 89909, 89917, 89923, 89939, 89959, 89963, - 89977, 89983, 89989, 90001, 90007, 90011, 90017, 90019, 90023, 90031, - 90053, 90059, 90067, 90071, 90073, 90089, 90107, 90121, 90127, 90149, - 90163, 90173, 90187, 90191, 90197, 90199, 90203, 90217, 90227, 90239, - 90247, 90263, 90271, 90281, 90289, 90313, 90353, 90359, 90371, 90373, - 90379, 90397, 90401, 90403, 90407, 90437, 90439, 90469, 90473, 90481, - 90499, 90511, 90523, 90527, 90529, 90533, 90547, 90583, 90599, 90617, - 90619, 90631, 90641, 90647, 90659, 90677, 90679, 90697, 90703, 90709, - 90731, 90749, 90787, 90793, 90803, 90821, 90823, 90833, 90841, 90847, - 90863, 90887, 90901, 90907, 90911, 90917, 90931, 90947, 90971, 90977, - 90989, 90997, 91009, 91019, 91033, 91079, 91081, 91097, 91099, 91121, - 91127, 91129, 91139, 91141, 91151, 91153, 91159, 91163, 91183, 91193, - 91199, 91229, 91237, 91243, 91249, 91253, 91283, 91291, 91297, 91303, - 91309, 91331, 91367, 91369, 91373, 91381, 91387, 91393, 91397, 91411, - 91423, 91433, 91453, 91457, 91459, 91463, 91493, 91499, 91513, 91529, - 91541, 91571, 91573, 91577, 91583, 91591, 91621, 91631, 91639, 91673, - 91691, 91703, 91711, 91733, 91753, 91757, 91771, 91781, 91801, 91807, - 91811, 91813, 91823, 91837, 91841, 91867, 91873, 91909, 91921, 91939, - 91943, 91951, 91957, 91961, 91967, 91969, 91997, 92003, 92009, 92033, - 92041, 92051, 92077, 92083, 92107, 92111, 92119, 92143, 92153, 92173, - 92177, 92179, 92189, 92203, 92219, 92221, 92227, 92233, 92237, 92243, - 92251, 92269, 92297, 92311, 92317, 92333, 92347, 92353, 92357, 92363, - 92369, 92377, 92381, 92383, 92387, 92399, 92401, 92413, 92419, 92431, - 92459, 92461, 92467, 92479, 92489, 92503, 92507, 92551, 92557, 92567, - 92569, 92581, 92593, 92623, 92627, 92639, 92641, 92647, 92657, 92669, - 92671, 92681, 92683, 92693, 92699, 92707, 92717, 92723, 92737, 92753, - 92761, 92767, 92779, 92789, 92791, 92801, 92809, 92821, 92831, 92849, - 92857, 92861, 92863, 92867, 92893, 92899, 92921, 92927, 92941, 92951, - 92957, 92959, 92987, 92993, 93001, 93047, 93053, 93059, 93077, 93083, - 93089, 93097, 93103, 93113, 93131, 93133, 93139, 93151, 93169, 93179, - 93187, 93199, 93229, 93239, 93241, 93251, 93253, 93257, 93263, 93281, - 93283, 
93287, 93307, 93319, 93323, 93329, 93337, 93371, 93377, 93383, - 93407, 93419, 93427, 93463, 93479, 93481, 93487, 93491, 93493, 93497, - 93503, 93523, 93529, 93553, 93557, 93559, 93563, 93581, 93601, 93607, - 93629, 93637, 93683, 93701, 93703, 93719, 93739, 93761, 93763, 93787, - 93809, 93811, 93827, 93851, 93871, 93887, 93889, 93893, 93901, 93911, - 93913, 93923, 93937, 93941, 93949, 93967, 93971, 93979, 93983, 93997, - 94007, 94009, 94033, 94049, 94057, 94063, 94079, 94099, 94109, 94111, - 94117, 94121, 94151, 94153, 94169, 94201, 94207, 94219, 94229, 94253, - 94261, 94273, 94291, 94307, 94309, 94321, 94327, 94331, 94343, 94349, - 94351, 94379, 94397, 94399, 94421, 94427, 94433, 94439, 94441, 94447, - 94463, 94477, 94483, 94513, 94529, 94531, 94541, 94543, 94547, 94559, - 94561, 94573, 94583, 94597, 94603, 94613, 94621, 94649, 94651, 94687, - 94693, 94709, 94723, 94727, 94747, 94771, 94777, 94781, 94789, 94793, - 94811, 94819, 94823, 94837, 94841, 94847, 94849, 94873, 94889, 94903, - 94907, 94933, 94949, 94951, 94961, 94993, 94999, 95003, 95009, 95021, - 95027, 95063, 95071, 95083, 95087, 95089, 95093, 95101, 95107, 95111, - 95131, 95143, 95153, 95177, 95189, 95191, 95203, 95213, 95219, 95231, - 95233, 95239, 95257, 95261, 95267, 95273, 95279, 95287, 95311, 95317, - 95327, 95339, 95369, 95383, 95393, 95401, 95413, 95419, 95429, 95441, - 95443, 95461, 95467, 95471, 95479, 95483, 95507, 95527, 95531, 95539, - 95549, 95561, 95569, 95581, 95597, 95603, 95617, 95621, 95629, 95633, - 95651, 95701, 95707, 95713, 95717, 95723, 95731, 95737, 95747, 95773, - 95783, 95789, 95791, 95801, 95803, 95813, 95819, 95857, 95869, 95873, - 95881, 95891, 95911, 95917, 95923, 95929, 95947, 95957, 95959, 95971, - 95987, 95989, 96001, 96013, 96017, 96043, 96053, 96059, 96079, 96097, - 96137, 96149, 96157, 96167, 96179, 96181, 96199, 96211, 96221, 96223, - 96233, 96259, 96263, 96269, 96281, 96289, 96293, 96323, 96329, 96331, - 96337, 96353, 96377, 96401, 96419, 96431, 96443, 96451, 96457, 96461, - 96469, 96479, 96487, 96493, 96497, 96517, 96527, 96553, 96557, 96581, - 96587, 96589, 96601, 96643, 96661, 96667, 96671, 96697, 96703, 96731, - 96737, 96739, 96749, 96757, 96763, 96769, 96779, 96787, 96797, 96799, - 96821, 96823, 96827, 96847, 96851, 96857, 96893, 96907, 96911, 96931, - 96953, 96959, 96973, 96979, 96989, 96997, 97001, 97003, 97007, 97021, - 97039, 97073, 97081, 97103, 97117, 97127, 97151, 97157, 97159, 97169, - 97171, 97177, 97187, 97213, 97231, 97241, 97259, 97283, 97301, 97303, - 97327, 97367, 97369, 97373, 97379, 97381, 97387, 97397, 97423, 97429, - 97441, 97453, 97459, 97463, 97499, 97501, 97511, 97523, 97547, 97549, - 97553, 97561, 97571, 97577, 97579, 97583, 97607, 97609, 97613, 97649, - 97651, 97673, 97687, 97711, 97729, 97771, 97777, 97787, 97789, 97813, - 97829, 97841, 97843, 97847, 97849, 97859, 97861, 97871, 97879, 97883, - 97919, 97927, 97931, 97943, 97961, 97967, 97973, 97987, 98009, 98011, - 98017, 98041, 98047, 98057, 98081, 98101, 98123, 98129, 98143, 98179, - 98207, 98213, 98221, 98227, 98251, 98257, 98269, 98297, 98299, 98317, - 98321, 98323, 98327, 98347, 98369, 98377, 98387, 98389, 98407, 98411, - 98419, 98429, 98443, 98453, 98459, 98467, 98473, 98479, 98491, 98507, - 98519, 98533, 98543, 98561, 98563, 98573, 98597, 98621, 98627, 98639, - 98641, 98663, 98669, 98689, 98711, 98713, 98717, 98729, 98731, 98737, - 98773, 98779, 98801, 98807, 98809, 98837, 98849, 98867, 98869, 98873, - 98887, 98893, 98897, 98899, 98909, 98911, 98927, 98929, 98939, 98947, - 98953, 98963, 98981, 98993, 
98999, 99013, 99017, 99023, 99041, 99053, - 99079, 99083, 99089, 99103, 99109, 99119, 99131, 99133, 99137, 99139, - 99149, 99173, 99181, 99191, 99223, 99233, 99241, 99251, 99257, 99259, - 99277, 99289, 99317, 99347, 99349, 99367, 99371, 99377, 99391, 99397, - 99401, 99409, 99431, 99439, 99469, 99487, 99497, 99523, 99527, 99529, - 99551, 99559, 99563, 99571, 99577, 99581, 99607, 99611, 99623, 99643, - 99661, 99667, 99679, 99689, 99707, 99709, 99713, 99719, 99721, 99733, - 99761, 99767, 99787, 99793, 99809, 99817, 99823, 99829, 99833, 99839, - 99859, 99871, 99877, 99881, 99901, 99907, 99923, 99929, 99961, 99971, - 99989, 99991, 100003, 100019, 100043, 100049, 100057, 100069, 100103, 100109, -100129, 100151, 100153, 100169, 100183, 100189, 100193, 100207, 100213, 100237, -100267, 100271, 100279, 100291, 100297, 100313, 100333, 100343, 100357, 100361, -100363, 100379, 100391, 100393, 100403, 100411, 100417, 100447, 100459, 100469, -100483, 100493, 100501, 100511, 100517, 100519, 100523, 100537, 100547, 100549, -100559, 100591, 100609, 100613, 100621, 100649, 100669, 100673, 100693, 100699, -100703, 100733, 100741, 100747, 100769, 100787, 100799, 100801, 100811, 100823, -100829, 100847, 100853, 100907, 100913, 100927, 100931, 100937, 100943, 100957, -100981, 100987, 100999, 101009, 101021, 101027, 101051, 101063, 101081, 101089, -101107, 101111, 101113, 101117, 101119, 101141, 101149, 101159, 101161, 101173, -101183, 101197, 101203, 101207, 101209, 101221, 101267, 101273, 101279, 101281, -101287, 101293, 101323, 101333, 101341, 101347, 101359, 101363, 101377, 101383, -101399, 101411, 101419, 101429, 101449, 101467, 101477, 101483, 101489, 101501, -101503, 101513, 101527, 101531, 101533, 101537, 101561, 101573, 101581, 101599, -101603, 101611, 101627, 101641, 101653, 101663, 101681, 101693, 101701, 101719, -101723, 101737, 101741, 101747, 101749, 101771, 101789, 101797, 101807, 101833, -101837, 101839, 101863, 101869, 101873, 101879, 101891, 101917, 101921, 101929, -101939, 101957, 101963, 101977, 101987, 101999, 102001, 102013, 102019, 102023, -102031, 102043, 102059, 102061, 102071, 102077, 102079, 102101, 102103, 102107, -102121, 102139, 102149, 102161, 102181, 102191, 102197, 102199, 102203, 102217, -102229, 102233, 102241, 102251, 102253, 102259, 102293, 102299, 102301, 102317, -102329, 102337, 102359, 102367, 102397, 102407, 102409, 102433, 102437, 102451, -102461, 102481, 102497, 102499, 102503, 102523, 102533, 102539, 102547, 102551, -102559, 102563, 102587, 102593, 102607, 102611, 102643, 102647, 102653, 102667, -102673, 102677, 102679, 102701, 102761, 102763, 102769, 102793, 102797, 102811, -102829, 102841, 102859, 102871, 102877, 102881, 102911, 102913, 102929, 102931, -102953, 102967, 102983, 103001, 103007, 103043, 103049, 103067, 103069, 103079, -103087, 103091, 103093, 103099, 103123, 103141, 103171, 103177, 103183, 103217, -103231, 103237, 103289, 103291, 103307, 103319, 103333, 103349, 103357, 103387, -103391, 103393, 103399, 103409, 103421, 103423, 103451, 103457, 103471, 103483, -103511, 103529, 103549, 103553, 103561, 103567, 103573, 103577, 103583, 103591, -103613, 103619, 103643, 103651, 103657, 103669, 103681, 103687, 103699, 103703, -103723, 103769, 103787, 103801, 103811, 103813, 103837, 103841, 103843, 103867, -103889, 103903, 103913, 103919, 103951, 103963, 103967, 103969, 103979, 103981, -103991, 103993, 103997, 104003, 104009, 104021, 104033, 104047, 104053, 104059, -104087, 104089, 104107, 104113, 104119, 104123, 104147, 104149, 104161, 104173, -104179, 104183, 
104207, 104231, 104233, 104239, 104243, 104281, 104287, 104297, -104309, 104311, 104323, 104327, 104347, 104369, 104381, 104383, 104393, 104399, -104417, 104459, 104471, 104473, 104479, 104491, 104513, 104527, 104537, 104543, -104549, 104551, 104561, 104579, 104593, 104597, 104623, 104639, 104651, 104659, -104677, 104681, 104683, 104693, 104701, 104707, 104711, 104717, 104723, 104729, -) diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/t5_1_1/examples/__init__.py b/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/t5_1_1/examples/__init__.py deleted file mode 100644 index da022c16301721a096a208e8bdb2a71bb87f9788..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/t5_1_1/examples/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# This empty file is needed for loading the gin files in this directory. diff --git a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/base_model.py b/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/base_model.py deleted file mode 100644 index 856a660e895e07c60fdcc75f713bf9e8dbf7f6ca..0000000000000000000000000000000000000000 --- a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/base_model.py +++ /dev/null @@ -1,639 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback -import pathlib - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from langchain.callbacks.manager import BaseCallbackManager - -from typing import Any, Dict, List, Optional, Union - -from langchain.callbacks.base import BaseCallbackHandler -from langchain.input import print_text -from langchain.schema import AgentAction, AgentFinish, LLMResult -from threading import Thread, Condition -from collections import deque - -from ..presets import * -from ..index_func import * -from ..utils import * -from .. import shared -from ..config import retrieve_proxy - -class CallbackToIterator: - def __init__(self): - self.queue = deque() - self.cond = Condition() - self.finished = False - - def callback(self, result): - with self.cond: - self.queue.append(result) - self.cond.notify() # Wake up the generator. - - def __iter__(self): - return self - - def __next__(self): - with self.cond: - while not self.queue and not self.finished: # Wait for a value to be added to the queue. - self.cond.wait() - if not self.queue: - raise StopIteration() - return self.queue.popleft() - - def finish(self): - with self.cond: - self.finished = True - self.cond.notify() # Wake up the generator if it's waiting. 
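The `CallbackToIterator` class above converts a push-style callback into a pull-style iterator: a producer thread pushes chunks with `callback()` and signals completion with `finish()`, while the consumer blocks on the condition variable until data is available. A minimal usage sketch follows (illustrative only; the `demo_stream` name, the chunk strings, and the sleep interval are invented for the example and are not part of the deleted file):

```python
# Illustrative sketch only (not from the original file): names and data are invented.
from threading import Thread
import time

def demo_stream():
    it = CallbackToIterator()

    def producer():
        for chunk in ["Hel", "lo, ", "wor", "ld!"]:
            time.sleep(0.05)      # stand-in for tokens arriving from an LLM callback
            it.callback(chunk)    # append to the queue and wake the consumer
        it.finish()               # let __next__ raise StopIteration once drained

    Thread(target=producer, daemon=True).start()
    for piece in it:              # blocks in Condition.wait() until a chunk arrives
        print(piece, end="", flush=True)
    print()
```

Because `finish()` also notifies the condition, a consumer waiting inside `__next__` wakes up and raises `StopIteration` instead of blocking forever.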
- -class ChuanhuCallbackHandler(BaseCallbackHandler): - - def __init__(self, callback) -> None: - """Initialize callback handler.""" - self.callback = callback - - def on_agent_action( - self, action: AgentAction, color: Optional[str] = None, **kwargs: Any - ) -> Any: - self.callback(action.log) - - def on_tool_end( - self, - output: str, - color: Optional[str] = None, - observation_prefix: Optional[str] = None, - llm_prefix: Optional[str] = None, - **kwargs: Any, - ) -> None: - """If not the final action, print out observation.""" - if observation_prefix is not None: - self.callback(f"\n\n{observation_prefix}") - self.callback(output) - if llm_prefix is not None: - self.callback(f"\n\n{llm_prefix}") - - def on_agent_finish( - self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any - ) -> None: - self.callback(f"{finish.log}\n\n") - - def on_llm_new_token(self, token: str, **kwargs: Any) -> None: - """Run on new LLM token. Only available when streaming is enabled.""" - self.callback(token) - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - StableLM = 4 - MOSS = 5 - YuanAI = 6 - ChuanhuAgent = 7 - PaLM = 8 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - elif "stablelm" in model_name_lower: - model_type = ModelType.StableLM - elif "moss" in model_name_lower: - model_type = ModelType.MOSS - elif "yuanai" in model_name_lower: - model_type = ModelType.YuanAI - elif "川虎助理" in model_name_lower: - model_type = ModelType.ChuanhuAgent - elif "palm" in model_name_lower: - model_type = ModelType.PaLM - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: 
- the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - # logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot, language): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - index = construct_index(self.api_key, file_src=files) - status = i18n("索引构建完成") - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.vectorstores.base import VectorStoreRetriever - limited_context = True - msg = "加载索引中……" - logging.info(msg) - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - with retrieve_proxy(): - retriever = VectorStoreRetriever(vectorstore=index, search_type="similarity_score_threshold",search_kwargs={"k":6, "score_threshold": 0.5}) - relevant_documents = retriever.get_relevant_documents(real_inputs) - reference_results = [[d.page_content.strip("�"), os.path.basename(d.metadata["source"])] for d in relevant_documents] - reference_results = add_source_numbers(reference_results) - 
display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
<li><a href=\"{result['href']}\" target=\"_blank\">{domain_name}</a></li>\n" -                    ) -            reference_results = add_source_numbers(reference_results) -            display_append = "<ol>\n\n" + "".join(display_append) + "</ol>
        " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - self.auto_save(chatbot) - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - 
yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - pathlib.Path(os.path.join(HISTORY_DIR, self.user_identifier, new_auto_history_filename(os.path.join(HISTORY_DIR, self.user_identifier)))).touch() - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - 
self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def auto_save(self, chatbot): - history_file_path = get_history_filepath(self.user_identifier) - save_file(history_file_path, self.system_prompt, self.history, chatbot, self.user_identifier) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - logging.info(f"filename: {filename}") - if type(filename) != str and filename is not None: - filename = filename.name - try: - if "/" not in filename: - history_file_path = os.path.join(HISTORY_DIR, user_name, filename) - else: - history_file_path = filename - with open(history_file_path, "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return os.path.basename(filename), json_s["system"], json_s["chatbot"] - except: - # 没有对话历史或者对话历史解析失败 - logging.info(f"没有找到对话历史记录 {filename}") - return gr.update(), self.system_prompt, gr.update() - - def auto_load(self): - if self.user_identifier == "": - self.reset() - return self.system_prompt, gr.update() - history_file_path = get_history_filepath(self.user_identifier) - filename, system_prompt, chatbot = self.load_chat_history(history_file_path, self.user_identifier) - return system_prompt, chatbot - - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/justest/gpt4free/g4f/utils.py b/spaces/justest/gpt4free/g4f/utils.py deleted file mode 100644 index d5ab41c79b44ab81e1843d209cb342bd83dafb42..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/utils.py +++ /dev/null @@ -1,49 +0,0 @@ -import browser_cookie3 - - -class Utils: - browsers = [ - browser_cookie3.chrome, # 62.74% market share - browser_cookie3.safari, # 24.12% market share - browser_cookie3.firefox, # 4.56% market share - browser_cookie3.edge, # 2.85% market share - browser_cookie3.opera, # 1.69% market share - browser_cookie3.brave, # 0.96% market share - browser_cookie3.opera_gx, # 0.64% market share - browser_cookie3.vivaldi, # 0.32% market share - ] - - def get_cookies(domain: str, setName: str = None, setBrowser: str = False) -> dict: - cookies = {} - - if setBrowser != False: - for browser in Utils.browsers: - if browser.__name__ == setBrowser: - try: - for c in browser(domain_name=domain): - if c.name 
not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - else: - for browser in Utils.browsers: - try: - for c in browser(domain_name=domain): - if c.name not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - if setName: - try: - return {setName: cookies[setName]} - - except ValueError: - print(f'Error: could not find {setName} cookie in any browser.') - exit(1) - - else: - return cookies diff --git a/spaces/jyseo/3DFuse/ldm/models/diffusion/dpm_solver/sampler.py b/spaces/jyseo/3DFuse/ldm/models/diffusion/dpm_solver/sampler.py deleted file mode 100644 index 7d137b8cf36718c1c58faa09f9dd919e5fb2977b..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/ldm/models/diffusion/dpm_solver/sampler.py +++ /dev/null @@ -1,87 +0,0 @@ -"""SAMPLING ONLY.""" -import torch - -from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver - - -MODEL_TYPES = { - "eps": "noise", - "v": "v" -} - - -class DPMSolverSampler(object): - def __init__(self, model, **kwargs): - super().__init__() - self.model = model - to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device) - self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod)) - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - - print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}') - - device = self.model.betas.device - if x_T is None: - img = torch.randn(size, device=device) - else: - img = x_T - - ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod) - - model_fn = model_wrapper( - lambda x, t, c: self.model.apply_model(x, t, c), - ns, - model_type=MODEL_TYPES[self.model.parameterization], - guidance_type="classifier-free", - condition=conditioning, - unconditional_condition=unconditional_conditioning, - guidance_scale=unconditional_guidance_scale, - ) - - dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False) - x = dpm_solver.sample(img, steps=S, skip_type="time_uniform", method="multistep", order=2, lower_order_final=True) - - return x.to(device), None \ No newline at end of file diff --git a/spaces/kbora/minerva-generate-docker/blocks/utils/device.py b/spaces/kbora/minerva-generate-docker/blocks/utils/device.py deleted file mode 100644 index f5e27d43454f1ad2557c5f3971f93893ceff4ae3..0000000000000000000000000000000000000000 --- a/spaces/kbora/minerva-generate-docker/blocks/utils/device.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch - - -def get_device(device = None): - if device is None: - # get cuda -> mps -> cpu - if torch.cuda.is_available(): - device = "cuda" - elif torch.backends.mps.is_available(): - if torch.backends.mps.is_built(): - device = "mps" - else: - device = "cpu" - else: - device = "cpu" - return device \ No newline at end of file diff --git a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/bark/settings.py b/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/bark/settings.py deleted file mode 100644 index 66b9d29c954c2eb3304f6a59814185fbb4d850af..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/bark/settings.py +++ /dev/null @@ -1,7 +0,0 @@ -import os - -def initenv(args): - os.environ['SUNO_USE_SMALL_MODELS'] = str("-smallmodels" in args) - os.environ['BARK_FORCE_CPU'] = str("-forcecpu" in args) - os.environ['SUNO_ENABLE_MPS'] = str("-enablemps" in args) - os.environ['SUNO_OFFLOAD_CPU'] = str("-offloadcpu" in args) \ No newline at end of file diff --git a/spaces/kingabzpro/Loan_Classifier/app.py b/spaces/kingabzpro/Loan_Classifier/app.py deleted file mode 100644 index a04629d68de60d0af8a27e45029bcc26f7531cf7..0000000000000000000000000000000000000000 --- a/spaces/kingabzpro/Loan_Classifier/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import gradio as gr -import joblib - -# Load the trained model -model = joblib.load("loan_classifier.joblib") - -# Load Standared Scaler -scalar = joblib.load("std_scaler.bin") - - -def predict_loan_status( - int_rate, - installment, - log_annual_inc, - dti, - fico, - revol_bal, - revol_util, - inq_last_6mths, - delinq_2yrs, - pub_rec, - installment_to_income_ratio, - credit_history, -): - input_dict = { - "int.rate": int_rate, - "installment": installment, - "log.annual.inc": log_annual_inc, - "dti": dti, - "fico": fico, - "revol.bal": revol_bal, - "revol.util": revol_util, - "inq.last.6mths": inq_last_6mths, - 
"delinq.2yrs": delinq_2yrs, - "pub.rec": pub_rec, - "installment_to_income_ratio": installment_to_income_ratio, - "credit_history": credit_history, - } - # Convert the dictionary to a 2D array - input_array = [list(input_dict.values())] - scaled_array = scalar.transform(input_array) - prediction = model.predict(scaled_array)[0] - - if prediction == 0: - return "Loan fully paid" - else: - return "Loan not fully paid" - - -inputs = [ - gr.Slider(0.06, 0.23, step=0.01, label="Interest Rate"), - gr.Slider(100, 950, step=10, label="Installment"), - gr.Slider(7, 15, step=0.1, label="Log Annual Income"), - gr.Slider(0, 40, step=1, label="DTI Ratio"), - gr.Slider(600, 850, step=1, label="FICO Score"), - gr.Slider(0, 120000, step=1000, label="Revolving Balance"), - gr.Slider(0, 120, step=1, label="Revolving Utilization"), - gr.Slider(0, 10, step=1, label="Inquiries in Last 6 Months"), - gr.Slider(0, 20, step=1, label="Delinquencies in Last 2 Years"), - gr.Slider(0, 10, step=1, label="Public Records"), - gr.Slider(0, 5, step=0.1, label="Installment to Income Ratio"), - gr.Slider(0, 1, step=0.01, label="Credit History"), -] -outputs = [gr.Label(num_top_classes=2)] - -title = "Loan Approval Classifier" -description = ( - "Enter the details of the loan applicant to check if the loan is approved or not." -) -gr.Interface( - fn=predict_loan_status, - inputs=inputs, - outputs=outputs, - title=title, - description=description, -).launch() diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/gen_voice.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/gen_voice.py deleted file mode 100644 index 3be4159e29e36851be761163c3e3ace02cf8d29c..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/gen_voice.py +++ /dev/null @@ -1,128 +0,0 @@ -from encoder.params_model import model_embedding_size as speaker_embedding_size -from utils.argutils import print_args -from utils.modelutils import check_model_paths -from synthesizer.inference import Synthesizer -from encoder import inference as encoder -from vocoder.wavernn import inference as rnn_vocoder -from vocoder.hifigan import inference as gan_vocoder -from pathlib import Path -import numpy as np -import soundfile as sf -import librosa -import argparse -import torch -import sys -import os -import re -import cn2an -import glob - -from audioread.exceptions import NoBackendError -vocoder = gan_vocoder - -def gen_one_wav(synthesizer, in_fpath, embed, texts, file_name, seq): - embeds = [embed] * len(texts) - # If you know what the attention layer alignments are, you can retrieve them here by - # passing return_alignments=True - specs = synthesizer.synthesize_spectrograms(texts, embeds, style_idx=-1, min_stop_token=4, steps=400) - #spec = specs[0] - breaks = [spec.shape[1] for spec in specs] - spec = np.concatenate(specs, axis=1) - - # If seed is specified, reset torch seed and reload vocoder - # Synthesizing the waveform is fairly straightforward. Remember that the longer the - # spectrogram, the more time-efficient the vocoder. 
- generated_wav, output_sample_rate = vocoder.infer_waveform(spec) - - # Add breaks - b_ends = np.cumsum(np.array(breaks) * synthesizer.hparams.hop_size) - b_starts = np.concatenate(([0], b_ends[:-1])) - wavs = [generated_wav[start:end] for start, end, in zip(b_starts, b_ends)] - breaks = [np.zeros(int(0.15 * synthesizer.sample_rate))] * len(breaks) - generated_wav = np.concatenate([i for w, b in zip(wavs, breaks) for i in (w, b)]) - - ## Post-generation - # There's a bug with sounddevice that makes the audio cut one second earlier, so we - # pad it. - - # Trim excess silences to compensate for gaps in spectrograms (issue #53) - generated_wav = encoder.preprocess_wav(generated_wav) - generated_wav = generated_wav / np.abs(generated_wav).max() * 0.97 - - # Save it on the disk - model=os.path.basename(in_fpath) - filename = "%s_%d_%s.wav" %(file_name, seq, model) - sf.write(filename, generated_wav, synthesizer.sample_rate) - - print("\nSaved output as %s\n\n" % filename) - - -def generate_wav(enc_model_fpath, syn_model_fpath, voc_model_fpath, in_fpath, input_txt, file_name): - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - gpu_properties = torch.cuda.get_device_properties(device_id) - ## Print some environment information (for debugging purposes) - print("Found %d GPUs available. Using GPU %d (%s) of compute capability %d.%d with " - "%.1fGb total memory.\n" % - (torch.cuda.device_count(), - device_id, - gpu_properties.name, - gpu_properties.major, - gpu_properties.minor, - gpu_properties.total_memory / 1e9)) - else: - print("Using CPU for inference.\n") - - print("Preparing the encoder, the synthesizer and the vocoder...") - encoder.load_model(enc_model_fpath) - synthesizer = Synthesizer(syn_model_fpath) - vocoder.load_model(voc_model_fpath) - - encoder_wav = synthesizer.load_preprocess_wav(in_fpath) - embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True) - - texts = input_txt.split("\n") - seq=0 - each_num=1500 - - punctuation = '!,。、,' # punctuate and split/clean text - processed_texts = [] - cur_num = 0 - for text in texts: - for processed_text in re.sub(r'[{}]+'.format(punctuation), '\n', text).split('\n'): - if processed_text: - processed_texts.append(processed_text.strip()) - cur_num += len(processed_text.strip()) - if cur_num > each_num: - seq = seq +1 - gen_one_wav(synthesizer, in_fpath, embed, processed_texts, file_name, seq) - processed_texts = [] - cur_num = 0 - - if len(processed_texts)>0: - seq = seq +1 - gen_one_wav(synthesizer, in_fpath, embed, processed_texts, file_name, seq) - -if (len(sys.argv)>=3): - my_txt = "" - print("reading from :", sys.argv[1]) - with open(sys.argv[1], "r") as f: - for line in f.readlines(): - #line = line.strip('\n') - my_txt += line - txt_file_name = sys.argv[1] - wav_file_name = sys.argv[2] - - output = cn2an.transform(my_txt, "an2cn") - print(output) - generate_wav( - Path("encoder/saved_models/pretrained.pt"), - Path("synthesizer/saved_models/mandarin.pt"), - Path("vocoder/saved_models/pretrained/g_hifigan.pt"), wav_file_name, output, txt_file_name - ) - -else: - print("please input the file name") - exit(1) - - diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/builder.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/builder.py deleted file mode 100644 index 0798b14cd8b39fc58d8f2a4930f1e079b5bf8b55..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/builder.py +++ /dev/null @@ 
-1,169 +0,0 @@ -import copy -import platform -import random -from functools import partial - -import numpy as np -from annotator.uniformer.mmcv.parallel import collate -from annotator.uniformer.mmcv.runner import get_dist_info -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg -from annotator.uniformer.mmcv.utils.parrots_wrapper import DataLoader, PoolDataLoader -from torch.utils.data import DistributedSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - hard_limit = rlimit[1] - soft_limit = min(4096, hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - """Build :obj:`ConcatDataset by.""" - from .dataset_wrappers import ConcatDataset - img_dir = cfg['img_dir'] - ann_dir = cfg.get('ann_dir', None) - split = cfg.get('split', None) - num_img_dir = len(img_dir) if isinstance(img_dir, (list, tuple)) else 1 - if ann_dir is not None: - num_ann_dir = len(ann_dir) if isinstance(ann_dir, (list, tuple)) else 1 - else: - num_ann_dir = 0 - if split is not None: - num_split = len(split) if isinstance(split, (list, tuple)) else 1 - else: - num_split = 0 - if num_img_dir > 1: - assert num_img_dir == num_ann_dir or num_ann_dir == 0 - assert num_img_dir == num_split or num_split == 0 - else: - assert num_split == num_ann_dir or num_ann_dir <= 1 - num_dset = max(num_split, num_img_dir) - - datasets = [] - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - if isinstance(img_dir, (list, tuple)): - data_cfg['img_dir'] = img_dir[i] - if isinstance(ann_dir, (list, tuple)): - data_cfg['ann_dir'] = ann_dir[i] - if isinstance(split, (list, tuple)): - data_cfg['split'] = split[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets) - - -def build_dataset(cfg, default_args=None): - """Build datasets.""" - from .dataset_wrappers import ConcatDataset, RepeatDataset - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif isinstance(cfg.get('img_dir'), (list, tuple)) or isinstance( - cfg.get('split', None), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - drop_last=False, - pin_memory=True, - dataloader_type='PoolDataLoader', - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int | None): Seed to be used. Default: None. - drop_last (bool): Whether to drop the last incomplete batch in epoch. 
- Default: False - pin_memory (bool): Whether to use pin_memory in DataLoader. - Default: True - dataloader_type (str): Type of dataloader. Default: 'PoolDataLoader' - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - if dist: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=shuffle) - shuffle = False - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - assert dataloader_type in ( - 'DataLoader', - 'PoolDataLoader'), f'unsupported dataloader {dataloader_type}' - - if dataloader_type == 'PoolDataLoader': - dataloader = PoolDataLoader - elif dataloader_type == 'DataLoader': - dataloader = DataLoader - - data_loader = dataloader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=pin_memory, - shuffle=shuffle, - worker_init_fn=init_fn, - drop_last=drop_last, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - """Worker init func for dataloader. - - The seed of each worker equals to num_worker * rank + worker_id + user_seed - - Args: - worker_id (int): Worker id. - num_workers (int): Number of workers. - rank (int): The rank of current process. - seed (int): The random seed to use. - """ - - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/vit.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/vit.py deleted file mode 100644 index 59e4479650690e08cbc4cab9427aefda47c2116d..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/vit.py +++ /dev/null @@ -1,459 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/vision_transformer.py.""" - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (Conv2d, Linear, build_activation_layer, build_norm_layer, - constant_init, kaiming_init, normal_init) -from annotator.uniformer.mmcv.runner import _load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import DropPath, trunc_normal_ - - -class Mlp(nn.Module): - """MLP layer for Encoder block. - - Args: - in_features(int): Input dimension for the first fully - connected layer. - hidden_features(int): Output dimension for the first fully - connected layer. - out_features(int): Output dementsion for the second fully - connected layer. - act_cfg(dict): Config dict for activation layer. - Default: dict(type='GELU'). - drop(float): Drop rate for the dropout layer. Dropout rate has - to be between 0 and 1. Default: 0. 
- """ - - def __init__(self, - in_features, - hidden_features=None, - out_features=None, - act_cfg=dict(type='GELU'), - drop=0.): - super(Mlp, self).__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = Linear(in_features, hidden_features) - self.act = build_activation_layer(act_cfg) - self.fc2 = Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - """Attention layer for Encoder block. - - Args: - dim (int): Dimension for the input vector. - num_heads (int): Number of parallel attention heads. - qkv_bias (bool): Enable bias for qkv if True. Default: False. - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - attn_drop (float): Drop rate for attention output weights. - Default: 0. - proj_drop (float): Drop rate for output weights. Default: 0. - """ - - def __init__(self, - dim, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop=0., - proj_drop=0.): - super(Attention, self).__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - b, n, c = x.shape - qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, - c // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(b, n, c) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - """Implements encoder block with residual connection. - - Args: - dim (int): The feature dimension. - num_heads (int): Number of parallel attention heads. - mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop (float): Drop rate for mlp output weights. Default: 0. - attn_drop (float): Drop rate for attention output weights. - Default: 0. - proj_drop (float): Drop rate for attn layer output weights. - Default: 0. - drop_path (float): Drop rate for paths of model. - Default: 0. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN', requires_grad=True). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, - dim, - num_heads, - mlp_ratio=4, - qkv_bias=False, - qk_scale=None, - drop=0., - attn_drop=0., - proj_drop=0., - drop_path=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN', eps=1e-6), - with_cp=False): - super(Block, self).__init__() - self.with_cp = with_cp - _, self.norm1 = build_norm_layer(norm_cfg, dim) - self.attn = Attention(dim, num_heads, qkv_bias, qk_scale, attn_drop, - proj_drop) - self.drop_path = DropPath( - drop_path) if drop_path > 0. 
else nn.Identity() - _, self.norm2 = build_norm_layer(norm_cfg, dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_cfg=act_cfg, - drop=drop) - - def forward(self, x): - - def _inner_forward(x): - out = x + self.drop_path(self.attn(self.norm1(x))) - out = out + self.drop_path(self.mlp(self.norm2(out))) - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding. - - Args: - img_size (int | tuple): Input image size. - default: 224. - patch_size (int): Width and height for a patch. - default: 16. - in_channels (int): Input channels for images. Default: 3. - embed_dim (int): The embedding dimension. Default: 768. - """ - - def __init__(self, - img_size=224, - patch_size=16, - in_channels=3, - embed_dim=768): - super(PatchEmbed, self).__init__() - if isinstance(img_size, int): - self.img_size = (img_size, img_size) - elif isinstance(img_size, tuple): - self.img_size = img_size - else: - raise TypeError('img_size must be type of int or tuple') - h, w = self.img_size - self.patch_size = (patch_size, patch_size) - self.num_patches = (h // patch_size) * (w // patch_size) - self.proj = Conv2d( - in_channels, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - return self.proj(x).flatten(2).transpose(1, 2) - - -@BACKBONES.register_module() -class VisionTransformer(nn.Module): - """Vision transformer backbone. - - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for - Image Recognition at Scale` - https://arxiv.org/abs/2010.11929 - - Args: - img_size (tuple): input image size. Default: (224, 224). - patch_size (int, tuple): patch size. Default: 16. - in_channels (int): number of input channels. Default: 3. - embed_dim (int): embedding dimension. Default: 768. - depth (int): depth of transformer. Default: 12. - num_heads (int): number of attention heads. Default: 12. - mlp_ratio (int): ratio of mlp hidden dim to embedding dim. - Default: 4. - out_indices (list | tuple | int): Output from which stages. - Default: -1. - qkv_bias (bool): enable bias for qkv if True. Default: True. - qk_scale (float): override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): dropout rate. Default: 0. - attn_drop_rate (float): attention dropout rate. Default: 0. - drop_path_rate (float): Rate of DropPath. Default: 0. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN', eps=1e-6, requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='GELU'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - final_norm (bool): Whether to add a additional layer to normalize - final feature map. Default: False. - interpolate_mode (str): Select the interpolate mode for position - embeding vector resize. Default: bicubic. - with_cls_token (bool): If concatenating class token into image tokens - as transformer input. Default: True. - with_cp (bool): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. 
- """ - - def __init__(self, - img_size=(224, 224), - patch_size=16, - in_channels=3, - embed_dim=768, - depth=12, - num_heads=12, - mlp_ratio=4, - out_indices=11, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - norm_cfg=dict(type='LN', eps=1e-6, requires_grad=True), - act_cfg=dict(type='GELU'), - norm_eval=False, - final_norm=False, - with_cls_token=True, - interpolate_mode='bicubic', - with_cp=False): - super(VisionTransformer, self).__init__() - self.img_size = img_size - self.patch_size = patch_size - self.features = self.embed_dim = embed_dim - self.patch_embed = PatchEmbed( - img_size=img_size, - patch_size=patch_size, - in_channels=in_channels, - embed_dim=embed_dim) - - self.with_cls_token = with_cls_token - self.cls_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) - self.pos_embed = nn.Parameter( - torch.zeros(1, self.patch_embed.num_patches + 1, embed_dim)) - self.pos_drop = nn.Dropout(p=drop_rate) - - if isinstance(out_indices, int): - self.out_indices = [out_indices] - elif isinstance(out_indices, list) or isinstance(out_indices, tuple): - self.out_indices = out_indices - else: - raise TypeError('out_indices must be type of int, list or tuple') - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth) - ] # stochastic depth decay rule - self.blocks = nn.ModuleList([ - Block( - dim=embed_dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=dpr[i], - attn_drop=attn_drop_rate, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp) for i in range(depth) - ]) - - self.interpolate_mode = interpolate_mode - self.final_norm = final_norm - if final_norm: - _, self.norm = build_norm_layer(norm_cfg, embed_dim) - - self.norm_eval = norm_eval - self.with_cp = with_cp - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = get_root_logger() - checkpoint = _load_checkpoint(pretrained, logger=logger) - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - if 'pos_embed' in state_dict.keys(): - if self.pos_embed.shape != state_dict['pos_embed'].shape: - logger.info(msg=f'Resize the pos_embed shape from \ -{state_dict["pos_embed"].shape} to {self.pos_embed.shape}') - h, w = self.img_size - pos_size = int( - math.sqrt(state_dict['pos_embed'].shape[1] - 1)) - state_dict['pos_embed'] = self.resize_pos_embed( - state_dict['pos_embed'], (h, w), (pos_size, pos_size), - self.patch_size, self.interpolate_mode) - - self.load_state_dict(state_dict, False) - - elif pretrained is None: - # We only implement the 'jax_impl' initialization implemented at - # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - for n, m in self.named_modules(): - if isinstance(m, Linear): - trunc_normal_(m.weight, std=.02) - if m.bias is not None: - if 'mlp' in n: - normal_init(m.bias, std=1e-6) - else: - constant_init(m.bias, 0) - elif isinstance(m, Conv2d): - kaiming_init(m.weight, mode='fan_in') - if m.bias is not None: - constant_init(m.bias, 0) - elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): - constant_init(m.bias, 0) - constant_init(m.weight, 1.0) - else: - raise TypeError('pretrained must be a str or None') - - def _pos_embeding(self, img, patched_img, pos_embed): - """Positiong embeding method. 
- - Resize the pos_embed, if the input image size doesn't match - the training size. - Args: - img (torch.Tensor): The inference image tensor, the shape - must be [B, C, H, W]. - patched_img (torch.Tensor): The patched image, it should be - shape of [B, L1, C]. - pos_embed (torch.Tensor): The pos_embed weighs, it should be - shape of [B, L2, c]. - Return: - torch.Tensor: The pos encoded image feature. - """ - assert patched_img.ndim == 3 and pos_embed.ndim == 3, \ - 'the shapes of patched_img and pos_embed must be [B, L, C]' - x_len, pos_len = patched_img.shape[1], pos_embed.shape[1] - if x_len != pos_len: - if pos_len == (self.img_size[0] // self.patch_size) * ( - self.img_size[1] // self.patch_size) + 1: - pos_h = self.img_size[0] // self.patch_size - pos_w = self.img_size[1] // self.patch_size - else: - raise ValueError( - 'Unexpected shape of pos_embed, got {}.'.format( - pos_embed.shape)) - pos_embed = self.resize_pos_embed(pos_embed, img.shape[2:], - (pos_h, pos_w), self.patch_size, - self.interpolate_mode) - return self.pos_drop(patched_img + pos_embed) - - @staticmethod - def resize_pos_embed(pos_embed, input_shpae, pos_shape, patch_size, mode): - """Resize pos_embed weights. - - Resize pos_embed using bicubic interpolate method. - Args: - pos_embed (torch.Tensor): pos_embed weights. - input_shpae (tuple): Tuple for (input_h, intput_w). - pos_shape (tuple): Tuple for (pos_h, pos_w). - patch_size (int): Patch size. - Return: - torch.Tensor: The resized pos_embed of shape [B, L_new, C] - """ - assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' - input_h, input_w = input_shpae - pos_h, pos_w = pos_shape - cls_token_weight = pos_embed[:, 0] - pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] - pos_embed_weight = pos_embed_weight.reshape( - 1, pos_h, pos_w, pos_embed.shape[2]).permute(0, 3, 1, 2) - pos_embed_weight = F.interpolate( - pos_embed_weight, - size=[input_h // patch_size, input_w // patch_size], - align_corners=False, - mode=mode) - cls_token_weight = cls_token_weight.unsqueeze(1) - pos_embed_weight = torch.flatten(pos_embed_weight, 2).transpose(1, 2) - pos_embed = torch.cat((cls_token_weight, pos_embed_weight), dim=1) - return pos_embed - - def forward(self, inputs): - B = inputs.shape[0] - - x = self.patch_embed(inputs) - - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - x = self._pos_embeding(inputs, x, self.pos_embed) - - if not self.with_cls_token: - # Remove class token for transformer input - x = x[:, 1:] - - outs = [] - for i, blk in enumerate(self.blocks): - x = blk(x) - if i == len(self.blocks) - 1: - if self.final_norm: - x = self.norm(x) - if i in self.out_indices: - if self.with_cls_token: - # Remove class token and reshape token for decoder head - out = x[:, 1:] - else: - out = x - B, _, C = out.shape - out = out.reshape(B, inputs.shape[2] // self.patch_size, - inputs.shape[3] // self.patch_size, - C).permute(0, 3, 1, 2) - outs.append(out) - - return tuple(outs) - - def train(self, mode=True): - super(VisionTransformer, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.LayerNorm): - m.eval() diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/trainers/__init__.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/trainers/__init__.py deleted file mode 100644 index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000 --- 
a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/trainers/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging -import torch -from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule - - -def get_training_model_class(kind): - if kind == 'default': - return DefaultInpaintingTrainingModule - - raise ValueError(f'Unknown trainer module {kind}') - - -def make_training_model(config): - kind = config.training_model.kind - kwargs = dict(config.training_model) - kwargs.pop('kind') - kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp' - - logging.info(f'Make training model {kind}') - - cls = get_training_model_class(kind) - return cls(config, **kwargs) - - -def load_checkpoint(train_config, path, map_location='cuda', strict=True): - model: torch.nn.Module = make_training_model(train_config) - state = torch.load(path, map_location=map_location) - model.load_state_dict(state['state_dict'], strict=strict) - model.on_load_checkpoint(state) - return model diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/experimental/control_flow.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/experimental/control_flow.py deleted file mode 100644 index 5d42598c757aa0c1b894999b10b0737298c8442a..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/experimental/control_flow.py +++ /dev/null @@ -1,2 +0,0 @@ -from ._map import map # noqa: F401 -from ._cond import cond, UnsupportedAliasMutationException # noqa: F401 diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/_mathtext_data.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/_mathtext_data.py deleted file mode 100644 index ef571b90712eb82481e94ee5370af42993b8fec6..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/_mathtext_data.py +++ /dev/null @@ -1,1228 +0,0 @@ -""" -font data tables for truetype and afm computer modern fonts -""" - -latex_to_bakoma = { - '\\__sqrt__' : ('cmex10', 0x70), - '\\bigcap' : ('cmex10', 0x5c), - '\\bigcup' : ('cmex10', 0x5b), - '\\bigodot' : ('cmex10', 0x4b), - '\\bigoplus' : ('cmex10', 0x4d), - '\\bigotimes' : ('cmex10', 0x4f), - '\\biguplus' : ('cmex10', 0x5d), - '\\bigvee' : ('cmex10', 0x5f), - '\\bigwedge' : ('cmex10', 0x5e), - '\\coprod' : ('cmex10', 0x61), - '\\int' : ('cmex10', 0x5a), - '\\langle' : ('cmex10', 0xad), - '\\leftangle' : ('cmex10', 0xad), - '\\leftbrace' : ('cmex10', 0xa9), - '\\oint' : ('cmex10', 0x49), - '\\prod' : ('cmex10', 0x59), - '\\rangle' : ('cmex10', 0xae), - '\\rightangle' : ('cmex10', 0xae), - '\\rightbrace' : ('cmex10', 0xaa), - '\\sum' : ('cmex10', 0x58), - '\\widehat' : ('cmex10', 0x62), - '\\widetilde' : ('cmex10', 0x65), - '\\{' : ('cmex10', 0xa9), - '\\}' : ('cmex10', 0xaa), - '{' : ('cmex10', 0xa9), - '}' : ('cmex10', 0xaa), - - ',' : ('cmmi10', 0x3b), - '.' 
: ('cmmi10', 0x3a), - '/' : ('cmmi10', 0x3d), - '<' : ('cmmi10', 0x3c), - '>' : ('cmmi10', 0x3e), - '\\alpha' : ('cmmi10', 0xae), - '\\beta' : ('cmmi10', 0xaf), - '\\chi' : ('cmmi10', 0xc2), - '\\combiningrightarrowabove' : ('cmmi10', 0x7e), - '\\delta' : ('cmmi10', 0xb1), - '\\ell' : ('cmmi10', 0x60), - '\\epsilon' : ('cmmi10', 0xb2), - '\\eta' : ('cmmi10', 0xb4), - '\\flat' : ('cmmi10', 0x5b), - '\\frown' : ('cmmi10', 0x5f), - '\\gamma' : ('cmmi10', 0xb0), - '\\imath' : ('cmmi10', 0x7b), - '\\iota' : ('cmmi10', 0xb6), - '\\jmath' : ('cmmi10', 0x7c), - '\\kappa' : ('cmmi10', 0x2219), - '\\lambda' : ('cmmi10', 0xb8), - '\\leftharpoondown' : ('cmmi10', 0x29), - '\\leftharpoonup' : ('cmmi10', 0x28), - '\\mu' : ('cmmi10', 0xb9), - '\\natural' : ('cmmi10', 0x5c), - '\\nu' : ('cmmi10', 0xba), - '\\omega' : ('cmmi10', 0x21), - '\\phi' : ('cmmi10', 0xc1), - '\\pi' : ('cmmi10', 0xbc), - '\\psi' : ('cmmi10', 0xc3), - '\\rho' : ('cmmi10', 0xbd), - '\\rightharpoondown' : ('cmmi10', 0x2b), - '\\rightharpoonup' : ('cmmi10', 0x2a), - '\\sharp' : ('cmmi10', 0x5d), - '\\sigma' : ('cmmi10', 0xbe), - '\\smile' : ('cmmi10', 0x5e), - '\\tau' : ('cmmi10', 0xbf), - '\\theta' : ('cmmi10', 0xb5), - '\\triangleleft' : ('cmmi10', 0x2f), - '\\triangleright' : ('cmmi10', 0x2e), - '\\upsilon' : ('cmmi10', 0xc0), - '\\varepsilon' : ('cmmi10', 0x22), - '\\varphi' : ('cmmi10', 0x27), - '\\varrho' : ('cmmi10', 0x25), - '\\varsigma' : ('cmmi10', 0x26), - '\\vartheta' : ('cmmi10', 0x23), - '\\wp' : ('cmmi10', 0x7d), - '\\xi' : ('cmmi10', 0xbb), - '\\zeta' : ('cmmi10', 0xb3), - - '!' : ('cmr10', 0x21), - '%' : ('cmr10', 0x25), - '&' : ('cmr10', 0x26), - '(' : ('cmr10', 0x28), - ')' : ('cmr10', 0x29), - '+' : ('cmr10', 0x2b), - '0' : ('cmr10', 0x30), - '1' : ('cmr10', 0x31), - '2' : ('cmr10', 0x32), - '3' : ('cmr10', 0x33), - '4' : ('cmr10', 0x34), - '5' : ('cmr10', 0x35), - '6' : ('cmr10', 0x36), - '7' : ('cmr10', 0x37), - '8' : ('cmr10', 0x38), - '9' : ('cmr10', 0x39), - ':' : ('cmr10', 0x3a), - ';' : ('cmr10', 0x3b), - '=' : ('cmr10', 0x3d), - '?' 
: ('cmr10', 0x3f), - '@' : ('cmr10', 0x40), - '[' : ('cmr10', 0x5b), - '\\#' : ('cmr10', 0x23), - '\\$' : ('cmr10', 0x24), - '\\%' : ('cmr10', 0x25), - '\\Delta' : ('cmr10', 0xa2), - '\\Gamma' : ('cmr10', 0xa1), - '\\Lambda' : ('cmr10', 0xa4), - '\\Omega' : ('cmr10', 0xad), - '\\Phi' : ('cmr10', 0xa9), - '\\Pi' : ('cmr10', 0xa6), - '\\Psi' : ('cmr10', 0xaa), - '\\Sigma' : ('cmr10', 0xa7), - '\\Theta' : ('cmr10', 0xa3), - '\\Upsilon' : ('cmr10', 0xa8), - '\\Xi' : ('cmr10', 0xa5), - '\\circumflexaccent' : ('cmr10', 0x5e), - '\\combiningacuteaccent' : ('cmr10', 0xb6), - '\\combiningbreve' : ('cmr10', 0xb8), - '\\combiningdiaeresis' : ('cmr10', 0xc4), - '\\combiningdotabove' : ('cmr10', 0x5f), - '\\combininggraveaccent' : ('cmr10', 0xb5), - '\\combiningoverline' : ('cmr10', 0xb9), - '\\combiningtilde' : ('cmr10', 0x7e), - '\\leftbracket' : ('cmr10', 0x5b), - '\\leftparen' : ('cmr10', 0x28), - '\\rightbracket' : ('cmr10', 0x5d), - '\\rightparen' : ('cmr10', 0x29), - '\\widebar' : ('cmr10', 0xb9), - ']' : ('cmr10', 0x5d), - - '*' : ('cmsy10', 0xa4), - '\N{MINUS SIGN}' : ('cmsy10', 0xa1), - '\\Downarrow' : ('cmsy10', 0x2b), - '\\Im' : ('cmsy10', 0x3d), - '\\Leftarrow' : ('cmsy10', 0x28), - '\\Leftrightarrow' : ('cmsy10', 0x2c), - '\\P' : ('cmsy10', 0x7b), - '\\Re' : ('cmsy10', 0x3c), - '\\Rightarrow' : ('cmsy10', 0x29), - '\\S' : ('cmsy10', 0x78), - '\\Uparrow' : ('cmsy10', 0x2a), - '\\Updownarrow' : ('cmsy10', 0x6d), - '\\Vert' : ('cmsy10', 0x6b), - '\\aleph' : ('cmsy10', 0x40), - '\\approx' : ('cmsy10', 0xbc), - '\\ast' : ('cmsy10', 0xa4), - '\\asymp' : ('cmsy10', 0xb3), - '\\backslash' : ('cmsy10', 0x6e), - '\\bigcirc' : ('cmsy10', 0xb0), - '\\bigtriangledown' : ('cmsy10', 0x35), - '\\bigtriangleup' : ('cmsy10', 0x34), - '\\bot' : ('cmsy10', 0x3f), - '\\bullet' : ('cmsy10', 0xb2), - '\\cap' : ('cmsy10', 0x5c), - '\\cdot' : ('cmsy10', 0xa2), - '\\circ' : ('cmsy10', 0xb1), - '\\clubsuit' : ('cmsy10', 0x7c), - '\\cup' : ('cmsy10', 0x5b), - '\\dag' : ('cmsy10', 0x79), - '\\dashv' : ('cmsy10', 0x61), - '\\ddag' : ('cmsy10', 0x7a), - '\\diamond' : ('cmsy10', 0xa6), - '\\diamondsuit' : ('cmsy10', 0x7d), - '\\div' : ('cmsy10', 0xa5), - '\\downarrow' : ('cmsy10', 0x23), - '\\emptyset' : ('cmsy10', 0x3b), - '\\equiv' : ('cmsy10', 0xb4), - '\\exists' : ('cmsy10', 0x39), - '\\forall' : ('cmsy10', 0x38), - '\\geq' : ('cmsy10', 0xb8), - '\\gg' : ('cmsy10', 0xc0), - '\\heartsuit' : ('cmsy10', 0x7e), - '\\in' : ('cmsy10', 0x32), - '\\infty' : ('cmsy10', 0x31), - '\\lbrace' : ('cmsy10', 0x66), - '\\lceil' : ('cmsy10', 0x64), - '\\leftarrow' : ('cmsy10', 0xc3), - '\\leftrightarrow' : ('cmsy10', 0x24), - '\\leq' : ('cmsy10', 0x2219), - '\\lfloor' : ('cmsy10', 0x62), - '\\ll' : ('cmsy10', 0xbf), - '\\mid' : ('cmsy10', 0x6a), - '\\mp' : ('cmsy10', 0xa8), - '\\nabla' : ('cmsy10', 0x72), - '\\nearrow' : ('cmsy10', 0x25), - '\\neg' : ('cmsy10', 0x3a), - '\\ni' : ('cmsy10', 0x33), - '\\nwarrow' : ('cmsy10', 0x2d), - '\\odot' : ('cmsy10', 0xaf), - '\\ominus' : ('cmsy10', 0xaa), - '\\oplus' : ('cmsy10', 0xa9), - '\\oslash' : ('cmsy10', 0xae), - '\\otimes' : ('cmsy10', 0xad), - '\\pm' : ('cmsy10', 0xa7), - '\\prec' : ('cmsy10', 0xc1), - '\\preceq' : ('cmsy10', 0xb9), - '\\prime' : ('cmsy10', 0x30), - '\\propto' : ('cmsy10', 0x2f), - '\\rbrace' : ('cmsy10', 0x67), - '\\rceil' : ('cmsy10', 0x65), - '\\rfloor' : ('cmsy10', 0x63), - '\\rightarrow' : ('cmsy10', 0x21), - '\\searrow' : ('cmsy10', 0x26), - '\\sim' : ('cmsy10', 0xbb), - '\\simeq' : ('cmsy10', 0x27), - '\\slash' : ('cmsy10', 0x36), - '\\spadesuit' : ('cmsy10', 
0xc4), - '\\sqcap' : ('cmsy10', 0x75), - '\\sqcup' : ('cmsy10', 0x74), - '\\sqsubseteq' : ('cmsy10', 0x76), - '\\sqsupseteq' : ('cmsy10', 0x77), - '\\subset' : ('cmsy10', 0xbd), - '\\subseteq' : ('cmsy10', 0xb5), - '\\succ' : ('cmsy10', 0xc2), - '\\succeq' : ('cmsy10', 0xba), - '\\supset' : ('cmsy10', 0xbe), - '\\supseteq' : ('cmsy10', 0xb6), - '\\swarrow' : ('cmsy10', 0x2e), - '\\times' : ('cmsy10', 0xa3), - '\\to' : ('cmsy10', 0x21), - '\\top' : ('cmsy10', 0x3e), - '\\uparrow' : ('cmsy10', 0x22), - '\\updownarrow' : ('cmsy10', 0x6c), - '\\uplus' : ('cmsy10', 0x5d), - '\\vdash' : ('cmsy10', 0x60), - '\\vee' : ('cmsy10', 0x5f), - '\\vert' : ('cmsy10', 0x6a), - '\\wedge' : ('cmsy10', 0x5e), - '\\wr' : ('cmsy10', 0x6f), - '\\|' : ('cmsy10', 0x6b), - '|' : ('cmsy10', 0x6a), - - '\\_' : ('cmtt10', 0x5f) -} - -# Automatically generated. - -type12uni = { - 'aring' : 229, - 'quotedblright' : 8221, - 'V' : 86, - 'dollar' : 36, - 'four' : 52, - 'Yacute' : 221, - 'P' : 80, - 'underscore' : 95, - 'p' : 112, - 'Otilde' : 213, - 'perthousand' : 8240, - 'zero' : 48, - 'dotlessi' : 305, - 'Scaron' : 352, - 'zcaron' : 382, - 'egrave' : 232, - 'section' : 167, - 'Icircumflex' : 206, - 'ntilde' : 241, - 'ampersand' : 38, - 'dotaccent' : 729, - 'degree' : 176, - 'K' : 75, - 'acircumflex' : 226, - 'Aring' : 197, - 'k' : 107, - 'smalltilde' : 732, - 'Agrave' : 192, - 'divide' : 247, - 'ocircumflex' : 244, - 'asciitilde' : 126, - 'two' : 50, - 'E' : 69, - 'scaron' : 353, - 'F' : 70, - 'bracketleft' : 91, - 'asciicircum' : 94, - 'f' : 102, - 'ordmasculine' : 186, - 'mu' : 181, - 'paragraph' : 182, - 'nine' : 57, - 'v' : 118, - 'guilsinglleft' : 8249, - 'backslash' : 92, - 'six' : 54, - 'A' : 65, - 'icircumflex' : 238, - 'a' : 97, - 'ogonek' : 731, - 'q' : 113, - 'oacute' : 243, - 'ograve' : 242, - 'edieresis' : 235, - 'comma' : 44, - 'otilde' : 245, - 'guillemotright' : 187, - 'ecircumflex' : 234, - 'greater' : 62, - 'uacute' : 250, - 'L' : 76, - 'bullet' : 8226, - 'cedilla' : 184, - 'ydieresis' : 255, - 'l' : 108, - 'logicalnot' : 172, - 'exclamdown' : 161, - 'endash' : 8211, - 'agrave' : 224, - 'Adieresis' : 196, - 'germandbls' : 223, - 'Odieresis' : 214, - 'space' : 32, - 'quoteright' : 8217, - 'ucircumflex' : 251, - 'G' : 71, - 'quoteleft' : 8216, - 'W' : 87, - 'Q' : 81, - 'g' : 103, - 'w' : 119, - 'question' : 63, - 'one' : 49, - 'ring' : 730, - 'figuredash' : 8210, - 'B' : 66, - 'iacute' : 237, - 'Ydieresis' : 376, - 'R' : 82, - 'b' : 98, - 'r' : 114, - 'Ccedilla' : 199, - 'minus' : 8722, - 'Lslash' : 321, - 'Uacute' : 218, - 'yacute' : 253, - 'Ucircumflex' : 219, - 'quotedbl' : 34, - 'onehalf' : 189, - 'Thorn' : 222, - 'M' : 77, - 'eight' : 56, - 'multiply' : 215, - 'grave' : 96, - 'Ocircumflex' : 212, - 'm' : 109, - 'Ugrave' : 217, - 'guilsinglright' : 8250, - 'Ntilde' : 209, - 'questiondown' : 191, - 'Atilde' : 195, - 'ccedilla' : 231, - 'Z' : 90, - 'copyright' : 169, - 'yen' : 165, - 'Eacute' : 201, - 'H' : 72, - 'X' : 88, - 'Idieresis' : 207, - 'bar' : 124, - 'h' : 104, - 'x' : 120, - 'udieresis' : 252, - 'ordfeminine' : 170, - 'braceleft' : 123, - 'macron' : 175, - 'atilde' : 227, - 'Acircumflex' : 194, - 'Oslash' : 216, - 'C' : 67, - 'quotedblleft' : 8220, - 'S' : 83, - 'exclam' : 33, - 'Zcaron' : 381, - 'equal' : 61, - 's' : 115, - 'eth' : 240, - 'Egrave' : 200, - 'hyphen' : 45, - 'period' : 46, - 'igrave' : 236, - 'colon' : 58, - 'Ecircumflex' : 202, - 'trademark' : 8482, - 'Aacute' : 193, - 'cent' : 162, - 'lslash' : 322, - 'c' : 99, - 'N' : 78, - 'breve' : 728, - 'Oacute' : 211, - 
'guillemotleft' : 171, - 'n' : 110, - 'idieresis' : 239, - 'braceright' : 125, - 'seven' : 55, - 'brokenbar' : 166, - 'ugrave' : 249, - 'periodcentered' : 183, - 'sterling' : 163, - 'I' : 73, - 'Y' : 89, - 'Eth' : 208, - 'emdash' : 8212, - 'i' : 105, - 'daggerdbl' : 8225, - 'y' : 121, - 'plusminus' : 177, - 'less' : 60, - 'Udieresis' : 220, - 'D' : 68, - 'five' : 53, - 'T' : 84, - 'oslash' : 248, - 'acute' : 180, - 'd' : 100, - 'OE' : 338, - 'Igrave' : 204, - 't' : 116, - 'parenright' : 41, - 'adieresis' : 228, - 'quotesingle' : 39, - 'twodotenleader' : 8229, - 'slash' : 47, - 'ellipsis' : 8230, - 'numbersign' : 35, - 'odieresis' : 246, - 'O' : 79, - 'oe' : 339, - 'o' : 111, - 'Edieresis' : 203, - 'plus' : 43, - 'dagger' : 8224, - 'three' : 51, - 'hungarumlaut' : 733, - 'parenleft' : 40, - 'fraction' : 8260, - 'registered' : 174, - 'J' : 74, - 'dieresis' : 168, - 'Ograve' : 210, - 'j' : 106, - 'z' : 122, - 'ae' : 230, - 'semicolon' : 59, - 'at' : 64, - 'Iacute' : 205, - 'percent' : 37, - 'bracketright' : 93, - 'AE' : 198, - 'asterisk' : 42, - 'aacute' : 225, - 'U' : 85, - 'eacute' : 233, - 'e' : 101, - 'thorn' : 254, - 'u' : 117, -} - -uni2type1 = {v: k for k, v in type12uni.items()} - -tex2uni = { - 'widehat' : 0x0302, - 'widetilde' : 0x0303, - 'widebar' : 0x0305, - 'langle' : 0x27e8, - 'rangle' : 0x27e9, - 'perp' : 0x27c2, - 'neq' : 0x2260, - 'Join' : 0x2a1d, - 'leqslant' : 0x2a7d, - 'geqslant' : 0x2a7e, - 'lessapprox' : 0x2a85, - 'gtrapprox' : 0x2a86, - 'lesseqqgtr' : 0x2a8b, - 'gtreqqless' : 0x2a8c, - 'triangleeq' : 0x225c, - 'eqslantless' : 0x2a95, - 'eqslantgtr' : 0x2a96, - 'backepsilon' : 0x03f6, - 'precapprox' : 0x2ab7, - 'succapprox' : 0x2ab8, - 'fallingdotseq' : 0x2252, - 'subseteqq' : 0x2ac5, - 'supseteqq' : 0x2ac6, - 'varpropto' : 0x221d, - 'precnapprox' : 0x2ab9, - 'succnapprox' : 0x2aba, - 'subsetneqq' : 0x2acb, - 'supsetneqq' : 0x2acc, - 'lnapprox' : 0x2ab9, - 'gnapprox' : 0x2aba, - 'longleftarrow' : 0x27f5, - 'longrightarrow' : 0x27f6, - 'longleftrightarrow' : 0x27f7, - 'Longleftarrow' : 0x27f8, - 'Longrightarrow' : 0x27f9, - 'Longleftrightarrow' : 0x27fa, - 'longmapsto' : 0x27fc, - 'leadsto' : 0x21dd, - 'dashleftarrow' : 0x290e, - 'dashrightarrow' : 0x290f, - 'circlearrowleft' : 0x21ba, - 'circlearrowright' : 0x21bb, - 'leftrightsquigarrow' : 0x21ad, - 'leftsquigarrow' : 0x219c, - 'rightsquigarrow' : 0x219d, - 'Game' : 0x2141, - 'hbar' : 0x0127, - 'hslash' : 0x210f, - 'ldots' : 0x2026, - 'vdots' : 0x22ee, - 'doteqdot' : 0x2251, - 'doteq' : 8784, - 'partial' : 8706, - 'gg' : 8811, - 'asymp' : 8781, - 'blacktriangledown' : 9662, - 'otimes' : 8855, - 'nearrow' : 8599, - 'varpi' : 982, - 'vee' : 8744, - 'vec' : 8407, - 'smile' : 8995, - 'succnsim' : 8937, - 'gimel' : 8503, - 'vert' : 124, - '|' : 8214, - 'varrho' : 1009, - 'P' : 182, - 'approxident' : 8779, - 'Swarrow' : 8665, - 'textasciicircum' : 94, - 'imageof' : 8887, - 'ntriangleleft' : 8938, - 'nleq' : 8816, - 'div' : 247, - 'nparallel' : 8742, - 'Leftarrow' : 8656, - 'lll' : 8920, - 'oiint' : 8751, - 'ngeq' : 8817, - 'Theta' : 920, - 'origof' : 8886, - 'blacksquare' : 9632, - 'solbar' : 9023, - 'neg' : 172, - 'sum' : 8721, - 'Vdash' : 8873, - 'coloneq' : 8788, - 'degree' : 176, - 'bowtie' : 8904, - 'blacktriangleright' : 9654, - 'varsigma' : 962, - 'leq' : 8804, - 'ggg' : 8921, - 'lneqq' : 8808, - 'scurel' : 8881, - 'stareq' : 8795, - 'BbbN' : 8469, - 'nLeftarrow' : 8653, - 'nLeftrightarrow' : 8654, - 'k' : 808, - 'bot' : 8869, - 'BbbC' : 8450, - 'Lsh' : 8624, - 'leftleftarrows' : 8647, - 'BbbZ' : 8484, - 'digamma' : 
989, - 'BbbR' : 8477, - 'BbbP' : 8473, - 'BbbQ' : 8474, - 'vartriangleright' : 8883, - 'succsim' : 8831, - 'wedge' : 8743, - 'lessgtr' : 8822, - 'veebar' : 8891, - 'mapsdown' : 8615, - 'Rsh' : 8625, - 'chi' : 967, - 'prec' : 8826, - 'nsubseteq' : 8840, - 'therefore' : 8756, - 'eqcirc' : 8790, - 'textexclamdown' : 161, - 'nRightarrow' : 8655, - 'flat' : 9837, - 'notin' : 8713, - 'llcorner' : 8990, - 'varepsilon' : 949, - 'bigtriangleup' : 9651, - 'aleph' : 8501, - 'dotminus' : 8760, - 'upsilon' : 965, - 'Lambda' : 923, - 'cap' : 8745, - 'barleftarrow' : 8676, - 'mu' : 956, - 'boxplus' : 8862, - 'mp' : 8723, - 'circledast' : 8859, - 'tau' : 964, - 'in' : 8712, - 'backslash' : 92, - 'varnothing' : 8709, - 'sharp' : 9839, - 'eqsim' : 8770, - 'gnsim' : 8935, - 'Searrow' : 8664, - 'updownarrows' : 8645, - 'heartsuit' : 9825, - 'trianglelefteq' : 8884, - 'ddag' : 8225, - 'sqsubseteq' : 8849, - 'mapsfrom' : 8612, - 'boxbar' : 9707, - 'sim' : 8764, - 'Nwarrow' : 8662, - 'nequiv' : 8802, - 'succ' : 8827, - 'vdash' : 8866, - 'Leftrightarrow' : 8660, - 'parallel' : 8741, - 'invnot' : 8976, - 'natural' : 9838, - 'ss' : 223, - 'uparrow' : 8593, - 'nsim' : 8769, - 'hookrightarrow' : 8618, - 'Equiv' : 8803, - 'approx' : 8776, - 'Vvdash' : 8874, - 'nsucc' : 8833, - 'leftrightharpoons' : 8651, - 'Re' : 8476, - 'boxminus' : 8863, - 'equiv' : 8801, - 'Lleftarrow' : 8666, - 'll' : 8810, - 'Cup' : 8915, - 'measeq' : 8798, - 'upharpoonleft' : 8639, - 'lq' : 8216, - 'Upsilon' : 933, - 'subsetneq' : 8842, - 'greater' : 62, - 'supsetneq' : 8843, - 'Cap' : 8914, - 'L' : 321, - 'spadesuit' : 9824, - 'lrcorner' : 8991, - 'not' : 824, - 'bar' : 772, - 'rightharpoonaccent' : 8401, - 'boxdot' : 8865, - 'l' : 322, - 'leftharpoondown' : 8637, - 'bigcup' : 8899, - 'iint' : 8748, - 'bigwedge' : 8896, - 'downharpoonleft' : 8643, - 'textasciitilde' : 126, - 'subset' : 8834, - 'leqq' : 8806, - 'mapsup' : 8613, - 'nvDash' : 8877, - 'looparrowleft' : 8619, - 'nless' : 8814, - 'rightarrowbar' : 8677, - 'Vert' : 8214, - 'downdownarrows' : 8650, - 'uplus' : 8846, - 'simeq' : 8771, - 'napprox' : 8777, - 'ast' : 8727, - 'twoheaduparrow' : 8607, - 'doublebarwedge' : 8966, - 'Sigma' : 931, - 'leftharpoonaccent' : 8400, - 'ntrianglelefteq' : 8940, - 'nexists' : 8708, - 'times' : 215, - 'measuredangle' : 8737, - 'bumpeq' : 8783, - 'carriagereturn' : 8629, - 'adots' : 8944, - 'checkmark' : 10003, - 'lambda' : 955, - 'xi' : 958, - 'rbrace' : 125, - 'rbrack' : 93, - 'Nearrow' : 8663, - 'maltese' : 10016, - 'clubsuit' : 9827, - 'top' : 8868, - 'overarc' : 785, - 'varphi' : 966, - 'Delta' : 916, - 'iota' : 953, - 'nleftarrow' : 8602, - 'candra' : 784, - 'supset' : 8835, - 'triangleleft' : 9665, - 'gtreqless' : 8923, - 'ntrianglerighteq' : 8941, - 'quad' : 8195, - 'Xi' : 926, - 'gtrdot' : 8919, - 'leftthreetimes' : 8907, - 'minus' : 8722, - 'preccurlyeq' : 8828, - 'nleftrightarrow' : 8622, - 'lambdabar' : 411, - 'blacktriangle' : 9652, - 'kernelcontraction' : 8763, - 'Phi' : 934, - 'angle' : 8736, - 'spadesuitopen' : 9828, - 'eqless' : 8924, - 'mid' : 8739, - 'varkappa' : 1008, - 'Ldsh' : 8626, - 'updownarrow' : 8597, - 'beta' : 946, - 'textquotedblleft' : 8220, - 'rho' : 961, - 'alpha' : 945, - 'intercal' : 8890, - 'beth' : 8502, - 'grave' : 768, - 'acwopencirclearrow' : 8634, - 'nmid' : 8740, - 'nsupset' : 8837, - 'sigma' : 963, - 'dot' : 775, - 'Rightarrow' : 8658, - 'turnednot' : 8985, - 'backsimeq' : 8909, - 'leftarrowtail' : 8610, - 'approxeq' : 8778, - 'curlyeqsucc' : 8927, - 'rightarrowtail' : 8611, - 'Psi' : 936, - 'copyright' : 169, - 
'yen' : 165, - 'vartriangleleft' : 8882, - 'rasp' : 700, - 'triangleright' : 9655, - 'precsim' : 8830, - 'infty' : 8734, - 'geq' : 8805, - 'updownarrowbar' : 8616, - 'precnsim' : 8936, - 'H' : 779, - 'ulcorner' : 8988, - 'looparrowright' : 8620, - 'ncong' : 8775, - 'downarrow' : 8595, - 'circeq' : 8791, - 'subseteq' : 8838, - 'bigstar' : 9733, - 'prime' : 8242, - 'lceil' : 8968, - 'Rrightarrow' : 8667, - 'oiiint' : 8752, - 'curlywedge' : 8911, - 'vDash' : 8872, - 'lfloor' : 8970, - 'ddots' : 8945, - 'exists' : 8707, - 'underbar' : 817, - 'Pi' : 928, - 'leftrightarrows' : 8646, - 'sphericalangle' : 8738, - 'coprod' : 8720, - 'circledcirc' : 8858, - 'gtrsim' : 8819, - 'gneqq' : 8809, - 'between' : 8812, - 'theta' : 952, - 'complement' : 8705, - 'arceq' : 8792, - 'nVdash' : 8878, - 'S' : 167, - 'wr' : 8768, - 'wp' : 8472, - 'backcong' : 8780, - 'lasp' : 701, - 'c' : 807, - 'nabla' : 8711, - 'dotplus' : 8724, - 'eta' : 951, - 'forall' : 8704, - 'eth' : 240, - 'colon' : 58, - 'sqcup' : 8852, - 'rightrightarrows' : 8649, - 'sqsupset' : 8848, - 'mapsto' : 8614, - 'bigtriangledown' : 9661, - 'sqsupseteq' : 8850, - 'propto' : 8733, - 'pi' : 960, - 'pm' : 177, - 'dots' : 0x2026, - 'nrightarrow' : 8603, - 'textasciiacute' : 180, - 'Doteq' : 8785, - 'breve' : 774, - 'sqcap' : 8851, - 'twoheadrightarrow' : 8608, - 'kappa' : 954, - 'vartriangle' : 9653, - 'diamondsuit' : 9826, - 'pitchfork' : 8916, - 'blacktriangleleft' : 9664, - 'nprec' : 8832, - 'curvearrowright' : 8631, - 'barwedge' : 8892, - 'multimap' : 8888, - 'textquestiondown' : 191, - 'cong' : 8773, - 'rtimes' : 8906, - 'rightzigzagarrow' : 8669, - 'rightarrow' : 8594, - 'leftarrow' : 8592, - '__sqrt__' : 8730, - 'twoheaddownarrow' : 8609, - 'oint' : 8750, - 'bigvee' : 8897, - 'eqdef' : 8797, - 'sterling' : 163, - 'phi' : 981, - 'Updownarrow' : 8661, - 'backprime' : 8245, - 'emdash' : 8212, - 'Gamma' : 915, - 'i' : 305, - 'rceil' : 8969, - 'leftharpoonup' : 8636, - 'Im' : 8465, - 'curvearrowleft' : 8630, - 'wedgeq' : 8793, - 'curlyeqprec' : 8926, - 'questeq' : 8799, - 'less' : 60, - 'upuparrows' : 8648, - 'tilde' : 771, - 'textasciigrave' : 96, - 'smallsetminus' : 8726, - 'ell' : 8467, - 'cup' : 8746, - 'danger' : 9761, - 'nVDash' : 8879, - 'cdotp' : 183, - 'cdots' : 8943, - 'hat' : 770, - 'eqgtr' : 8925, - 'psi' : 968, - 'frown' : 8994, - 'acute' : 769, - 'downzigzagarrow' : 8623, - 'ntriangleright' : 8939, - 'cupdot' : 8845, - 'circleddash' : 8861, - 'oslash' : 8856, - 'mho' : 8487, - 'd' : 803, - 'sqsubset' : 8847, - 'cdot' : 8901, - 'Omega' : 937, - 'OE' : 338, - 'veeeq' : 8794, - 'Finv' : 8498, - 't' : 865, - 'leftrightarrow' : 8596, - 'swarrow' : 8601, - 'rightthreetimes' : 8908, - 'rightleftharpoons' : 8652, - 'lesssim' : 8818, - 'searrow' : 8600, - 'because' : 8757, - 'gtrless' : 8823, - 'star' : 8902, - 'nsubset' : 8836, - 'zeta' : 950, - 'dddot' : 8411, - 'bigcirc' : 9675, - 'Supset' : 8913, - 'circ' : 8728, - 'slash' : 8725, - 'ocirc' : 778, - 'prod' : 8719, - 'twoheadleftarrow' : 8606, - 'daleth' : 8504, - 'upharpoonright' : 8638, - 'odot' : 8857, - 'Uparrow' : 8657, - 'O' : 216, - 'hookleftarrow' : 8617, - 'trianglerighteq' : 8885, - 'nsime' : 8772, - 'oe' : 339, - 'nwarrow' : 8598, - 'o' : 248, - 'ddddot' : 8412, - 'downharpoonright' : 8642, - 'succcurlyeq' : 8829, - 'gamma' : 947, - 'scrR' : 8475, - 'dag' : 8224, - 'thickspace' : 8197, - 'frakZ' : 8488, - 'lessdot' : 8918, - 'triangledown' : 9663, - 'ltimes' : 8905, - 'scrB' : 8492, - 'endash' : 8211, - 'scrE' : 8496, - 'scrF' : 8497, - 'scrH' : 8459, - 'scrI' : 8464, - 
'rightharpoondown' : 8641, - 'scrL' : 8466, - 'scrM' : 8499, - 'frakC' : 8493, - 'nsupseteq' : 8841, - 'circledR' : 174, - 'circledS' : 9416, - 'ngtr' : 8815, - 'bigcap' : 8898, - 'scre' : 8495, - 'Downarrow' : 8659, - 'scrg' : 8458, - 'overleftrightarrow' : 8417, - 'scro' : 8500, - 'lnsim' : 8934, - 'eqcolon' : 8789, - 'curlyvee' : 8910, - 'urcorner' : 8989, - 'lbrace' : 123, - 'Bumpeq' : 8782, - 'delta' : 948, - 'boxtimes' : 8864, - 'overleftarrow' : 8406, - 'prurel' : 8880, - 'clubsuitopen' : 9831, - 'cwopencirclearrow' : 8635, - 'geqq' : 8807, - 'rightleftarrows' : 8644, - 'aa' : 229, - 'ac' : 8766, - 'ae' : 230, - 'int' : 8747, - 'rfloor' : 8971, - 'risingdotseq' : 8787, - 'nvdash' : 8876, - 'diamond' : 8900, - 'ddot' : 776, - 'backsim' : 8765, - 'oplus' : 8853, - 'triangleq' : 8796, - 'check' : 780, - 'ni' : 8715, - 'iiint' : 8749, - 'ne' : 8800, - 'lesseqgtr' : 8922, - 'obar' : 9021, - 'supseteq' : 8839, - 'nu' : 957, - 'AA' : 197, - 'AE' : 198, - 'models' : 8871, - 'ominus' : 8854, - 'dashv' : 8867, - 'omega' : 969, - 'rq' : 8217, - 'Subset' : 8912, - 'rightharpoonup' : 8640, - 'Rdsh' : 8627, - 'bullet' : 8729, - 'divideontimes' : 8903, - 'lbrack' : 91, - 'textquotedblright' : 8221, - 'Colon' : 8759, - '%' : 37, - '$' : 36, - '{' : 123, - '}' : 125, - '_' : 95, - '#' : 35, - 'imath' : 0x131, - 'circumflexaccent' : 770, - 'combiningbreve' : 774, - 'combiningoverline' : 772, - 'combininggraveaccent' : 768, - 'combiningacuteaccent' : 769, - 'combiningdiaeresis' : 776, - 'combiningtilde' : 771, - 'combiningrightarrowabove' : 8407, - 'combiningdotabove' : 775, - 'combiningthreedotsabove' : 8411, - 'combiningfourdotsabove' : 8412, - 'to' : 8594, - 'succeq' : 8829, - 'emptyset' : 8709, - 'leftparen' : 40, - 'rightparen' : 41, - 'bigoplus' : 10753, - 'leftangle' : 10216, - 'rightangle' : 10217, - 'leftbrace' : 124, - 'rightbrace' : 125, - 'jmath' : 567, - 'bigodot' : 10752, - 'preceq' : 8828, - 'biguplus' : 10756, - 'epsilon' : 949, - 'vartheta' : 977, - 'bigotimes' : 10754, - 'guillemotleft' : 171, - 'ring' : 730, - 'Thorn' : 222, - 'guilsinglright' : 8250, - 'perthousand' : 8240, - 'macron' : 175, - 'cent' : 162, - 'guillemotright' : 187, - 'equal' : 61, - 'asterisk' : 42, - 'guilsinglleft' : 8249, - 'plus' : 43, - 'thorn' : 254, - 'dagger' : 8224 -} - -# Each element is a 4-tuple of the form: -# src_start, src_end, dst_font, dst_start -# -stix_virtual_fonts = { - 'bb': - { - 'rm': - [ - (0x0030, 0x0039, 'rm', 0x1d7d8), # 0-9 - (0x0041, 0x0042, 'rm', 0x1d538), # A-B - (0x0043, 0x0043, 'rm', 0x2102), # C - (0x0044, 0x0047, 'rm', 0x1d53b), # D-G - (0x0048, 0x0048, 'rm', 0x210d), # H - (0x0049, 0x004d, 'rm', 0x1d540), # I-M - (0x004e, 0x004e, 'rm', 0x2115), # N - (0x004f, 0x004f, 'rm', 0x1d546), # O - (0x0050, 0x0051, 'rm', 0x2119), # P-Q - (0x0052, 0x0052, 'rm', 0x211d), # R - (0x0053, 0x0059, 'rm', 0x1d54a), # S-Y - (0x005a, 0x005a, 'rm', 0x2124), # Z - (0x0061, 0x007a, 'rm', 0x1d552), # a-z - (0x0393, 0x0393, 'rm', 0x213e), # \Gamma - (0x03a0, 0x03a0, 'rm', 0x213f), # \Pi - (0x03a3, 0x03a3, 'rm', 0x2140), # \Sigma - (0x03b3, 0x03b3, 'rm', 0x213d), # \gamma - (0x03c0, 0x03c0, 'rm', 0x213c), # \pi - ], - 'it': - [ - (0x0030, 0x0039, 'rm', 0x1d7d8), # 0-9 - (0x0041, 0x0042, 'it', 0xe154), # A-B - (0x0043, 0x0043, 'it', 0x2102), # C - (0x0044, 0x0044, 'it', 0x2145), # D - (0x0045, 0x0047, 'it', 0xe156), # E-G - (0x0048, 0x0048, 'it', 0x210d), # H - (0x0049, 0x004d, 'it', 0xe159), # I-M - (0x004e, 0x004e, 'it', 0x2115), # N - (0x004f, 0x004f, 'it', 0xe15e), # O - (0x0050, 0x0051, 'it', 
0x2119), # P-Q - (0x0052, 0x0052, 'it', 0x211d), # R - (0x0053, 0x0059, 'it', 0xe15f), # S-Y - (0x005a, 0x005a, 'it', 0x2124), # Z - (0x0061, 0x0063, 'it', 0xe166), # a-c - (0x0064, 0x0065, 'it', 0x2146), # d-e - (0x0066, 0x0068, 'it', 0xe169), # f-h - (0x0069, 0x006a, 'it', 0x2148), # i-j - (0x006b, 0x007a, 'it', 0xe16c), # k-z - (0x0393, 0x0393, 'it', 0x213e), # \Gamma (not in beta STIX fonts) - (0x03a0, 0x03a0, 'it', 0x213f), # \Pi - (0x03a3, 0x03a3, 'it', 0x2140), # \Sigma (not in beta STIX fonts) - (0x03b3, 0x03b3, 'it', 0x213d), # \gamma (not in beta STIX fonts) - (0x03c0, 0x03c0, 'it', 0x213c), # \pi - ], - 'bf': - [ - (0x0030, 0x0039, 'rm', 0x1d7d8), # 0-9 - (0x0041, 0x0042, 'bf', 0xe38a), # A-B - (0x0043, 0x0043, 'bf', 0x2102), # C - (0x0044, 0x0044, 'bf', 0x2145), # D - (0x0045, 0x0047, 'bf', 0xe38d), # E-G - (0x0048, 0x0048, 'bf', 0x210d), # H - (0x0049, 0x004d, 'bf', 0xe390), # I-M - (0x004e, 0x004e, 'bf', 0x2115), # N - (0x004f, 0x004f, 'bf', 0xe395), # O - (0x0050, 0x0051, 'bf', 0x2119), # P-Q - (0x0052, 0x0052, 'bf', 0x211d), # R - (0x0053, 0x0059, 'bf', 0xe396), # S-Y - (0x005a, 0x005a, 'bf', 0x2124), # Z - (0x0061, 0x0063, 'bf', 0xe39d), # a-c - (0x0064, 0x0065, 'bf', 0x2146), # d-e - (0x0066, 0x0068, 'bf', 0xe3a2), # f-h - (0x0069, 0x006a, 'bf', 0x2148), # i-j - (0x006b, 0x007a, 'bf', 0xe3a7), # k-z - (0x0393, 0x0393, 'bf', 0x213e), # \Gamma - (0x03a0, 0x03a0, 'bf', 0x213f), # \Pi - (0x03a3, 0x03a3, 'bf', 0x2140), # \Sigma - (0x03b3, 0x03b3, 'bf', 0x213d), # \gamma - (0x03c0, 0x03c0, 'bf', 0x213c), # \pi - ], - }, - 'cal': - [ - (0x0041, 0x005a, 'it', 0xe22d), # A-Z - ], - 'frak': - { - 'rm': - [ - (0x0041, 0x0042, 'rm', 0x1d504), # A-B - (0x0043, 0x0043, 'rm', 0x212d), # C - (0x0044, 0x0047, 'rm', 0x1d507), # D-G - (0x0048, 0x0048, 'rm', 0x210c), # H - (0x0049, 0x0049, 'rm', 0x2111), # I - (0x004a, 0x0051, 'rm', 0x1d50d), # J-Q - (0x0052, 0x0052, 'rm', 0x211c), # R - (0x0053, 0x0059, 'rm', 0x1d516), # S-Y - (0x005a, 0x005a, 'rm', 0x2128), # Z - (0x0061, 0x007a, 'rm', 0x1d51e), # a-z - ], - 'bf': - [ - (0x0041, 0x005a, 'bf', 0x1d56c), # A-Z - (0x0061, 0x007a, 'bf', 0x1d586), # a-z - ], - }, - 'scr': - [ - (0x0041, 0x0041, 'it', 0x1d49c), # A - (0x0042, 0x0042, 'it', 0x212c), # B - (0x0043, 0x0044, 'it', 0x1d49e), # C-D - (0x0045, 0x0046, 'it', 0x2130), # E-F - (0x0047, 0x0047, 'it', 0x1d4a2), # G - (0x0048, 0x0048, 'it', 0x210b), # H - (0x0049, 0x0049, 'it', 0x2110), # I - (0x004a, 0x004b, 'it', 0x1d4a5), # J-K - (0x004c, 0x004c, 'it', 0x2112), # L - (0x004d, 0x004d, 'it', 0x2133), # M - (0x004e, 0x0051, 'it', 0x1d4a9), # N-Q - (0x0052, 0x0052, 'it', 0x211b), # R - (0x0053, 0x005a, 'it', 0x1d4ae), # S-Z - (0x0061, 0x0064, 'it', 0x1d4b6), # a-d - (0x0065, 0x0065, 'it', 0x212f), # e - (0x0066, 0x0066, 'it', 0x1d4bb), # f - (0x0067, 0x0067, 'it', 0x210a), # g - (0x0068, 0x006e, 'it', 0x1d4bd), # h-n - (0x006f, 0x006f, 'it', 0x2134), # o - (0x0070, 0x007a, 'it', 0x1d4c5), # p-z - ], - 'sf': - { - 'rm': - [ - (0x0030, 0x0039, 'rm', 0x1d7e2), # 0-9 - (0x0041, 0x005a, 'rm', 0x1d5a0), # A-Z - (0x0061, 0x007a, 'rm', 0x1d5ba), # a-z - (0x0391, 0x03a9, 'rm', 0xe17d), # \Alpha-\Omega - (0x03b1, 0x03c9, 'rm', 0xe196), # \alpha-\omega - (0x03d1, 0x03d1, 'rm', 0xe1b0), # theta variant - (0x03d5, 0x03d5, 'rm', 0xe1b1), # phi variant - (0x03d6, 0x03d6, 'rm', 0xe1b3), # pi variant - (0x03f1, 0x03f1, 'rm', 0xe1b2), # rho variant - (0x03f5, 0x03f5, 'rm', 0xe1af), # lunate epsilon - (0x2202, 0x2202, 'rm', 0xe17c), # partial differential - ], - 'it': - [ - # These numerals are actually 
upright. We don't actually - # want italic numerals ever. - (0x0030, 0x0039, 'rm', 0x1d7e2), # 0-9 - (0x0041, 0x005a, 'it', 0x1d608), # A-Z - (0x0061, 0x007a, 'it', 0x1d622), # a-z - (0x0391, 0x03a9, 'rm', 0xe17d), # \Alpha-\Omega - (0x03b1, 0x03c9, 'it', 0xe1d8), # \alpha-\omega - (0x03d1, 0x03d1, 'it', 0xe1f2), # theta variant - (0x03d5, 0x03d5, 'it', 0xe1f3), # phi variant - (0x03d6, 0x03d6, 'it', 0xe1f5), # pi variant - (0x03f1, 0x03f1, 'it', 0xe1f4), # rho variant - (0x03f5, 0x03f5, 'it', 0xe1f1), # lunate epsilon - ], - 'bf': - [ - (0x0030, 0x0039, 'bf', 0x1d7ec), # 0-9 - (0x0041, 0x005a, 'bf', 0x1d5d4), # A-Z - (0x0061, 0x007a, 'bf', 0x1d5ee), # a-z - (0x0391, 0x03a9, 'bf', 0x1d756), # \Alpha-\Omega - (0x03b1, 0x03c9, 'bf', 0x1d770), # \alpha-\omega - (0x03d1, 0x03d1, 'bf', 0x1d78b), # theta variant - (0x03d5, 0x03d5, 'bf', 0x1d78d), # phi variant - (0x03d6, 0x03d6, 'bf', 0x1d78f), # pi variant - (0x03f0, 0x03f0, 'bf', 0x1d78c), # kappa variant - (0x03f1, 0x03f1, 'bf', 0x1d78e), # rho variant - (0x03f5, 0x03f5, 'bf', 0x1d78a), # lunate epsilon - (0x2202, 0x2202, 'bf', 0x1d789), # partial differential - (0x2207, 0x2207, 'bf', 0x1d76f), # \Nabla - ], - }, - 'tt': - [ - (0x0030, 0x0039, 'rm', 0x1d7f6), # 0-9 - (0x0041, 0x005a, 'rm', 0x1d670), # A-Z - (0x0061, 0x007a, 'rm', 0x1d68a) # a-z - ], - } - - -# Fix some incorrect glyphs. -stix_glyph_fixes = { - # Cap and Cup glyphs are swapped. - 0x22d2: 0x22d3, - 0x22d3: 0x22d2, -} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/StrConverter.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/StrConverter.py deleted file mode 100644 index a62d4981dc79201214dc926eaa6a4c74ffcba078..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/StrConverter.py +++ /dev/null @@ -1,97 +0,0 @@ -"""StrConverter module containing class StrConverter.""" - -import numpy as np - -import matplotlib.units as units - -__all__ = ['StrConverter'] - - -class StrConverter(units.ConversionInterface): - """ - A Matplotlib converter class for string data values. - - Valid units for string are: - - 'indexed' : Values are indexed as they are specified for plotting. - - 'sorted' : Values are sorted alphanumerically. - - 'inverted' : Values are inverted so that the first value is on top. - - 'sorted-inverted' : A combination of 'sorted' and 'inverted' - """ - - @staticmethod - def axisinfo(unit, axis): - # docstring inherited - return None - - @staticmethod - def convert(value, unit, axis): - # docstring inherited - - if value == []: - return [] - - # we delay loading to make matplotlib happy - ax = axis.axes - if axis is ax.xaxis: - isXAxis = True - else: - isXAxis = False - - axis.get_major_ticks() - ticks = axis.get_ticklocs() - labels = axis.get_ticklabels() - - labels = [l.get_text() for l in labels if l.get_text()] - - if not labels: - ticks = [] - labels = [] - - if not np.iterable(value): - value = [value] - - newValues = [] - for v in value: - if v not in labels and v not in newValues: - newValues.append(v) - - labels.extend(newValues) - - # DISABLED: This is disabled because matplotlib bar plots do not - # DISABLED: recalculate the unit conversion of the data values - # DISABLED: this is due to design and is not really a bug. - # DISABLED: If this gets changed, then we can activate the following - # DISABLED: block of code. Note that this works for line plots. 
- # DISABLED if unit: - # DISABLED if unit.find("sorted") > -1: - # DISABLED labels.sort() - # DISABLED if unit.find("inverted") > -1: - # DISABLED labels = labels[::-1] - - # add padding (so they do not appear on the axes themselves) - labels = [''] + labels + [''] - ticks = list(range(len(labels))) - ticks[0] = 0.5 - ticks[-1] = ticks[-1] - 0.5 - - axis.set_ticks(ticks) - axis.set_ticklabels(labels) - # we have to do the following lines to make ax.autoscale_view work - loc = axis.get_major_locator() - loc.set_bounds(ticks[0], ticks[-1]) - - if isXAxis: - ax.set_xlim(ticks[0], ticks[-1]) - else: - ax.set_ylim(ticks[0], ticks[-1]) - - result = [ticks[labels.index(v)] for v in value] - - ax.viewLim.ignore(-1) - return result - - @staticmethod - def default_units(value, axis): - # docstring inherited - # The default behavior for string indexing. - return "indexed" diff --git a/spaces/langvision/codellama-34b-chat/app.py b/spaces/langvision/codellama-34b-chat/app.py deleted file mode 100644 index 16fa36fed274b1c312ea9966d6158652e73e3980..0000000000000000000000000000000000000000 --- a/spaces/langvision/codellama-34b-chat/app.py +++ /dev/null @@ -1,254 +0,0 @@ -import os -from typing import Iterator - -import gradio as gr - -from model import run - -HF_PUBLIC = os.environ.get("HF_PUBLIC", False) - -DEFAULT_SYSTEM_PROMPT = "You are CodeLlama. You are AI-assistant, you are polite, give only truthful information and are based on the CodeLLaMA-34B model from Meta. You can communicate in different languages equally well." -MAX_MAX_NEW_TOKENS = 4096 -DEFAULT_MAX_NEW_TOKENS = 1024 -MAX_INPUT_TOKEN_LENGTH = 4000 - -DESCRIPTION = """ -# 🧑‍💻🦙 CodeLlama-34B Chat - -💻 This Space demonstrates model [CodeLlama-34b-Instruct](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) by Meta, a Code Llama model with 34B parameters fine-tuned for chat instructions and specialized on code tasks. Feel free to play with it, or duplicate to run generations without a queue! If you want to run your own service, you can also [deploy the model on Inference Endpoints](https://huggingface.co/inference-endpoints). - -🔎 For more details about the Code Llama family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/codellama) or [the paper](https://huggingface.co/papers/2308.12950). - -🏃🏻 Check out our [Playground](https://huggingface.co/spaces/codellama/codellama-playground) for a super-fast code completion demo that leverages a streaming [inference endpoint](https://huggingface.co/inference-endpoints). 
- -""" - -def clear_and_save_textbox(message: str) -> tuple[str, str]: - return '', message - - -def display_input(message: str, - history: list[tuple[str, str]]) -> list[tuple[str, str]]: - history.append((message, '')) - return history - - -def delete_prev_fn( - history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]: - try: - message, _ = history.pop() - except IndexError: - message = '' - return history, message or '' - - -def generate( - message: str, - history_with_input: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int, - temperature: float, - top_p: float, - top_k: int, -) -> Iterator[list[tuple[str, str]]]: - if max_new_tokens > MAX_MAX_NEW_TOKENS: - raise ValueError - - history = history_with_input[:-1] - generator = run(message, history, system_prompt, max_new_tokens, temperature, top_p, top_k) - try: - first_response = next(generator) - yield history + [(message, first_response)] - except StopIteration: - yield history + [(message, '')] - for response in generator: - yield history + [(message, response)] - - -def process_example(message: str) -> tuple[str, list[tuple[str, str]]]: - generator = generate(message, [], DEFAULT_SYSTEM_PROMPT, 1024, 1, 0.95, 50) - for x in generator: - pass - return '', x - - -def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None: - input_token_length = len(message) + len(chat_history) - if input_token_length > MAX_INPUT_TOKEN_LENGTH: - raise gr.Error(f'The accumulated input is too long ({input_token_length} > {MAX_INPUT_TOKEN_LENGTH}). Clear your chat history and try again.') - - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton(value='Duplicate Space for private use', - elem_id='duplicate-button') - - with gr.Group(): - chatbot = gr.Chatbot(label='Playground') - with gr.Row(): - textbox = gr.Textbox( - container=False, - show_label=False, - placeholder='Hi, CodeLlama!', - scale=10, - ) - submit_button = gr.Button('Submit', - variant='primary', - scale=1, - min_width=0) - with gr.Row(): - retry_button = gr.Button('🔄 Retry', variant='secondary') - undo_button = gr.Button('↩️ Undo', variant='secondary') - clear_button = gr.Button('🗑️ Clear', variant='secondary') - - saved_input = gr.State() - - with gr.Accordion(label='⚙️ Advanced options', open=False): - system_prompt = gr.Textbox(label='System prompt', - value=DEFAULT_SYSTEM_PROMPT, - lines=5, - interactive=False) - max_new_tokens = gr.Slider( - label='Max new tokens', - minimum=1, - maximum=MAX_MAX_NEW_TOKENS, - step=1, - value=DEFAULT_MAX_NEW_TOKENS, - ) - temperature = gr.Slider( - label='Temperature', - minimum=0.1, - maximum=4.0, - step=0.1, - value=0.1, - ) - top_p = gr.Slider( - label='Top-p (nucleus sampling)', - minimum=0.05, - maximum=1.0, - step=0.05, - value=0.9, - ) - top_k = gr.Slider( - label='Top-k', - minimum=1, - maximum=1000, - step=1, - value=10, - ) - - - - textbox.submit( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - button_event_preprocess = submit_button.click( - 
fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - retry_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - undo_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=lambda x: x, - inputs=[saved_input], - outputs=textbox, - api_name=False, - queue=False, - ) - - clear_button.click( - fn=lambda: ([], ''), - outputs=[chatbot, saved_input], - queue=False, - api_name=False, - ) - -demo.queue(max_size=32).launch(share=HF_PUBLIC, show_api=True) diff --git a/spaces/librarian-bots/dashboard/app.py b/spaces/librarian-bots/dashboard/app.py deleted file mode 100644 index a87399d05f1a3c498c4a603abe85afeb25ca9ebb..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/dashboard/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import os -from datetime import datetime, timedelta -from functools import lru_cache -from typing import Any, List - -import gradio as gr -import httpx -import pandas as pd -import plotly.express as px -import polars as pl -from cachetools import TTLCache, cached -from datasets import Dataset, load_dataset -from dotenv import load_dotenv -from httpx import Client -from toolz import concat, frequencies -from tqdm.auto import tqdm - -load_dotenv() -token = os.environ["HUGGINGFACE_TOKEN"] -user_agent = os.environ["USER_AGENT"] -user = os.environ["USER_TO_TRACK"] -os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" -assert token -assert user_agent -assert user - -headers = {"user-agent": user_agent, "authorization": f"Bearer {token}"} -limits = httpx.Limits(max_keepalive_connections=10, max_connections=20) -client = Client(headers=headers, http2=True, limits=limits, timeout=60.0) - - -@lru_cache(maxsize=None) -def get_hub_community_activity(user: str, max: int = 200_000) -> List[Any]: - with tqdm() as pbar: - all_data = [] - i = 1 - while i <= max: - try: - r = client.get( - f"https://huggingface.co/api/recent-activity?limit=100&type=discussion&skip={i}&user={user}", - ) - activity = r.json()["recentActivity"] - if not activity: - break - all_data.append(activity) - if len(all_data) % 1000 == 0: - # print(f"Length of all_data: {len(all_data)}") - pbar.write(f"Length of all_data: {len(all_data)}") - i += 100 - pbar.update(100) - except Exception as e: - print(e) - continue - - return list(concat(all_data)) - - -# def get_hub_community_activity(user: str) -> List[Any]: -# all_data = [] -# for i in range(1, 2000, 100): -# r = httpx.get( -# f"https://huggingface.co/api/recent-activity?limit=100&type=discussion&skip={i}&user={user}" -# ) -# activity = r.json()["recentActivity"] -# all_data.append(activity) -# return list(concat(all_data)) - - -def parse_date_time(date_time: str) -> 
datetime: - return datetime.strptime(date_time, "%Y-%m-%dT%H:%M:%S.%fZ") - - -def parse_pr_data(data): - data = data["discussionData"] - createdAt = parse_date_time(data["createdAt"]) - pr_number = data["num"] - status = data["status"] - repo_id = data["repo"]["name"] - repo_type = data["repo"]["type"] - isPullRequest = data["isPullRequest"] - return { - "createdAt": createdAt, - "pr_number": pr_number, - "status": status, - "repo_id": repo_id, - "type": repo_type, - "isPullRequest": isPullRequest, - } - - -@cached(cache=TTLCache(maxsize=1000, ttl=timedelta(minutes=30), timer=datetime.now)) -def update_data(): - try: - previous_df = pl.DataFrame( - load_dataset(f"librarian-bot/{user}-stats", split="train").data.table - ) - except FileNotFoundError: - previous_df = pl.DataFrame() - data = get_hub_community_activity(user) - data = [parse_pr_data(d) for d in data] - update_df = pl.DataFrame(data) - df = pl.concat([previous_df, update_df]).unique() - if len(df) != len(previous_df): - Dataset(df.to_arrow()).push_to_hub(f"{user}-stats", token=token) - return df - - -# def get_pr_status(): -# df = update_data() -# df = df.filter(pl.col("isPullRequest") is True) -# return df.select(pl.col("status").value_counts()) -# # return frequencies(x["status"] for x in pr_data) - - -@lru_cache(maxsize=512) -def get_pr_status(user: str): - all_data = get_hub_community_activity(user) - pr_data = ( - x["discussionData"] for x in all_data if x["discussionData"]["isPullRequest"] - ) - return frequencies(x["status"] for x in pr_data) - - -def create_pie(): - frequencies = get_pr_status(user) - df = pd.DataFrame({"status": frequencies.keys(), "number": frequencies.values()}) - return px.pie(df, values="number", names="status", template="seaborn") - - -def group_status_by_pr_number(): - all_data = get_hub_community_activity(user) - all_data = [parse_pr_data(d) for d in all_data] - return ( - pl.DataFrame(all_data).groupby("status").agg(pl.mean("pr_number")).to_pandas() - ) - - -def plot_over_time(): - all_data = get_hub_community_activity(user) - all_data = [parse_pr_data(d) for d in all_data] - df = pl.DataFrame(all_data).with_columns(pl.col("createdAt").cast(pl.Date)) - df = df.pivot( - values=["status"], - index=["createdAt"], - columns=["status"], - aggregate_function="count", - ) - df = df.fill_null(0) - df = df.with_columns(pl.sum(["open", "closed", "merged"])).sort("createdAt") - df = df.to_pandas().set_index("createdAt").cumsum() - return px.line(df, x=df.index, y=[c for c in df.columns if c != "sum"]) - - -create_pie() - -with gr.Blocks() as demo: - # frequencies = get_pr_status("librarian-bot") - gr.Markdown(f"# {user} PR Stats") - gr.Markdown(f"Total prs and issues opened by {user}: {len(update_data()):,}") - # gr.Markdown(f"Total PRs opened: {sum(frequencies.values())}") - with gr.Column(): - gr.Markdown("## Pull requests status") - gr.Markdown( - "The below pie chart shows the percentage of pull requests made by" - " librarian bot that are open, closed or merged" - ) - gr.Plot(create_pie()) - with gr.Column(): - gr.Markdown("Pull requests opened, closed and merged over time (cumulative)") - gr.Plot(plot_over_time()) - with gr.Column(): - gr.Markdown("## Pull requests status by PR number") - gr.DataFrame(group_status_by_pr_number()) -demo.launch(debug=True) diff --git a/spaces/lilucheng/sourcedetection/common/config.py b/spaces/lilucheng/sourcedetection/common/config.py deleted file mode 100644 index 500e60f831139206a5fefe9f9f9cc2ae91a06f83..0000000000000000000000000000000000000000 --- 
a/spaces/lilucheng/sourcedetection/common/config.py +++ /dev/null @@ -1,22 +0,0 @@ -from rea_python.main.database import RedshiftHook, PostgresHook, DBCopyMode -from rea_python.constants import OutputFormat -from rea_python.main.aws import get_secret - -#---------------------------------------------------------- - -rs_hook = RedshiftHook \ -( - iam_role_arn = "arn:aws:iam::051694948699:role/prod-redshift-aws-access", - via_s3_bucket = 'dev-misc-usage', - via_s3_folder = 'redshift-copy' -) - -rs_hook.set_conn_from_uri(get_secret('prod/redshift/pipeline/db_conn_uri')) - -#---------------------------------------------------------- - -ps_hook = PostgresHook() -ps_hook.set_conn_from_uri(get_secret("prod/data-staging-db/pipeline/db_conn_uri")) - - - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Fostex Mr8 Mkii Software 15 BEST.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Fostex Mr8 Mkii Software 15 BEST.md deleted file mode 100644 index b6474b378fbb892f1d6b1fc165578857ecbde45b..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Fostex Mr8 Mkii Software 15 BEST.md +++ /dev/null @@ -1,158 +0,0 @@ -
        -

        Fostex Mr8 Mkii Software 15: How to Upgrade and Manage Your Digital Multitracker

        - -

        Do you own a Fostex MR-8 MkII digital multitracker? If so, you might be interested in Fostex Mr8 Mkii Software 15, a software update file that contains the latest firmware version for your device. Firmware is the software that controls the operation of your device. By updating the firmware, you can enjoy new features and improvements on your device.

        -

        Fostex Mr8 Mkii Software 15


        Download File ✸✸✸ https://bytlly.com/2uGwcN



        - -

But that's not all. Fostex Mr8 Mkii Software 15 also enables you to transfer, edit, and mix your tracks using the free WAV Manager software on your PC or Mac. This way, you can back up your songs, edit them on your computer, or burn them to a CD using an external CD-R/RW drive.

        - -

        In this article, we will show you what Fostex Mr8 Mkii Software 15 is, how to update the firmware of your Fostex MR-8 MkII with it, and how to transfer, edit, and mix your tracks using a free WAV manager software on your PC or Mac.

        - -

        What is Fostex Mr8 Mkii Software 15?

        - -

        Fostex Mr8 Mkii Software 15 is a file that contains the latest firmware version for the Fostex MR-8 MkII digital multitracker. The Fostex MR-8 MkII is a compact and portable device that can record up to 8 tracks of high-quality digital audio on a CompactFlash card. It has various features and functions that make it a versatile tool for recording and mixing your music.

        - -

        Some of the features and functions of the Fostex MR-8 MkII are:

        -

        - -
          -
        • 8 tracks of recording and playback using CompactFlash cards
        • -
        • 2 track simultaneous recording
        • -
        • Superb in-built digital effects inc Mic and Amp simulations
        • -
• USB Host function for CD burning with an 'off-the-shelf' CD burner
        • -
        • 2 mic inputs with +48V Phantom Power
        • -
        • Built-in microphone for handy song memo
        • -
        • Mastering effects for stereo buss
        • -
        • USB port for stereo WAV file transfer to and from PC
        • -
        • AC and Battery (6 x AA alkaline cells) operation
        • -
• Free “WAV Manager” software available for transferring multiple mono files to and from a PC
        • -
        - -

        You can download Fostex Mr8 Mkii Software 15 from the Fostex website for free. The file has a .zip extension (e.g. MR8HDCD_V105.zip) and you need to unzip it on your PC or Mac. Then, you will get a file with a .MOT extension (e.g. MR8CV105.MOT) that contains the firmware update.

        - -

        How to Update the Firmware of Your Fostex MR-8 MkII with Fostex Mr8 Mkii Software 15?

        - -

        To update the firmware of your Fostex MR-8 MkII with Fostex Mr8 Mkii Software 15, you need to follow these steps:

        - -
          -
        1. Download Fostex Mr8 Mkii Software 15 from the Fostex website and unzip it on your PC or Mac.
        2. -
        3. Copy the file with .MOT extension (e.g. MR8CV105.MOT) to the root directory of a formatted CompactFlash card.
        4. -
        5. Insert the card into the Fostex MR-8 MkII and turn on the power while holding down the [REC MODE] key.
        6. -
        7. The device will start updating the firmware automatically. Do not turn off the power or remove the card during the update process.
        8. -
        9. When the update is completed, the device will reboot and display the new firmware version on the screen.
        10. -
        - -

        You can check the current firmware version of your device by pressing [MENU] > [SYSTEM] > [VERSION]. You can also find more information about the firmware update procedures in the user manual or on the Fostex website.

        - -

By updating the firmware, you can enjoy new features and improvements on your device. For example, the latest firmware version (v1.05) lets you record up to 400 minutes per song (normal mode) or 800 minutes per song (extended mode), and adds new effects and functions. Some of the new features and improvements are:

        - -
          -
        • A new dial knob for menu selection
        • -
        • +48V phantom power on both mic inputs
        • -
        • USB host function for CD burning with an external CD-R/RW drive
        • -
        • Improved built-in digital effects and simulations
        • -
        • Analog guitar distortion with a dedicated knob on input A
        • -
        • Mastering effects for stereo buss
        • -
        - -

        How to Transfer, Edit, and Mix Your Tracks with Fostex Mr8 Mkii Software 15?

        - -

Fostex Mr8 Mkii Software 15 also enables you to transfer, edit, and mix your tracks using the free WAV Manager software on your PC or Mac. The WAV Manager software is a utility that lets you import and export your tracks as standard mono WAV files (eight files for an 8-track song). You can also perform various operations on your WAV files, such as cutting, copying, pasting, deleting, inserting, adjusting, applying effects, creating new songs, and writing CDs.

        - -

        To transfer, edit, and mix your tracks with Fostex Mr8 Mkii Software 15, you need to follow these steps:

        - -
          -
        1. Download and install the free WAV manager software from the Fostex website on your PC or Mac.
        2. -
        3. Connect the Fostex MR-8 MkII to your computer using a USB cable and select [USB] as the [REC MODE]. The device will appear as a removable disk on your computer.
        4. -
        5. Use the WAV manager software to import or export your tracks as WAV files. You can also use the software's interface to edit each track individually or mix them together. You can preview your changes in real time using the playback function.
        6. -
        7. When you are satisfied with your results, you can export your tracks back to the Fostex MR-8 MkII or write them on a CD using an external CD-R/RW drive.
        8. -
        - -

        You can find more information about the WAV manager software in its user manual or on the Fostex website.

        - -

        Conclusion

        - -

        Fostex Mr8 Mkii Software 15 is a powerful tool that enhances the functionality of the Fostex MR-8 MkII digital multitracker. By using this software, you can update the firmware of your device, transfer your tracks to and from your PC or Mac, and edit and mix your songs using a free WAV manager software. The Fostex Mr8 Mkii Software 15 is easy to use and compatible with Windows XP/Vista/7/8/10 and Mac OS X 10.4/10.5/10.6/10.7/10.8/10.9/10.10/10.11.

        - -

        If you are interested in getting Fostex Mr8 Mkii Software 15 for your Fostex MR-8 MkII digital multitracker, you can download it from here. You can also find more information about the Fostex MR-8 MkII digital multitracker here.

        -

        Why Should You Use Fostex Mr8 Mkii Software 15?

        - -

        There are many benefits of using Fostex Mr8 Mkii Software 15 for your Fostex MR-8 MkII digital multitracker. Here are some of them:

        - -
          -
        • You can keep your device up to date with the latest firmware version and enjoy new features and improvements.
        • -
        • You can transfer your tracks to and from your PC or Mac easily and quickly as standard mono WAV files.
        • -
• You can edit and mix your tracks on your PC or Mac using the free WAV Manager software, which offers various functions and effects.
        • -
• You can back up your songs, edit them on your computer, or burn them to a CD using an external CD-R/RW drive.
        • -
        • You can enhance the functionality and versatility of your device and make the most of its features and functions.
        • -
        - -

        Fostex Mr8 Mkii Software 15 is a powerful tool that enhances the functionality of the Fostex MR-8 MkII digital multitracker. It is easy to use and compatible with Windows XP/Vista/7/8/10 and Mac OS X 10.4/10.5/10.6/10.7/10.8/10.9/10.10/10.11.

        - -

        How to Get Fostex Mr8 Mkii Software 15?

        - -

        If you want to get Fostex Mr8 Mkii Software 15 for your Fostex MR-8 MkII digital multitracker, you can download it from here. You can also find more information about the Fostex MR-8 MkII digital multitracker here.

        - -

Fostex Mr8 Mkii Software 15 is a software update file that contains the latest firmware version for your device. By updating the firmware, you can enjoy new features and improvements on your device. But that's not all. Fostex Mr8 Mkii Software 15 also enables you to transfer, edit, and mix your tracks using the free WAV Manager software on your PC or Mac.

        - -

        If you are looking for a simple and affordable way to record your music, you might want to check out the Fostex MR-8 MkII digital multitracker and Fostex Mr8 Mkii Software 15. They are powerful tools that will help you capture your inspiration whenever and wherever you are.

        -


        How to Download and Install Fostex Mr8 Mkii Software 15?

        - -

        If you want to download and install Fostex Mr8 Mkii Software 15 for your Fostex MR-8 MkII digital multitracker, you need to follow these steps:

        - -
          -
        1. Go to the Fostex website and find the Multitracker Software Updates page.
        2. -
        3. Choose the Fostex MR-8 MkII digital multitracker from the list of products.
        4. -
        5. Download Fostex Mr8 Mkii Software 15 from the link provided. The file has a .zip extension (e.g. MR8HDCD_V105.zip).
        6. -
        7. Unzip the file on your PC or Mac. You will get a file with a .MOT extension (e.g. MR8CV105.MOT) that contains the firmware update.
        8. -
        - -

        You can also find more information about the software update procedures in the technical bulletin or on the Fostex website.

        - -

        How to Use Fostex Mr8 Mkii Software 15?

        - -

        To use Fostex Mr8 Mkii Software 15 for your Fostex MR-8 MkII digital multitracker, you need to follow these steps:

        - -
          -
        1. Copy the file with .MOT extension (e.g. MR8CV105.MOT) to the root directory of a formatted CompactFlash card.
        2. -
        3. Insert the card into the Fostex MR-8 MkII and turn on the power while holding down the [REC MODE] key.
        4. -
        5. The device will start updating the firmware automatically. Do not turn off the power or remove the card during the update process.
        6. -
        7. When the update is completed, the device will reboot and display the new firmware version on the screen.
        8. -
        9. Connect the Fostex MR-8 MkII to your PC or Mac using a USB cable and select [USB] as the [REC MODE]. The device will appear as a removable disk on your computer.
        10. -
        11. Download and install the free WAV manager software from the Fostex website on your PC or Mac.
        12. -
        13. Use the WAV manager software to import or export your tracks as WAV files. You can also use the software's interface to edit each track individually or mix them together. You can preview your changes in real time using the playback function.
        14. -
        15. When you are satisfied with your results, you can export your tracks back to the Fostex MR-8 MkII or write them on a CD using an external CD-R/RW drive.
        16. -
        - -

        You can find more information about how to use Fostex Mr8 Mkii Software 15 in its user manual or on the Fostex website.

        - -

        Conclusion

        - -

Fostex Mr8 Mkii Software 15 is a software update file that contains the latest firmware version for your Fostex MR-8 MkII digital multitracker. By updating the firmware, you can enjoy new features and improvements on your device. But that's not all. Fostex Mr8 Mkii Software 15 also enables you to transfer, edit, and mix your tracks using the free WAV Manager software on your PC or Mac.

        - -

        If you are looking for a simple and affordable way to record your music, you might want to check out the Fostex MR-8 MkII digital multitracker and Fostex Mr8 Mkii Software 15. They are powerful tools that will help you capture your inspiration whenever and wherever you are.

        - -

        If you want to get Fostex Mr8 Mkii Software 15 for your Fostex MR-8 MkII digital multitracker, you can download it from here. You can also find more information about the Fostex MR-8 MkII digital multitracker here.

        -


        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/IObit Uninstaller 9 Pro Crack Full Version (Latest 2019).md b/spaces/lincquiQcaudo/Top-20-Diffusion/IObit Uninstaller 9 Pro Crack Full Version (Latest 2019).md deleted file mode 100644 index 26b3b659f2b1608f1d0bce79593c54340b18b58e..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/IObit Uninstaller 9 Pro Crack Full Version (Latest 2019).md +++ /dev/null @@ -1,6 +0,0 @@ -

        IObit Uninstaller 9 Pro Crack Full Version (Latest 2019)


        Download Filehttps://bytlly.com/2uGy6Q



        -
-IObit Uninstaller Pro Crack is one of the best software uninstaller tools. It can uninstall programs such as .NET components, Internet Explorer, Firefox, Windows Media Player, Adobe Reader, Netscape, AOL, Windows Messenger, Outlook Express, iTunes, Zune, QuickTime, Total Commander, other Internet browsers and even Apple Safari. IObit Uninstaller Pro Crack is extremely efficient and works very fast without slowing down your computer, which is why many people keep using it after installing it. Now you can download the free version of IObit Uninstaller Pro Crack. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/lixq/bingo61/src/components/learn-more.tsx b/spaces/lixq/bingo61/src/components/learn-more.tsx deleted file mode 100644 index a64459ee7900a612292e117a6bda96ee9260990f..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/components/learn-more.tsx +++ /dev/null @@ -1,39 +0,0 @@ -import React from 'react' -import { SourceAttribution } from '@/lib/bots/bing/types' - -export interface LearnMoreProps { - sourceAttributions?: SourceAttribution[] -} - -export function LearnMore({ sourceAttributions }: LearnMoreProps) { - if (!sourceAttributions?.length) { - return null - } - - return ( -
        -
        了解详细信息:
        -
        -
        - {sourceAttributions.map((attribution, index) => { - const { providerDisplayName, seeMoreUrl } = attribution - const { host } = new URL(seeMoreUrl) - return ( - - {index + 1}. {host} - - ) - })} -
        -
        -
        - ) -} diff --git a/spaces/ljjggr/bingo/src/components/settings.tsx b/spaces/ljjggr/bingo/src/components/settings.tsx deleted file mode 100644 index 45ba6044ff9cbe584f62292a49ea2ace9acc1f48..0000000000000000000000000000000000000000 --- a/spaces/ljjggr/bingo/src/components/settings.tsx +++ /dev/null @@ -1,157 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0') - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - 设置你的用户信息 - - 请使用 Edge 浏览器 - - 打开并登录 Bing - - ,然后再打开 - Challenge 接口 - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 -
        - 图文示例: - 如何获取 BING_HEADER - - -
        - -
        - setCurlValue(e.target.value)} - /> -
        - 身份信息仅用于画图(推荐) - setImageOnly(checked)} - > - - -
        - - - - - - - -
        - ) - } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - 语音设置 - - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - - - -
        - 启用语音回答 - setEnableTTS(checked)} - > - - -
        - - - - -
        -
        - ) - } - return null -} diff --git a/spaces/lterriel/YOLOv5_medieval_register/app.py b/spaces/lterriel/YOLOv5_medieval_register/app.py deleted file mode 100644 index 50ec04375aa3a982bb07891c7e7b00bd2f4bade4..0000000000000000000000000000000000000000 --- a/spaces/lterriel/YOLOv5_medieval_register/app.py +++ /dev/null @@ -1,52 +0,0 @@ -""" -Gradio APP for Yolov5 medieval registers layout analysis. - -Date: 20/12/2022 -Author: lterriel -""" -import glob - -import gradio as gr -import yolov5 -from PIL import Image -# import torch -from huggingface_hub import hf_hub_download - -# Add new models here: -model_names = [ - "lterriel/endp-yolov5x-35e-bs4", -] - -def load_model(model_name): - model_path = hf_hub_download(repo_id=model_name, filename="best.pt") - return model_path - -def yolo_inference(im, model_path, threshold=0.50): - model_loaded = load_model(model_path) - model = yolov5.load(model_loaded) - model.conf = threshold - results = model(im) # inference - numpy_image = results.render()[0] - output_image = Image.fromarray(numpy_image) - return output_image - -title = "YOLOv5 - Medieval Register Segmentation" -description = "
        YOLOv5 Gradio demo for medieval register layout analysis.

        " - -inputs = [gr.Image(type="pil", label="document image"), - gr.inputs.Dropdown(model_names, label="Model", default=model_names[0]), - gr.Slider(maximum=1, step=0.01, value=0.50)] - - -examples = [[str(file),model_names[0], 0.50] for file in glob.glob("./images_examples/*.jpg")] - -demo=gr.Interface(fn=yolo_inference, - inputs=inputs, - outputs=gr.Image(type="pil", label="annotated document").style(height=800), - title=title, - description=description, - theme="huggingface", - examples=examples) - -if __name__ == "__main__": - demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/ltgoslo/ssa-perin/utility/cross_entropy.py b/spaces/ltgoslo/ssa-perin/utility/cross_entropy.py deleted file mode 100644 index 656bc84ac52142ac8ce99fe67649956929d669ea..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/utility/cross_entropy.py +++ /dev/null @@ -1,46 +0,0 @@ -#!/usr/bin/env python3 -# coding=utf-8 - -import torch -import torch.nn.functional as F - - -def masked_sum(loss, mask, label_weight=1, eps=1e-8, reduction=True): - if mask is not None: - loss = loss.masked_fill(mask, 0.0) - if reduction: - return loss.sum() / (((1 - mask.long()) * label_weight).sum() + eps) - - if reduction: - return loss.mean() - - return loss - - -def cross_entropy(log_prob, target, mask, focal=False, label_weight=None, reduction=True): - target = target.unsqueeze(-1) - if focal: - focal_coeff = log_prob.exp().gather(-1, target).squeeze(-1) - focal_coeff = (1.0 - focal_coeff) ** 2 - else: - focal_coeff = 1.0 - - loss = -focal_coeff * log_prob.gather(-1, target).squeeze(-1) - - if label_weight is not None: - loss = loss * label_weight - return masked_sum(loss, mask, label_weight=label_weight, reduction=reduction) - else: - return masked_sum(loss, mask, reduction=reduction) - - -def binary_cross_entropy(logits, target, mask, focal=False, reduction=True): - if focal: - prob = logits.sigmoid() - focal_coeff = target * prob + (1.0 - target) * (1.0 - prob) - focal_coeff = (1.0 - focal_coeff) ** 2 - else: - focal_coeff = 1.0 - - loss = focal_coeff * F.binary_cross_entropy_with_logits(logits, target, reduction="none") - return masked_sum(loss, mask, reduction=reduction) diff --git a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/__init__.py b/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ma-xu/LIVE/pybind11/tools/FindPythonLibsNew.cmake b/spaces/ma-xu/LIVE/pybind11/tools/FindPythonLibsNew.cmake deleted file mode 100644 index c1c72c763c6cec6f2fa517f549a553d550ba49d0..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tools/FindPythonLibsNew.cmake +++ /dev/null @@ -1,255 +0,0 @@ -# - Find python libraries -# This module finds the libraries corresponding to the Python interpreter -# FindPythonInterp provides. -# This code sets the following variables: -# -# PYTHONLIBS_FOUND - have the Python libs been found -# PYTHON_PREFIX - path to the Python installation -# PYTHON_LIBRARIES - path to the python library -# PYTHON_INCLUDE_DIRS - path to where Python.h is found -# PYTHON_MODULE_EXTENSION - lib extension, e.g. 
'.so' or '.pyd' -# PYTHON_MODULE_PREFIX - lib name prefix: usually an empty string -# PYTHON_SITE_PACKAGES - path to installation site-packages -# PYTHON_IS_DEBUG - whether the Python interpreter is a debug build -# -# Thanks to talljimbo for the patch adding the 'LDVERSION' config -# variable usage. - -#============================================================================= -# Copyright 2001-2009 Kitware, Inc. -# Copyright 2012 Continuum Analytics, Inc. -# -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# -# * Neither the names of Kitware, Inc., the Insight Software Consortium, -# nor the names of their contributors may be used to endorse or promote -# products derived from this software without specific prior written -# permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#============================================================================= - -# Checking for the extension makes sure that `LibsNew` was found and not just `Libs`. -if(PYTHONLIBS_FOUND AND PYTHON_MODULE_EXTENSION) - return() -endif() - -if(PythonLibsNew_FIND_QUIETLY) - set(_pythonlibs_quiet QUIET) -endif() - -if(PythonLibsNew_FIND_REQUIRED) - set(_pythonlibs_required REQUIRED) -endif() - -# Check to see if the `python` command is present and from a virtual -# environment, conda, or GHA activation - if it is, try to use that. - -if(NOT DEFINED PYTHON_EXECUTABLE) - if(DEFINED ENV{VIRTUAL_ENV}) - find_program( - PYTHON_EXECUTABLE python - PATHS "$ENV{VIRTUAL_ENV}" "$ENV{VIRTUAL_ENV}/bin" - NO_DEFAULT_PATH) - elseif(DEFINED ENV{CONDA_PREFIX}) - find_program( - PYTHON_EXECUTABLE python - PATHS "$ENV{CONDA_PREFIX}" "$ENV{CONDA_PREFIX}/bin" - NO_DEFAULT_PATH) - elseif(DEFINED ENV{pythonLocation}) - find_program( - PYTHON_EXECUTABLE python - PATHS "$ENV{pythonLocation}" "$ENV{pythonLocation}/bin" - NO_DEFAULT_PATH) - endif() - if(NOT PYTHON_EXECUTABLE) - unset(PYTHON_EXECUTABLE) - endif() -endif() - -# Use the Python interpreter to find the libs. 
-if(NOT PythonLibsNew_FIND_VERSION) - set(PythonLibsNew_FIND_VERSION "") -endif() - -find_package(PythonInterp ${PythonLibsNew_FIND_VERSION} ${_pythonlibs_required} - ${_pythonlibs_quiet}) - -if(NOT PYTHONINTERP_FOUND) - set(PYTHONLIBS_FOUND FALSE) - set(PythonLibsNew_FOUND FALSE) - return() -endif() - -# According to https://stackoverflow.com/questions/646518/python-how-to-detect-debug-interpreter -# testing whether sys has the gettotalrefcount function is a reliable, cross-platform -# way to detect a CPython debug interpreter. -# -# The library suffix is from the config var LDVERSION sometimes, otherwise -# VERSION. VERSION will typically be like "2.7" on unix, and "27" on windows. -execute_process( - COMMAND - "${PYTHON_EXECUTABLE}" "-c" "from distutils import sysconfig as s;import sys;import struct; -print('.'.join(str(v) for v in sys.version_info)); -print(sys.prefix); -print(s.get_python_inc(plat_specific=True)); -print(s.get_python_lib(plat_specific=True)); -print(s.get_config_var('SO')); -print(hasattr(sys, 'gettotalrefcount')+0); -print(struct.calcsize('@P')); -print(s.get_config_var('LDVERSION') or s.get_config_var('VERSION')); -print(s.get_config_var('LIBDIR') or ''); -print(s.get_config_var('MULTIARCH') or ''); -" - RESULT_VARIABLE _PYTHON_SUCCESS - OUTPUT_VARIABLE _PYTHON_VALUES - ERROR_VARIABLE _PYTHON_ERROR_VALUE) - -if(NOT _PYTHON_SUCCESS MATCHES 0) - if(PythonLibsNew_FIND_REQUIRED) - message(FATAL_ERROR "Python config failure:\n${_PYTHON_ERROR_VALUE}") - endif() - set(PYTHONLIBS_FOUND FALSE) - set(PythonLibsNew_FOUND FALSE) - return() -endif() - -# Convert the process output into a list -if(WIN32) - string(REGEX REPLACE "\\\\" "/" _PYTHON_VALUES ${_PYTHON_VALUES}) -endif() -string(REGEX REPLACE ";" "\\\\;" _PYTHON_VALUES ${_PYTHON_VALUES}) -string(REGEX REPLACE "\n" ";" _PYTHON_VALUES ${_PYTHON_VALUES}) -list(GET _PYTHON_VALUES 0 _PYTHON_VERSION_LIST) -list(GET _PYTHON_VALUES 1 PYTHON_PREFIX) -list(GET _PYTHON_VALUES 2 PYTHON_INCLUDE_DIR) -list(GET _PYTHON_VALUES 3 PYTHON_SITE_PACKAGES) -list(GET _PYTHON_VALUES 4 PYTHON_MODULE_EXTENSION) -list(GET _PYTHON_VALUES 5 PYTHON_IS_DEBUG) -list(GET _PYTHON_VALUES 6 PYTHON_SIZEOF_VOID_P) -list(GET _PYTHON_VALUES 7 PYTHON_LIBRARY_SUFFIX) -list(GET _PYTHON_VALUES 8 PYTHON_LIBDIR) -list(GET _PYTHON_VALUES 9 PYTHON_MULTIARCH) - -# Make sure the Python has the same pointer-size as the chosen compiler -# Skip if CMAKE_SIZEOF_VOID_P is not defined -if(CMAKE_SIZEOF_VOID_P AND (NOT "${PYTHON_SIZEOF_VOID_P}" STREQUAL "${CMAKE_SIZEOF_VOID_P}")) - if(PythonLibsNew_FIND_REQUIRED) - math(EXPR _PYTHON_BITS "${PYTHON_SIZEOF_VOID_P} * 8") - math(EXPR _CMAKE_BITS "${CMAKE_SIZEOF_VOID_P} * 8") - message(FATAL_ERROR "Python config failure: Python is ${_PYTHON_BITS}-bit, " - "chosen compiler is ${_CMAKE_BITS}-bit") - endif() - set(PYTHONLIBS_FOUND FALSE) - set(PythonLibsNew_FOUND FALSE) - return() -endif() - -# The built-in FindPython didn't always give the version numbers -string(REGEX REPLACE "\\." 
";" _PYTHON_VERSION_LIST ${_PYTHON_VERSION_LIST}) -list(GET _PYTHON_VERSION_LIST 0 PYTHON_VERSION_MAJOR) -list(GET _PYTHON_VERSION_LIST 1 PYTHON_VERSION_MINOR) -list(GET _PYTHON_VERSION_LIST 2 PYTHON_VERSION_PATCH) -set(PYTHON_VERSION "${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}.${PYTHON_VERSION_PATCH}") - -# Make sure all directory separators are '/' -string(REGEX REPLACE "\\\\" "/" PYTHON_PREFIX "${PYTHON_PREFIX}") -string(REGEX REPLACE "\\\\" "/" PYTHON_INCLUDE_DIR "${PYTHON_INCLUDE_DIR}") -string(REGEX REPLACE "\\\\" "/" PYTHON_SITE_PACKAGES "${PYTHON_SITE_PACKAGES}") - -if(CMAKE_HOST_WIN32) - set(PYTHON_LIBRARY "${PYTHON_PREFIX}/libs/python${PYTHON_LIBRARY_SUFFIX}.lib") - - # when run in a venv, PYTHON_PREFIX points to it. But the libraries remain in the - # original python installation. They may be found relative to PYTHON_INCLUDE_DIR. - if(NOT EXISTS "${PYTHON_LIBRARY}") - get_filename_component(_PYTHON_ROOT ${PYTHON_INCLUDE_DIR} DIRECTORY) - set(PYTHON_LIBRARY "${_PYTHON_ROOT}/libs/python${PYTHON_LIBRARY_SUFFIX}.lib") - endif() - - # if we are in MSYS & MINGW, and we didn't find windows python lib, look for system python lib - if(DEFINED ENV{MSYSTEM} - AND MINGW - AND NOT EXISTS "${PYTHON_LIBRARY}") - if(PYTHON_MULTIARCH) - set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}/${PYTHON_MULTIARCH}" "${PYTHON_LIBDIR}") - else() - set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}") - endif() - unset(PYTHON_LIBRARY) - find_library( - PYTHON_LIBRARY - NAMES "python${PYTHON_LIBRARY_SUFFIX}" - PATHS ${_PYTHON_LIBS_SEARCH} - NO_DEFAULT_PATH) - endif() - - # raise an error if the python libs are still not found. - if(NOT EXISTS "${PYTHON_LIBRARY}") - message(FATAL_ERROR "Python libraries not found") - endif() - -else() - if(PYTHON_MULTIARCH) - set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}/${PYTHON_MULTIARCH}" "${PYTHON_LIBDIR}") - else() - set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}") - endif() - #message(STATUS "Searching for Python libs in ${_PYTHON_LIBS_SEARCH}") - # Probably this needs to be more involved. It would be nice if the config - # information the python interpreter itself gave us were more complete. - find_library( - PYTHON_LIBRARY - NAMES "python${PYTHON_LIBRARY_SUFFIX}" - PATHS ${_PYTHON_LIBS_SEARCH} - NO_DEFAULT_PATH) - - # If all else fails, just set the name/version and let the linker figure out the path. - if(NOT PYTHON_LIBRARY) - set(PYTHON_LIBRARY python${PYTHON_LIBRARY_SUFFIX}) - endif() -endif() - -mark_as_advanced(PYTHON_LIBRARY PYTHON_INCLUDE_DIR) - -# We use PYTHON_INCLUDE_DIR, PYTHON_LIBRARY and PYTHON_DEBUG_LIBRARY for the -# cache entries because they are meant to specify the location of a single -# library. We now set the variables listed by the documentation for this -# module. 
-set(PYTHON_INCLUDE_DIRS "${PYTHON_INCLUDE_DIR}") -set(PYTHON_LIBRARIES "${PYTHON_LIBRARY}") -if(NOT PYTHON_DEBUG_LIBRARY) - set(PYTHON_DEBUG_LIBRARY "") -endif() -set(PYTHON_DEBUG_LIBRARIES "${PYTHON_DEBUG_LIBRARY}") - -find_package_message(PYTHON "Found PythonLibs: ${PYTHON_LIBRARY}" - "${PYTHON_EXECUTABLE}${PYTHON_VERSION_STRING}") - -set(PYTHONLIBS_FOUND TRUE) -set(PythonLibsNew_FOUND TRUE) - -if(NOT PYTHON_MODULE_PREFIX) - set(PYTHON_MODULE_PREFIX "") -endif() diff --git a/spaces/maisarah1109/stock_prediction/setup.sh b/spaces/maisarah1109/stock_prediction/setup.sh deleted file mode 100644 index c8650a8b74a58d9a5f53b185fd711c5668e1cd52..0000000000000000000000000000000000000000 --- a/spaces/maisarah1109/stock_prediction/setup.sh +++ /dev/null @@ -1,13 +0,0 @@ -mkdir -p ~/.streamlit/ - -echo "\ -[general]\n\ -email = \"your-email@domain.com\"\n\ -" > ~/.streamlit/credentials.toml - -echo "\ -[server]\n\ -headless = true\n\ -enableCORS=false\n\ -port = $PORT\n\ -" > ~/.streamlit/config.toml \ No newline at end of file diff --git a/spaces/maj34/Eye-Handicapped-Service/README.md b/spaces/maj34/Eye-Handicapped-Service/README.md deleted file mode 100644 index 2b0b040bf92af43fe258dc2173855511e33493eb..0000000000000000000000000000000000000000 --- a/spaces/maj34/Eye-Handicapped-Service/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Eye Handicapped Service -emoji: 📉 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/malper/unikud/app.py b/spaces/malper/unikud/app.py deleted file mode 100644 index b38e51b7c14e0f7854a3e07a66cd9b6a0c794922..0000000000000000000000000000000000000000 --- a/spaces/malper/unikud/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import streamlit as st -from unikud.framework import Unikud - -with st.spinner('Loading UNIKUD framework...'): - u = Unikud() -st.success('Loaded!') - -text = st.text_area('Enter Hebrew text and press ctrl/command+enter to add nikud:') - -kwargs = { - 'v_thresh': st.sidebar.slider("Vowel addition threshold", min_value=0., max_value=1., value=0.5), - 'o_thresh': st.sidebar.slider("Other diacritic threshold", min_value=0., max_value=1., value=0.5), - 'd_thresh': st.sidebar.slider("Deletion threshold", min_value=0., max_value=1., value=0.5) -} - -if text: - st.write(u(text, **kwargs)) \ No newline at end of file diff --git a/spaces/marcusj83/MusicGenbruh/tests/models/test_encodec_model.py b/spaces/marcusj83/MusicGenbruh/tests/models/test_encodec_model.py deleted file mode 100644 index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000 --- a/spaces/marcusj83/MusicGenbruh/tests/models/test_encodec_model.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import random - -import numpy as np -import torch - -from audiocraft.models import EncodecModel -from audiocraft.modules import SEANetEncoder, SEANetDecoder -from audiocraft.quantization import DummyQuantizer - - -class TestEncodecModel: - - def _create_encodec_model(self, - sample_rate: int, - channels: int, - dim: int = 5, - n_filters: int = 3, - n_residual_layers: int = 1, - ratios: list = [5, 4, 3, 2], - **kwargs): - frame_rate = np.prod(ratios) - encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - quantizer = DummyQuantizer() - model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate, - sample_rate=sample_rate, channels=channels, **kwargs) - return model - - def test_model(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model = self._create_encodec_model(sample_rate, channels) - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - res = model(x) - assert res.x.shape == x.shape - - def test_model_renorm(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False) - model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True) - - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - codes, scales = model_nonorm.encode(x) - codes, scales = model_renorm.encode(x) - assert scales is not None diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/compression/encodec_musicgen_32khz.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/compression/encodec_musicgen_32khz.py deleted file mode 100644 index 9da31daa5f009f46e753601a51a06391594b8f9b..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/compression/encodec_musicgen_32khz.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Grid search file, simply list all the exp you want in `explorer`. -Any new exp added there will be scheduled. -You can cancel and experiment by commenting its line. - -This grid shows how to train a MusicGen EnCodec model at 32 kHz. 
-""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=8, partition=partitions) - # use configuration for MusicGen's EnCodec model trained on monophonic audio sampled at 32 kHz - # MusicGen's EnCodec is trained with a total stride of 640 leading to a frame rate of 50 hz - launcher.bind_(solver='compression/encodec_musicgen_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - # launch xp - launcher() - launcher({ - 'metrics.visqol.bin': '/data/home/jadecopet/local/usr/opt/visqol', - 'label': 'visqol', - 'evaluate.metrics.visqol': True - }) diff --git a/spaces/matthoffner/chatbot/components/Promptbar/Promptbar.tsx b/spaces/matthoffner/chatbot/components/Promptbar/Promptbar.tsx deleted file mode 100644 index 7e3ac60da17610e1da195fd7f042dad96980c6a8..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Promptbar/Promptbar.tsx +++ /dev/null @@ -1,152 +0,0 @@ -import { useContext, useEffect, useState } from 'react'; -import { useTranslation } from 'react-i18next'; - -import { useCreateReducer } from '@/hooks/useCreateReducer'; - -import { savePrompts } from '@/utils/app/prompts'; - -import { OpenAIModels } from '@/types/openai'; -import { Prompt } from '@/types/prompt'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { PromptFolders } from './components/PromptFolders'; -import { PromptbarSettings } from './components/PromptbarSettings'; -import { Prompts } from './components/Prompts'; - -import Sidebar from '../Sidebar'; -import PromptbarContext from './PromptBar.context'; -import { PromptbarInitialState, initialState } from './Promptbar.state'; - -import { v4 as uuidv4 } from 'uuid'; - -const Promptbar = () => { - const { t } = useTranslation('promptbar'); - - const promptBarContextValue = useCreateReducer({ - initialState, - }); - - const { - state: { prompts, defaultModelId, showPromptbar }, - dispatch: homeDispatch, - handleCreateFolder, - } = useContext(HomeContext); - - const { - state: { searchTerm, filteredPrompts }, - dispatch: promptDispatch, - } = promptBarContextValue; - - const handleTogglePromptbar = () => { - homeDispatch({ field: 'showPromptbar', value: !showPromptbar }); - localStorage.setItem('showPromptbar', JSON.stringify(!showPromptbar)); - }; - - const handleCreatePrompt = () => { - if (defaultModelId) { - const newPrompt: Prompt = { - id: uuidv4(), - name: `Prompt ${prompts.length + 1}`, - description: '', - content: '', - model: OpenAIModels[defaultModelId], - folderId: null, - }; - - const updatedPrompts = [...prompts, newPrompt]; - - homeDispatch({ field: 'prompts', value: updatedPrompts }); - - savePrompts(updatedPrompts); - } - }; - - const handleDeletePrompt = (prompt: Prompt) => { - const updatedPrompts = prompts.filter((p) => p.id !== prompt.id); - - homeDispatch({ field: 'prompts', value: updatedPrompts }); - savePrompts(updatedPrompts); - }; - - const handleUpdatePrompt = (prompt: Prompt) => { - const updatedPrompts = prompts.map((p) => { - if (p.id === prompt.id) { - return prompt; - } - - return p; - }); - homeDispatch({ field: 'prompts', value: updatedPrompts }); - - savePrompts(updatedPrompts); - }; - - const handleDrop = (e: any) => { - if (e.dataTransfer) { - const prompt = JSON.parse(e.dataTransfer.getData('prompt')); - - const updatedPrompt = { - 
...prompt, - folderId: e.target.dataset.folderId, - }; - - handleUpdatePrompt(updatedPrompt); - - e.target.style.background = 'none'; - } - }; - - useEffect(() => { - if (searchTerm) { - promptDispatch({ - field: 'filteredPrompts', - value: prompts.filter((prompt) => { - const searchable = - prompt.name.toLowerCase() + - ' ' + - prompt.description.toLowerCase() + - ' ' + - prompt.content.toLowerCase(); - return searchable.includes(searchTerm.toLowerCase()); - }), - }); - } else { - promptDispatch({ field: 'filteredPrompts', value: prompts }); - } - }, [searchTerm, prompts]); - - return ( - - - side={'right'} - isOpen={showPromptbar} - addItemButtonTitle={t('New prompt')} - itemComponent={ - !prompt.folderId)} - /> - } - folderComponent={} - items={filteredPrompts} - searchTerm={searchTerm} - handleSearchTerm={(searchTerm: string) => - promptDispatch({ field: 'searchTerm', value: searchTerm }) - } - toggleOpen={handleTogglePromptbar} - handleCreateItem={handleCreatePrompt} - handleCreateFolder={() => handleCreateFolder(t('New folder'), 'prompt')} - handleDrop={handleDrop} - /> - - ); -}; - -export default Promptbar; diff --git a/spaces/matthoffner/chatbot/utils/app/conversation.ts b/spaces/matthoffner/chatbot/utils/app/conversation.ts deleted file mode 100644 index 3fdfbcf2368802a8d867a99cf40ade44d11efba7..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/utils/app/conversation.ts +++ /dev/null @@ -1,30 +0,0 @@ -import { Conversation } from '@/types/chat'; - -export const updateConversation = ( - updatedConversation: Conversation, - allConversations: Conversation[], -) => { - const updatedConversations = allConversations.map((c) => { - if (c.id === updatedConversation.id) { - return updatedConversation; - } - - return c; - }); - - saveConversation(updatedConversation); - saveConversations(updatedConversations); - - return { - single: updatedConversation, - all: updatedConversations, - }; -}; - -export const saveConversation = (conversation: Conversation) => { - localStorage.setItem('selectedConversation', JSON.stringify(conversation)); -}; - -export const saveConversations = (conversations: Conversation[]) => { - localStorage.setItem('conversationHistory', JSON.stringify(conversations)); -}; diff --git a/spaces/maxmax20160403/sovits5.0/vits_decoder/bigv.py b/spaces/maxmax20160403/sovits5.0/vits_decoder/bigv.py deleted file mode 100644 index 029362c34b2c850cc2d59eea4410f77380d84bbe..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/sovits5.0/vits_decoder/bigv.py +++ /dev/null @@ -1,64 +0,0 @@ -import torch -import torch.nn as nn - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm -from .alias.act import SnakeAlias - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -class AMPBlock(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(AMPBlock, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - 
]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - # total number of conv layers - self.num_layers = len(self.convs1) + len(self.convs2) - - # periodic nonlinearity with snakebeta function and anti-aliasing - self.activations = nn.ModuleList([ - SnakeAlias(channels) for _ in range(self.num_layers) - ]) - - def forward(self, x): - acts1, acts2 = self.activations[::2], self.activations[1::2] - for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2): - xt = a1(x) - xt = c1(xt) - xt = a2(xt) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) \ No newline at end of file diff --git a/spaces/merve/anonymization/source/_posts/2019-10-03-fairness.html b/spaces/merve/anonymization/source/_posts/2019-10-03-fairness.html deleted file mode 100644 index e87b79e7fec2d286610661ddae8970bb7c9fe1dc..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/_posts/2019-10-03-fairness.html +++ /dev/null @@ -1,219 +0,0 @@ - ---- -permalink: /measuring-fairness/ -template: post.html - -title: Considering Model Fairness -title: Measuring Fairness -summary: There are multiple ways to measure accuracy. No matter how we build our model, accuracy across these measures will vary when applied to different groups of people. -summaryalt: There are multiple ways to assess machine learning models, such as its overall accuracy. Another important perspective to consider is the fairness of the model with respect to different groups of people or different contexts of use. -shareimg: https://pair.withgoogle.com/explorables/images/measuring-fairness.png -date: 2021-05-01 ---- - - - - - -
        -
        -
        - - -
        -

        Measuring Fairness

        - -

        How do you make sure a model works equally well for different groups of people? It turns out that in many situations, this is harder than you might think. - -

        The problem is that there are different ways to measure the accuracy of a model, and often it's mathematically impossible for them all to be equal across groups. - -

        We'll illustrate how this happens by creating a (fake) medical model to screen these people for a disease. -

        - - -
        -

        Ground Truth

        - -

        About half of these people actually have the disease; half of them don't. -

        - - -
        -

        Model Predictions

        - -

        In a perfect world, only sick people would test positive for the disease and only healthy people would test negative. -

        - - -
        -

        Model Mistakes

        - -

        But models and tests aren't perfect. - -

        The model might make a mistake and mark a sick person as healthy. - -

        Or the opposite: marking a healthy person as sick. -

        - - -

        Never Miss the Disease...

        - -

        If there's a simple follow-up test, we could have the model aggressively call close cases so it rarely misses the disease. - -

        We can quantify this by measuring the percentage of sick people who test positive. - -

        -
        - - -
        -

        ...Or Avoid Overcalling?

        - -

        On the other hand, if there isn't a secondary test, or the treatment uses a drug with a limited supply, we might care more about the percentage of people with positive tests who are actually sick. - -
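The two quantities in play here are usually called recall (the share of sick people the model flags) and precision (the share of flagged people who are actually sick). A minimal sketch with made-up scores and labels shows how the trade-off works; the numbers below are hypothetical and are not the data behind the figures.

```python
import numpy as np

# Hypothetical risk scores and true labels (1 = sick); not the data behind the visualization.
scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2, 0.1])
sick = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])

def recall_and_precision(threshold):
    flagged = scores >= threshold                    # people the model calls positive
    true_pos = np.sum(flagged & (sick == 1))         # sick people correctly flagged
    recall = true_pos / np.sum(sick == 1)            # share of sick people caught
    precision = true_pos / max(np.sum(flagged), 1)   # share of flagged people who are sick
    return recall, precision

for t in (0.3, 0.5, 0.7):
    r, p = recall_and_precision(t)
    print(f"threshold={t:.1f}  recall={r:.2f}  precision={p:.2f}")
```

Lowering the threshold (calling cases more aggressively) raises recall at the cost of precision, which is exactly the trade-off the slider below explores.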

        - -

        These issues and trade-offs in model optimization aren't new, but they're brought into focus when we have the ability to fine-tune exactly how aggressively disease is diagnosed. - -

        - - Try adjusting how aggressive the model is in diagnosing the disease -
        - - -
        -

        Subgroup Analysis

        - -

        Things get even more complicated when we check if the model treats different groups fairly.¹ - -

        Whatever we decide on in terms of trade-offs between these metrics, we'd probably like them to be roughly even across different groups of people. - -

        If we're trying to evenly allocate resources, having the model miss more cases in children than adults would be bad! ² -

        - - -
        -

        Base Rates

        - -

        If you look carefully, you'll see that the disease is more prevalent in children. That is, the "base rate" of the disease is different across groups. - -

        The fact that the base rates are different makes the situation surprisingly tricky. For one thing, even though the test catches the same percentage of sick adults and sick children, an adult who tests positive is less likely to have the disease than a child who tests positive. -
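This is Bayes' rule at work: with the test's recall and specificity held fixed, the chance that a positive result is a true positive depends on the group's base rate. A small sketch (the base rates and test accuracy below are made up for illustration):

```python
def positive_predictive_value(prevalence, recall=0.9, specificity=0.9):
    """Chance that someone who tests positive is actually sick, by Bayes' rule."""
    true_positives = recall * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Same test accuracy for both groups, different (hypothetical) base rates.
print(f"children: {positive_predictive_value(prevalence=0.40):.2f}")  # ~0.86
print(f"adults:   {positive_predictive_value(prevalence=0.10):.2f}")  # ~0.50
```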

        - - -
        -

        Imbalanced Metrics

        - -

        Why is there a disparity in diagnosing between children and adults? There is a higher proportion of well adults, so mistakes in the test will cause more well adults to be marked "positive" than well children (and similarly with mistaken negatives). - -


        -
        - -

        To fix this, we could have the model take age into account. - -

        -
        -
        - -
        -

        Try adjusting the slider to make the model grade adults less aggressively than children.
        - -
        -

        This allows us to align one metric. But now adults who have the disease are less likely to be diagnosed with it! - -

        -
        -
        - -

        No matter how you move the sliders, you won't be able to make both metrics fair at once. It turns out this is inevitable any time the base rates are different, and the test isn't perfect. - -

        There are multiple ways to define fairness mathematically. It usually isn't possible to satisfy all of them.³ -

        -
        - - -
        -
        -
        -
        - -

        Conclusion

        - -

        Thankfully, the notion of fairness you choose to satisfy will depend on the context of your model, so while it may not be possible to satisfy every definition of fairness, you can focus on the notions of fairness that make sense for your use case. - -

        Even if fairness along every dimension isn't possible, we shouldn't stop checking for bias. The Hidden Bias explorable outlines different ways human bias can feed into an ML model. - -

        More Reading

        - -

        In some contexts, setting different thresholds for different populations might not be acceptable. Can you make AI fairer than a judge? explores an algorithm that can send people to jail. - -

        There are lots of different metrics you might use to determine if an algorithm is fair. Attacking discrimination with smarter machine learning shows how several of them work. Using Fairness Indicators in conjunction with the What-If Tool and other fairness tools, you can test your own model against commonly used fairness metrics. - -

        Machine learning practitioners use words like “recall” to describe the percentage of sick people who test positive. Check out the PAIR Guidebook Glossary to learn how to talk to the people building the models. - -

        Appendix

        - -

        ¹ This essay uses very academic, mathematical standards for fairness that don't encompass everything we might include in the colloquial meaning of fairness. There's a gap between the technical descriptions of algorithms here and the social context that they're deployed in. - -

        ² Sometimes we might care more about different error modes in different populations. If treatment is riskier for children, we'd probably want the model to be less aggressive in diagnosing. - -

        ³ The above example assumes the model sorts and scores people based on how likely it is that they are sick. With complete control over the model's exact rate of under- and over-diagnosing in both groups, it's actually possible to align both of the metrics we've discussed so far. Try tweaking the model below to get both of them to line up. - -

        Adding a third metric, the percentage of well people who test negative, makes perfect fairness impossible. Can you see why all three metrics won't align unless the base rate of the disease is the same in both populations? - -
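One way to see it: writing r for recall, s for specificity and p for the base rate, Bayes' rule gives PPV = r*p / (r*p + (1-s)*(1-p)). If r and s are fixed and the test isn't perfect, that expression pins down the base rate, so all three metrics can only match across groups whose base rates match. A quick sketch that inverts the relationship (the values are purely illustrative):

```python
def implied_base_rate(recall, specificity, ppv):
    # Solve ppv = recall*p / (recall*p + (1 - specificity)*(1 - p)) for the base rate p.
    false_pos_rate = 1 - specificity
    return ppv * false_pos_rate / (recall * (1 - ppv) + ppv * false_pos_rate)

# If two groups share the same recall, specificity and positive predictive value,
# they must also share the same base rate (hypothetical values below give p = 0.25).
print(implied_base_rate(recall=0.9, specificity=0.9, ppv=0.75))
```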

        - -
        Drag ⁠— to adjust model accuracy and ⁠| to adjust the occurrence of disease
        -
        - -

        Credits

        - -

        Adam Pearce // May 2020 - -

        Thanks to Carey Radebaugh, Dan Nanas, David Weinberger, Emily Denton, Emily Reif, Fernanda Viégas, Hal Abelson, James Wexler, Kristen Olson, Lucas Dixon, Mahima Pushkarna, Martin Wattenberg, Michael Terry, Rebecca Salois, Timnit Gebru, Tulsee Doshi, Yannick Assogba, Yoni Halpern, Zan Armstrong, and my other colleagues at Google for their help with this piece. - -

        Silhouettes from ProPublica's Wee People. - -

        More Explorables

        - -

        - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/merve/data-leak/source/_posts/2021-03-03-fill-in-the-blank.md b/spaces/merve/data-leak/source/_posts/2021-03-03-fill-in-the-blank.md deleted file mode 100644 index c5a251a9297e84f8b3ed4e504ff25f19793a57c2..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/_posts/2021-03-03-fill-in-the-blank.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -template: post.html -title: What Have Language Models Learned? -summary: By asking language models to fill in the blank, we can probe their understanding of the world. -shareimg: https://pair.withgoogle.com/explorables/images/fill-in-the-blank.png -shareimgabstract: https://pair.withgoogle.com/explorables/images/fill-in-the-blank-abstract.png -permalink: /fill-in-the-blank/ -date: 2021-07-28 ---- - -Large language models are making it possible for computers to [write stories](https://openai.com/blog/better-language-models/), [program a website](https://twitter.com/sharifshameem/status/1282676454690451457) and [turn captions into images](https://openai.com/blog/dall-e/). - -One of the first of these models, [BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html), is trained by taking sentences, splitting them into individual words, randomly hiding some of them, and predicting what the hidden words are. After doing this millions of times, BERT has "read" enough Shakespeare to predict how this phrase usually ends: - -
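The same kind of probing can be reproduced locally. The page's footnote names the Hugging Face bert-large-uncased-whole-word-masking checkpoint; a minimal sketch with the transformers fill-mask pipeline (the interactive's exact sentence and display may differ):

```python
from transformers import pipeline

# Footnote ¹ names the checkpoint used by the page; loading it locally lets you ask
# the same kind of fill-in-the-blank question.
fill = pipeline("fill-mask", model="bert-large-uncased-whole-word-masking")

for prediction in fill("To be or not to be, that is the [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```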
        - -This page is hooked up to a version of BERT trained on Wikipedia and books.¹ Try clicking on different words to see how they'd be filled in or typing in another sentence to see what else has BERT picked up on. - -
        - -### Cattle or Clothes? - -Besides Hamlet's existential dread, the text BERT was trained on also contains more patterns: - -
        - -Cattle and horses aren't top purchase predictions in every state, though! In New York, some of the most likely words are clothes, books and art: - -
        - -There are more than 30,000 words, punctuation marks and word fragments in BERT's [vocabulary](https://huggingface.co/transformers/tokenizer_summary.html). Every time BERT fills in a hidden word, it assigns each of them a probability. By looking at how slightly different sentences shift those probabilities, we can get a glimpse at how purchasing patterns in different places are understood. - -
        - -You can **edit these sentences**. Or try one of these comparisons to get started: - -To the extent that a computer program can "know" something, what does BERT know about where you live? -### What's in a Name? - -This technique can also probe what associations BERT has learned about different groups of people. For example, it predicts people named Elsie are older than people named Lauren: - -
        - -It's also learned that people named Jim have more [typically masculine](https://flowingdata.com/2017/09/11/most-female-and-male-occupations-since-1950/) jobs than people named Jane: - -
        - -These aren't just spurious correlations — Elsies really are more likely to be [older](https://rhiever.github.io/name-age-calculator/) than Laurens. And occupations the model associates with feminine names are held by a [higher percentage](https://purehost.bath.ac.uk/ws/portalfiles/portal/168480066/CaliskanEtAl_authors_full.pdf ) of women. - -Should we be concerned about these correlations? BERT was trained to fill in blanks in Wikipedia articles and books — it does a great job at that! The problem is that the internal representations of language these models have learned are used for much more – by some [measures](https://super.gluebenchmark.com/leaderboard), they're the best way we have of getting computers to understand and manipulate text. - -We wouldn't hesitate to call a conversation partner or recruiter who blithely assumed that doctors are men sexist, but that's exactly what BERT might do if heedlessly incorporated into a chatbot or HR software: - -
        - -Adjusting for assumptions like this isn't trivial. *Why* machine learning systems produce a given output still isn't well understood – determining if a credit model built on top of BERT rejected a loan application because of [gender discrimation](https://pair.withgoogle.com/explorables/hidden-bias/) might be quite difficult. - -Deploying large language models at scale also risks [amplifying](https://machinesgonewrong.com/bias_i/#harms-of-representation) and [perpetuating](http://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf) today's harmful stereotypes. When [prompted](https://arxiv.org/pdf/2101.05783v1.pdf#page=3) with "Two Muslims walked into a…", for example, [GPT-3](https://en.wikipedia.org/wiki/GPT-3) typically finishes the sentence with descriptions of violence. -### How Can We Fix This? - -One conceptually straightforward approach: reduce unwanted correlations from the training data to [mitigate](https://arxiv.org/abs/1906.08976) model [bias](https://arxiv.org/abs/2005.14050). - -Last year a version of BERT called [Zari](https://ai.googleblog.com/2020/10/measuring-gendered-correlations-in-pre.html) was [trained](https://arxiv.org/pdf/2010.06032.pdf#page=6) with an additional set of generated sentences. For every sentence with a [gendered noun](https://github.com/uclanlp/corefBias/blob/master/WinoBias/wino/generalized_swaps.txt), like boy or aunt, another sentence that replaced the noun with its gender-partner was added to the training data: in addition to "The *lady* doth protest too much," Zari was also trained on "The *gentleman* doth protest too much." - -
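The augmentation step itself can be sketched in a few lines. The word list below is a toy stand-in; the actual Zari training used the full list of gendered noun pairs linked above.

```python
# Toy counterfactual augmentation: for every sentence containing a gendered word,
# also train on a copy with each gendered word swapped for its partner.
SWAPS = {"he": "she", "she": "he", "boy": "girl", "girl": "boy",
         "aunt": "uncle", "uncle": "aunt", "lady": "gentleman", "gentleman": "lady"}

def gender_swapped(sentence):
    return " ".join(SWAPS.get(word.lower(), word) for word in sentence.split())

original = "The lady doth protest too much"
training_pair = [original, gender_swapped(original)]
print(training_pair)  # ['The lady doth protest too much', 'The gentleman doth protest too much']
```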
        - -Unlike BERT, Zari assigns nurses and doctors an equal probability of being a "she" or a "he" after being trained on the swapped sentences. This approach hasn't removed all the gender correlations; because names weren't swapped, Zari's association between masculine names and doctors has only slightly decreased from BERT's. And the retraining doesn't change how the model understands nonbinary gender. - -Something similar happened with [other attempts](https://arxiv.org/abs/1607.06520) to remove gender bias from models' representations of words. It's possible to mathematically define bias and perform "brain surgery" on a model to remove it, but language is steeped in gender. Large models can have billions of parameters in which to learn stereotypes — slightly different measures of bias have found the retrained models only [shifted the stereotypes](https://www.aclweb.org/anthology/N19-1061/) around to be undetectable by the initial measure. - -As with [other applications](https://pair.withgoogle.com/explorables/measuring-fairness/) of machine learning, it's helpful to focus instead on the actual harms that could occur. Tools like [AllenNLP](https://allennlp.org/), [LMdiff](http://lmdiff.net/) and the [Language Interpretability Tool](https://pair-code.github.io/lit/) make it easier to interact with language models to find where they might be falling short. Once those shortcomings are spotted, [task specific](https://arxiv.org/abs/2004.07667) mitigation measures can be simpler to apply than modifying the entire model. - -It's also possible that as models grow more capable, they might be able to [explain](https://arxiv.org/abs/2004.14546) and perform some of this debiasing themselves. Instead of forcing the model to tell us the gender of "the doctor," we could let it respond with [uncertainty](https://arr.am/2020/07/25/gpt-3-uncertainty-prompts/) that's [shown to the user](https://ai.googleblog.com/2018/12/providing-gender-specific-translations.html) and controls to override assumptions. - -### Credits - -Adam Pearce // July 2021 - -Thanks to Ben Wedin, Emily Reif, James Wexler, Fernanda Viégas, Ian Tenney, Kellie Webster, Kevin Robinson, Lucas Dixon, Ludovic Peran, Martin Wattenberg, Michael Terry, Tolga Bolukbasi, Vinodkumar Prabhakaran, Xuezhi Wang, Yannick Assogba, and Zan Armstrong for their help with this piece. - -### Footnotes - - The BERT model used on this page is the Hugging Face version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking). "BERT" also refers to a type of model architecture; hundreds of BERT models have been [trained and published](https://huggingface.co/models?filter=bert). The model and chart code used here are available on [GitHub](https://github.com/PAIR-code/ai-explorables). - - Notice that "1800", "1900" and "2000" are some of the top predictions, though. People aren't actually more likely to be born at the start of a century, but in BERT's training corpus of books and Wikipedia articles round numbers are [more common](https://blocks.roadtolarissa.com/1wheel/cea123a8c17d51d9dacbd1c17e6fe601).

        - -Comparing BERT and Zari in this interface requires carefully tracking tokens during a transition. The [BERT Difference Plots](https://colab.research.google.com/drive/1xfPGKqjdE635cVSi-Ggt-cRBU5pyJNWP) colab has ideas for extensions to systemically look at differences between the models' output. - - This analysis shouldn't stop once a model is deployed — as language and model usage shifts, it's important to continue studying and mitigating potential harms. - - -### Appendix: Differences Over Time - -In addition to looking at how predictions for men and women are different for a given sentence, we can also chart how those differences have changed over time: - -
        - -The convergence in more recent years suggests another potential mitigation technique: using a prefix to steer the model away from unwanted correlations while preserving its understanding of natural language. - -Using "In $year" as the prefix is quite limited, though, as it doesn't handle gender-neutral pronouns and potentially [increases](https://www.pnas.org/content/pnas/115/16/E3635.full.pdf#page=8) other correlations. However, it may be possible to [find a better prefix](https://arxiv.org/abs/2104.08691) that mitigates a specific type of bias with just a [couple of dozen examples](https://www.openai.com/blog/improving-language-model-behavior/ ). - -
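A sketch of that prefix idea, reusing the fill-mask pipeline from earlier; the exact prefix wording used to build the charts above is an assumption here.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-large-uncased-whole-word-masking")

# Prepend a year and watch how the filled-in occupation shifts over time.
for year in (1908, 1958, 2018):
    top = fill(f"In {year}, she worked as a [MASK].")[:3]
    print(year, [p["token_str"] for p in top])
```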
        - -Closer examination of these differences in differences also shows there's a limit to the facts we can pull out of BERT this way. - -Below, the top row of charts shows how predicted differences in occupations between men and women change between 1908 and 2018. The rightmost chart shows the he/she difference in 1908 against the he/she difference in 2018. - -The flat slope of the rightmost chart indicates that the he/she difference has decreased for each job by about the same amount. But in reality, [shifts in occupation](https://www.weforum.org/agenda/2016/03/a-visual-history-of-gender-and-employment) weren't nearly so smooth and some occupations, like accounting, switched from being majority male to majority female. - -
        - -This reality-prediction mismatch could be caused by lack of training data, model size or the coarseness of the probing method. There's an immense amount of general knowledge inside of these models — with a little bit of focused training, they can even become expert [trivia](https://t5-trivia.glitch.me/) players. -### More Explorables - -

        - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/mfrashad/CharacterGAN/netdissect/evalablate.py b/spaces/mfrashad/CharacterGAN/netdissect/evalablate.py deleted file mode 100644 index 2079ffdb303b288df77678109f701e40fdf5779b..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/netdissect/evalablate.py +++ /dev/null @@ -1,248 +0,0 @@ -import torch, sys, os, argparse, textwrap, numbers, numpy, json, PIL -from torchvision import transforms -from torch.utils.data import TensorDataset -from netdissect.progress import default_progress, post_progress, desc_progress -from netdissect.progress import verbose_progress, print_progress -from netdissect.nethook import edit_layers -from netdissect.zdataset import standard_z_sample -from netdissect.autoeval import autoimport_eval -from netdissect.easydict import EasyDict -from netdissect.modelconfig import create_instrumented_model - -help_epilog = '''\ -Example: - -python -m netdissect.evalablate \ - --segmenter "netdissect.segmenter.UnifiedParsingSegmenter(segsizes=[256], segdiv='quad')" \ - --model "proggan.from_pth_file('models/lsun_models/${SCENE}_lsun.pth')" \ - --outdir dissect/dissectdir \ - --classes mirror coffeetable tree \ - --layers layer4 \ - --size 1000 - -Output layout: -dissectdir/layer5/ablation/mirror-iqr.json -{ class: "mirror", - classnum: 43, - pixel_total: 41342300, - class_pixels: 1234531, - layer: "layer5", - ranking: "mirror-iqr", - ablation_units: [341, 23, 12, 142, 83, ...] - ablation_pixels: [143242, 132344, 429931, ...] -} - -''' - -def main(): - # Training settings - def strpair(arg): - p = tuple(arg.split(':')) - if len(p) == 1: - p = p + p - return p - - parser = argparse.ArgumentParser(description='Ablation eval', - epilog=textwrap.dedent(help_epilog), - formatter_class=argparse.RawDescriptionHelpFormatter) - parser.add_argument('--model', type=str, default=None, - help='constructor for the model to test') - parser.add_argument('--pthfile', type=str, default=None, - help='filename of .pth file for the model') - parser.add_argument('--outdir', type=str, default='dissect', required=True, - help='directory for dissection output') - parser.add_argument('--layers', type=strpair, nargs='+', - help='space-separated list of layer names to edit' + - ', in the form layername[:reportedname]') - parser.add_argument('--classes', type=str, nargs='+', - help='space-separated list of class names to ablate') - parser.add_argument('--metric', type=str, default='iou', - help='ordering metric for selecting units') - parser.add_argument('--unitcount', type=int, default=30, - help='number of units to ablate') - parser.add_argument('--segmenter', type=str, - help='directory containing segmentation dataset') - parser.add_argument('--netname', type=str, default=None, - help='name for network in generated reports') - parser.add_argument('--batch_size', type=int, default=5, - help='batch size for forward pass') - parser.add_argument('--size', type=int, default=200, - help='number of images to test') - parser.add_argument('--no-cuda', action='store_true', default=False, - help='disables CUDA usage') - parser.add_argument('--quiet', action='store_true', default=False, - help='silences console output') - if len(sys.argv) == 1: - parser.print_usage(sys.stderr) - sys.exit(1) - args = parser.parse_args() - - # Set up console output - verbose_progress(not args.quiet) - - # Speed up pytorch - torch.backends.cudnn.benchmark = True - - # Set up CUDA - args.cuda = not args.no_cuda and 
torch.cuda.is_available() - if args.cuda: - torch.backends.cudnn.benchmark = True - - # Take defaults for model constructor etc from dissect.json settings. - with open(os.path.join(args.outdir, 'dissect.json')) as f: - dissection = EasyDict(json.load(f)) - if args.model is None: - args.model = dissection.settings.model - if args.pthfile is None: - args.pthfile = dissection.settings.pthfile - if args.segmenter is None: - args.segmenter = dissection.settings.segmenter - - # Instantiate generator - model = create_instrumented_model(args, gen=True, edit=True) - if model is None: - print('No model specified') - sys.exit(1) - - # Instantiate model - device = next(model.parameters()).device - input_shape = model.input_shape - - # 4d input if convolutional, 2d input if first layer is linear. - raw_sample = standard_z_sample(args.size, input_shape[1], seed=2).view( - (args.size,) + input_shape[1:]) - dataset = TensorDataset(raw_sample) - - # Create the segmenter - segmenter = autoimport_eval(args.segmenter) - - # Now do the actual work. - labelnames, catnames = ( - segmenter.get_label_and_category_names(dataset)) - label_category = [catnames.index(c) if c in catnames else 0 - for l, c in labelnames] - labelnum_from_name = {n[0]: i for i, n in enumerate(labelnames)} - - segloader = torch.utils.data.DataLoader(dataset, - batch_size=args.batch_size, num_workers=10, - pin_memory=(device.type == 'cuda')) - - # Index the dissection layers by layer name. - dissect_layer = {lrec.layer: lrec for lrec in dissection.layers} - - # First, collect a baseline - for l in model.ablation: - model.ablation[l] = None - - # For each sort-order, do an ablation - progress = default_progress() - for classname in progress(args.classes): - post_progress(c=classname) - for layername in progress(model.ablation): - post_progress(l=layername) - rankname = '%s-%s' % (classname, args.metric) - classnum = labelnum_from_name[classname] - try: - ranking = next(r for r in dissect_layer[layername].rankings - if r.name == rankname) - except: - print('%s not found' % rankname) - sys.exit(1) - ordering = numpy.argsort(ranking.score) - # Check if already done - ablationdir = os.path.join(args.outdir, layername, 'pixablation') - if os.path.isfile(os.path.join(ablationdir, '%s.json'%rankname)): - with open(os.path.join(ablationdir, '%s.json'%rankname)) as f: - data = EasyDict(json.load(f)) - # If the unit ordering is not the same, something is wrong - if not all(a == o - for a, o in zip(data.ablation_units, ordering)): - continue - if len(data.ablation_effects) >= args.unitcount: - continue # file already done. 
- measurements = data.ablation_effects - measurements = measure_ablation(segmenter, segloader, - model, classnum, layername, ordering[:args.unitcount]) - measurements = measurements.cpu().numpy().tolist() - os.makedirs(ablationdir, exist_ok=True) - with open(os.path.join(ablationdir, '%s.json'%rankname), 'w') as f: - json.dump(dict( - classname=classname, - classnum=classnum, - baseline=measurements[0], - layer=layername, - metric=args.metric, - ablation_units=ordering.tolist(), - ablation_effects=measurements[1:]), f) - -def measure_ablation(segmenter, loader, model, classnum, layer, ordering): - total_bincount = 0 - data_size = 0 - device = next(model.parameters()).device - progress = default_progress() - for l in model.ablation: - model.ablation[l] = None - feature_units = model.feature_shape[layer][1] - feature_shape = model.feature_shape[layer][2:] - repeats = len(ordering) - total_scores = torch.zeros(repeats + 1) - for i, batch in enumerate(progress(loader)): - z_batch = batch[0] - model.ablation[layer] = None - tensor_images = model(z_batch.to(device)) - seg = segmenter.segment_batch(tensor_images, downsample=2) - mask = (seg == classnum).max(1)[0] - downsampled_seg = torch.nn.functional.adaptive_avg_pool2d( - mask.float()[:,None,:,:], feature_shape)[:,0,:,:] - total_scores[0] += downsampled_seg.sum().cpu() - # Now we need to do an intervention for every location - # that had a nonzero downsampled_seg, if any. - interventions_needed = downsampled_seg.nonzero() - location_count = len(interventions_needed) - if location_count == 0: - continue - interventions_needed = interventions_needed.repeat(repeats, 1) - inter_z = batch[0][interventions_needed[:,0]].to(device) - inter_chan = torch.zeros(repeats, location_count, feature_units, - device=device) - for j, u in enumerate(ordering): - inter_chan[j:, :, u] = 1 - inter_chan = inter_chan.view(len(inter_z), feature_units) - inter_loc = interventions_needed[:,1:] - scores = torch.zeros(len(inter_z)) - batch_size = len(batch[0]) - for j in range(0, len(inter_z), batch_size): - ibz = inter_z[j:j+batch_size] - ibl = inter_loc[j:j+batch_size].t() - imask = torch.zeros((len(ibz),) + feature_shape, device=ibz.device) - imask[(torch.arange(len(ibz)),) + tuple(ibl)] = 1 - ibc = inter_chan[j:j+batch_size] - model.ablation[layer] = ( - imask.float()[:,None,:,:] * ibc[:,:,None,None]) - tensor_images = model(ibz) - seg = segmenter.segment_batch(tensor_images, downsample=2) - mask = (seg == classnum).max(1)[0] - downsampled_iseg = torch.nn.functional.adaptive_avg_pool2d( - mask.float()[:,None,:,:], feature_shape)[:,0,:,:] - scores[j:j+batch_size] = downsampled_iseg[ - (torch.arange(len(ibz)),) + tuple(ibl)] - scores = scores.view(repeats, location_count).sum(1) - total_scores[1:] += scores - return total_scores - -def count_segments(segmenter, loader, model): - total_bincount = 0 - data_size = 0 - progress = default_progress() - for i, batch in enumerate(progress(loader)): - tensor_images = model(z_batch.to(device)) - seg = segmenter.segment_batch(tensor_images, downsample=2) - bc = (seg + index[:, None, None, None] * self.num_classes).view(-1 - ).bincount(minlength=z_batch.shape[0] * self.num_classes) - data_size += seg.shape[0] * seg.shape[2] * seg.shape[3] - total_bincount += batch_label_counts.float().sum(0) - normalized_bincount = total_bincount / data_size - return normalized_bincount - -if __name__ == '__main__': - main() diff --git a/spaces/micahCastillo/gpt-report-analysis/app.py b/spaces/micahCastillo/gpt-report-analysis/app.py deleted file 
mode 100644 index cc399e92180553536ab43aec516328009ad65511..0000000000000000000000000000000000000000 --- a/spaces/micahCastillo/gpt-report-analysis/app.py +++ /dev/null @@ -1,207 +0,0 @@ -import openai -import gradio as gr -from operator import itemgetter -import fitz -import json - -headers = ["Misc"] -texts = [""] -sections = {} -btn_list = [] -comp_list = [] -memory = [] -def fonts(doc, granularity=False): - """Extracts fonts and their usage in PDF documents. - :param doc: PDF document to iterate through - :type doc: - :param granularity: also use 'font', 'flags' and 'color' to discriminate text - :type granularity: bool - :rtype: [(font_size, count), (font_size, count}], dict - :return: most used fonts sorted by count, font style information - """ - styles = {} - font_counts = {} - - for page in doc: - blocks = page.get_text("dict")["blocks"] - for b in blocks: # iterate through the text blocks - if b['type'] == 0: # block contains text - for l in b["lines"]: # iterate through the text lines - for s in l["spans"]: # iterate through the text spans - if granularity: - identifier = "{0}_{1}_{2}_{3}".format(s['size'], s['flags'], s['font'], s['color']) - styles[identifier] = {'size': s['size'], 'flags': s['flags'], 'font': s['font'], - 'color': s['color']} - else: - identifier = "{0}".format(s['size']) - styles[identifier] = {'size': s['size'], 'font': s['font']} - - font_counts[identifier] = font_counts.get(identifier, 0) + 1 # count the fonts usage - - font_counts = sorted(font_counts.items(), key=itemgetter(1), reverse=True) - - if len(font_counts) < 1: - raise ValueError("Zero discriminating fonts found!") - - return font_counts, styles - -def font_tags(font_counts, styles): - """Returns dictionary with font sizes as keys and tags as value. - :param font_counts: (font_size, count) for all fonts occuring in document - :type font_counts: list - :param styles: all styles found in the document - :type styles: dict - :rtype: dict - :return: all element tags based on font-sizes - """ - p_style = styles[font_counts[0][0]] # get style for most used font by count (paragraph) - p_size = p_style['size'] # get the paragraph's size - - # sorting the font sizes high to low, so that we can append the right integer to each tag - font_sizes = [] - for (font_size, count) in font_counts: - font_sizes.append(float(font_size)) - font_sizes.sort(reverse=True) - - # aggregating the tags for each font size - idx = 0 - size_tag = {} - for size in font_sizes: - idx += 1 - if size == p_size: - idx = 0 - size_tag[size] = '

        ' - if size > p_size: - size_tag[size] = ''.format(idx) - elif size < p_size: - size_tag[size] = ''.format(idx) - - return size_tag - - -def headers_para(doc, size_tag): - """Scrapes headers & paragraphs from PDF and return texts with element tags. - :param doc: PDF document to iterate through - :type doc: - :param size_tag: textual element tags for each size - :type size_tag: dict - :rtype: list - :return: texts with pre-prended element tags - """ - header_para = [] # list with headers and paragraphs - first = True # boolean operator for first header - previous_s = {} # previous span - - for page in doc: - blocks = page.get_text("dict")["blocks"] - for b in blocks: # iterate through the text blocks - if b['type'] == 0: # this block contains text - - # REMEMBER: multiple fonts and sizes are possible IN one block - - block_string = "" # text found in block - for l in b["lines"]: # iterate through the text lines - for s in l["spans"]: # iterate through the text spans - if s['text'].strip(): # removing whitespaces: - if first: - previous_s = s - first = False - block_string = size_tag[s['size']] + s['text'] - else: - if s['size'] == previous_s['size']: - - if block_string and all((c == "|") for c in block_string): - # block_string only contains pipes - block_string = size_tag[s['size']] + s['text'] - if block_string == "": - # new block has started, so append size tag - block_string = size_tag[s['size']] + s['text'] - else: # in the same block, so concatenate strings - block_string += " " + s['text'] - - else: - header_para.append(block_string) - block_string = size_tag[s['size']] + s['text'] - - previous_s = s - - # new block started, indicating with a pipe - block_string += "|" - - header_para.append(block_string) - - return header_para -def main(grFile): - doc = fitz.open(grFile.name) - font_counts, styles = fonts(doc, granularity=False) - size_tag = font_tags(font_counts, styles) - - elements = headers_para(doc, size_tag) - for element in elements: - if element[0:2] == '': - texts[-1] = texts[-1] + element[3:].replace('|','') - for header in headers.copy(): - if header == 'Abstract' or header == 'Introduction': - break - else: - headers.pop(0) - texts.pop(0) - sections.update(zip(headers, texts)) - -def upload_file(file_input): - # print("yes, file_selected is invoked") - main(file_input) - return gr.Dropdown.update( - choices=headers, value=headers[0], visible=True, interactive=True - ) - -def modify_text(drop_input): - if(sections[drop_input] != ""): - return gr.Textbox.update(value=sections[drop_input], interactive=False) - return gr.Textbox.update(value="No text was found in this section. Please try another one.") - -def stop_uploads(): - return gr.update(interactive=False) - -def gpt_magic(textbox, gptbox): - newline = "\n---\n" - if(textbox == "No text was found in this section. Please try another one." or textbox == ""): - return gr.update(value="No text was found in either textbox. Please try another section, or make sure you have a question typed for ChatGPT.") - msgList = [] - msgList.append({"role": "system", "content": "You are a linguist working with a pharmaceutical company. Using your knowledge of pragmatics and topic-relevance, please write brief analyses of the content provided, focusing on whether or not its topics are relevant or explicit, and then write a summary that follows suggestions made in your analysis. 
Also consider making comments on how word order affects how explicit a topic is."}) - memory.append(textbox) - prompt = {"role": "user", "content": ''.join(memory)} - msgList.append(prompt) - completion = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=msgList - ) - print(completion.usage['total_tokens']) - memory.append(completion.choices[0].message.content) - return gr.update(value=gptbox + newline + completion.choices[0].message.content) - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - file_input = gr.File(label="Upload PDF", file_types=[".pdf"]) - drop = gr.Dropdown(label="Section for Analysis", choices=[], visible=False) - file_input.change(upload_file, inputs=file_input, outputs=drop) - file_input.change(stop_uploads,None,file_input) - b = gr.Button("Analyze Selected Section for Topic") - userText = gr.Textbox(label="Ask a question", placeholder="Ask ChatGPT a question about its response.") - s = gr.Button("Submit") - with gr.Column(): - tbox = gr.Textbox(placeholder="Upload a PDF to begin. Select a section on the left to show its contents here. WARNING: Not all PDFs are fully compatible. ", label="Source text", max_lines=15, interactive=False) - drop.change(modify_text, drop, tbox) - gptbox = gr.Textbox( - placeholder="Press the button to source an analysis and summary from ChatGPT that makes the passage more topic-relevant.", - label="ChatGPT Analysis/Summary", max_lines=15, interactive=False) - gptbox.style(show_copy_button=True) - b.click(gpt_magic, [tbox, gptbox], gptbox) - s.click(gpt_magic, [userText, gptbox], gptbox) - -demo.launch() \ No newline at end of file diff --git a/spaces/micole66/ugly-or-sexy/README.md b/spaces/micole66/ugly-or-sexy/README.md deleted file mode 100644 index cdb7a2d7ec704410dd5ad0c6df67aa3a181322af..0000000000000000000000000000000000000000 --- a/spaces/micole66/ugly-or-sexy/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ugly Or Sexy -emoji: 🏢 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/microsoft/HuggingGPT/models_server.py b/spaces/microsoft/HuggingGPT/models_server.py deleted file mode 100644 index a4a3806a776eca0f2c95aa2173d58a7ab6e93956..0000000000000000000000000000000000000000 --- a/spaces/microsoft/HuggingGPT/models_server.py +++ /dev/null @@ -1,618 +0,0 @@ -import argparse -import logging -import random -import uuid -import numpy as np -from transformers import pipeline -from diffusers import DiffusionPipeline, StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler -from diffusers.utils import load_image -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler -from diffusers.utils import export_to_video -from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5ForSpeechToSpeech -from transformers import BlipProcessor, BlipForConditionalGeneration -from transformers import TrOCRProcessor, VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer -from datasets import load_dataset -from PIL import Image -import io -from torchvision import transforms -import torch -import torchaudio -from speechbrain.pretrained import WaveformEnhancement -import joblib -from huggingface_hub import hf_hub_url, cached_download -from transformers import AutoImageProcessor, TimesformerForVideoClassification -from transformers import MaskFormerFeatureExtractor, 
MaskFormerForInstanceSegmentation, AutoFeatureExtractor -from controlnet_aux import OpenposeDetector, MLSDdetector, HEDdetector, CannyDetector, MidasDetector -from controlnet_aux.open_pose.body import Body -from controlnet_aux.mlsd.models.mbv2_mlsd_large import MobileV2_MLSD_Large -from controlnet_aux.hed import Network -from transformers import DPTForDepthEstimation, DPTFeatureExtractor -import warnings -import time -from espnet2.bin.tts_inference import Text2Speech -import soundfile as sf -from asteroid.models import BaseModel -import traceback -import os -import yaml - -warnings.filterwarnings("ignore") - -parser = argparse.ArgumentParser() -parser.add_argument("--config", type=str, default="config.yaml") -args = parser.parse_args() - -if __name__ != "__main__": - args.config = "config.gradio.yaml" - -logger = logging.getLogger(__name__) -logger.setLevel(logging.INFO) -handler = logging.StreamHandler() -handler.setLevel(logging.INFO) -formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') -handler.setFormatter(formatter) -logger.addHandler(handler) - -config = yaml.load(open(args.config, "r"), Loader=yaml.FullLoader) - -local_deployment = config["local_deployment"] -if config["inference_mode"] == "huggingface": - local_deployment = "none" - -PROXY = None -if config["proxy"]: - PROXY = { - "https": config["proxy"], - } - -start = time.time() - -# local_models = "models/" -local_models = "" - - -def load_pipes(local_deployment): - other_pipes = {} - standard_pipes = {} - controlnet_sd_pipes = {} - if local_deployment in ["full"]: - other_pipes = { - - # "Salesforce/blip-image-captioning-large": { - # "model": BlipForConditionalGeneration.from_pretrained(f"Salesforce/blip-image-captioning-large"), - # "processor": BlipProcessor.from_pretrained(f"Salesforce/blip-image-captioning-large"), - # "device": "cuda:0" - # }, - "damo-vilab/text-to-video-ms-1.7b": { - "model": DiffusionPipeline.from_pretrained(f"{local_models}damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"), - "device": "cuda:0" - }, - # "facebook/maskformer-swin-large-ade": { - # "model": MaskFormerForInstanceSegmentation.from_pretrained(f"facebook/maskformer-swin-large-ade"), - # "feature_extractor" : AutoFeatureExtractor.from_pretrained("facebook/maskformer-swin-large-ade"), - # "device": "cuda:0" - # }, - # "microsoft/trocr-base-printed": { - # "processor": TrOCRProcessor.from_pretrained(f"microsoft/trocr-base-printed"), - # "model": VisionEncoderDecoderModel.from_pretrained(f"microsoft/trocr-base-printed"), - # "device": "cuda:0" - # }, - # "microsoft/trocr-base-handwritten": { - # "processor": TrOCRProcessor.from_pretrained(f"microsoft/trocr-base-handwritten"), - # "model": VisionEncoderDecoderModel.from_pretrained(f"microsoft/trocr-base-handwritten"), - # "device": "cuda:0" - # }, - "JorisCos/DCCRNet_Libri1Mix_enhsingle_16k": { - "model": BaseModel.from_pretrained("JorisCos/DCCRNet_Libri1Mix_enhsingle_16k"), - "device": "cuda:0" - }, - - # "CompVis/stable-diffusion-v1-4": { - # "model": DiffusionPipeline.from_pretrained(f"CompVis/stable-diffusion-v1-4"), - # "device": "cuda:0" - # }, - # "stabilityai/stable-diffusion-2-1": { - # "model": DiffusionPipeline.from_pretrained(f"stabilityai/stable-diffusion-2-1"), - # "device": "cuda:0" - # }, - - # "microsoft/speecht5_tts":{ - # "processor": SpeechT5Processor.from_pretrained(f"microsoft/speecht5_tts"), - # "model": SpeechT5ForTextToSpeech.from_pretrained(f"microsoft/speecht5_tts"), - # "vocoder": 
SpeechT5HifiGan.from_pretrained(f"microsoft/speecht5_hifigan"), - # "embeddings_dataset": load_dataset(f"Matthijs/cmu-arctic-xvectors", split="validation"), - # "device": "cuda:0" - # }, - # "speechbrain/mtl-mimic-voicebank": { - # "model": WaveformEnhancement.from_hparams(source="speechbrain/mtl-mimic-voicebank", savedir="models/mtl-mimic-voicebank"), - # "device": "cuda:0" - # }, - "microsoft/speecht5_vc":{ - "processor": SpeechT5Processor.from_pretrained(f"{local_models}microsoft/speecht5_vc"), - "model": SpeechT5ForSpeechToSpeech.from_pretrained(f"{local_models}microsoft/speecht5_vc"), - "vocoder": SpeechT5HifiGan.from_pretrained(f"{local_models}microsoft/speecht5_hifigan"), - "embeddings_dataset": load_dataset(f"{local_models}Matthijs/cmu-arctic-xvectors", split="validation"), - "device": "cuda:0" - }, - # "julien-c/wine-quality": { - # "model": joblib.load(cached_download(hf_hub_url("julien-c/wine-quality", "sklearn_model.joblib"))) - # }, - # "facebook/timesformer-base-finetuned-k400": { - # "processor": AutoImageProcessor.from_pretrained(f"facebook/timesformer-base-finetuned-k400"), - # "model": TimesformerForVideoClassification.from_pretrained(f"facebook/timesformer-base-finetuned-k400"), - # "device": "cuda:0" - # }, - "facebook/maskformer-swin-base-coco": { - "feature_extractor": MaskFormerFeatureExtractor.from_pretrained(f"{local_models}facebook/maskformer-swin-base-coco"), - "model": MaskFormerForInstanceSegmentation.from_pretrained(f"{local_models}facebook/maskformer-swin-base-coco"), - "device": "cuda:0" - }, - "Intel/dpt-hybrid-midas": { - "model": DPTForDepthEstimation.from_pretrained(f"{local_models}Intel/dpt-hybrid-midas", low_cpu_mem_usage=True), - "feature_extractor": DPTFeatureExtractor.from_pretrained(f"{local_models}Intel/dpt-hybrid-midas"), - "device": "cuda:0" - } - } - - if local_deployment in ["full", "standard"]: - standard_pipes = { - # "nlpconnect/vit-gpt2-image-captioning":{ - # "model": VisionEncoderDecoderModel.from_pretrained(f"{local_models}nlpconnect/vit-gpt2-image-captioning"), - # "feature_extractor": ViTImageProcessor.from_pretrained(f"{local_models}nlpconnect/vit-gpt2-image-captioning"), - # "tokenizer": AutoTokenizer.from_pretrained(f"{local_models}nlpconnect/vit-gpt2-image-captioning"), - # "device": "cuda:0" - # }, - "espnet/kan-bayashi_ljspeech_vits": { - "model": Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_vits"), - "device": "cuda:0" - }, - # "lambdalabs/sd-image-variations-diffusers": { - # "model": DiffusionPipeline.from_pretrained(f"{local_models}lambdalabs/sd-image-variations-diffusers"), #torch_dtype=torch.float16 - # "device": "cuda:0" - # }, - "runwayml/stable-diffusion-v1-5": { - "model": DiffusionPipeline.from_pretrained(f"{local_models}runwayml/stable-diffusion-v1-5"), - "device": "cuda:0" - }, - # "superb/wav2vec2-base-superb-ks": { - # "model": pipeline(task="audio-classification", model=f"superb/wav2vec2-base-superb-ks"), - # "device": "cuda:0" - # }, - "openai/whisper-base": { - "model": pipeline(task="automatic-speech-recognition", model=f"{local_models}openai/whisper-base"), - "device": "cuda:0" - }, - # "microsoft/speecht5_asr": { - # "model": pipeline(task="automatic-speech-recognition", model=f"{local_models}microsoft/speecht5_asr"), - # "device": "cuda:0" - # }, - "Intel/dpt-large": { - "model": pipeline(task="depth-estimation", model=f"{local_models}Intel/dpt-large"), - "device": "cuda:0" - }, - # "microsoft/beit-base-patch16-224-pt22k-ft22k": { - # "model": pipeline(task="image-classification", 
model=f"microsoft/beit-base-patch16-224-pt22k-ft22k"), - # "device": "cuda:0" - # }, - "facebook/detr-resnet-50-panoptic": { - "model": pipeline(task="image-segmentation", model=f"{local_models}facebook/detr-resnet-50-panoptic"), - "device": "cuda:0" - }, - "facebook/detr-resnet-101": { - "model": pipeline(task="object-detection", model=f"{local_models}facebook/detr-resnet-101"), - "device": "cuda:0" - }, - # "openai/clip-vit-large-patch14": { - # "model": pipeline(task="zero-shot-image-classification", model=f"openai/clip-vit-large-patch14"), - # "device": "cuda:0" - # }, - # "google/owlvit-base-patch32": { - # "model": pipeline(task="zero-shot-object-detection", model=f"{local_models}google/owlvit-base-patch32"), - # "device": "cuda:0" - # }, - # "microsoft/DialoGPT-medium": { - # "model": pipeline(task="conversational", model=f"microsoft/DialoGPT-medium"), - # "device": "cuda:0" - # }, - # "bert-base-uncased": { - # "model": pipeline(task="fill-mask", model=f"bert-base-uncased"), - # "device": "cuda:0" - # }, - # "deepset/roberta-base-squad2": { - # "model": pipeline(task = "question-answering", model=f"deepset/roberta-base-squad2"), - # "device": "cuda:0" - # }, - # "facebook/bart-large-cnn": { - # "model": pipeline(task="summarization", model=f"facebook/bart-large-cnn"), - # "device": "cuda:0" - # }, - # "google/tapas-base-finetuned-wtq": { - # "model": pipeline(task="table-question-answering", model=f"google/tapas-base-finetuned-wtq"), - # "device": "cuda:0" - # }, - # "distilbert-base-uncased-finetuned-sst-2-english": { - # "model": pipeline(task="text-classification", model=f"distilbert-base-uncased-finetuned-sst-2-english"), - # "device": "cuda:0" - # }, - # "gpt2": { - # "model": pipeline(task="text-generation", model="gpt2"), - # "device": "cuda:0" - # }, - # "mrm8488/t5-base-finetuned-question-generation-ap": { - # "model": pipeline(task="text2text-generation", model=f"mrm8488/t5-base-finetuned-question-generation-ap"), - # "device": "cuda:0" - # }, - # "Jean-Baptiste/camembert-ner": { - # "model": pipeline(task="token-classification", model=f"Jean-Baptiste/camembert-ner", aggregation_strategy="simple"), - # "device": "cuda:0" - # }, - # "t5-base": { - # "model": pipeline(task="translation", model=f"t5-base"), - # "device": "cuda:0" - # }, - # "impira/layoutlm-document-qa": { - # "model": pipeline(task="document-question-answering", model=f"{local_models}impira/layoutlm-document-qa"), - # "device": "cuda:0" - # }, - "ydshieh/vit-gpt2-coco-en": { - "model": pipeline(task="image-to-text", model=f"{local_models}ydshieh/vit-gpt2-coco-en"), - "device": "cuda:0" - }, - "dandelin/vilt-b32-finetuned-vqa": { - "model": pipeline(task="visual-question-answering", model=f"{local_models}dandelin/vilt-b32-finetuned-vqa"), - "device": "cuda:0" - } - } - - if local_deployment in ["full", "standard", "minimal"]: - - controlnet = ControlNetModel.from_pretrained(f"{local_models}lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) - controlnetpipe = StableDiffusionControlNetPipeline.from_pretrained( - f"{local_models}runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 - ) - - - hed_network = HEDdetector.from_pretrained('lllyasviel/ControlNet') - - controlnet_sd_pipes = { - "openpose-control": { - "model": OpenposeDetector.from_pretrained('lllyasviel/ControlNet') - }, - "mlsd-control": { - "model": MLSDdetector.from_pretrained('lllyasviel/ControlNet') - }, - "hed-control": { - "model": hed_network - }, - "scribble-control": { - "model": hed_network - }, - 
"midas-control": { - "model": MidasDetector.from_pretrained('lllyasviel/ControlNet') - }, - "canny-control": { - "model": CannyDetector() - }, - "lllyasviel/sd-controlnet-canny":{ - "control": controlnet, - "model": controlnetpipe, - "device": "cuda:0" - }, - "lllyasviel/sd-controlnet-depth":{ - "control": ControlNetModel.from_pretrained(f"{local_models}lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16), - "model": controlnetpipe, - "device": "cuda:0" - }, - "lllyasviel/sd-controlnet-hed":{ - "control": ControlNetModel.from_pretrained(f"{local_models}lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16), - "model": controlnetpipe, - "device": "cuda:0" - }, - "lllyasviel/sd-controlnet-mlsd":{ - "control": ControlNetModel.from_pretrained(f"{local_models}lllyasviel/sd-controlnet-mlsd", torch_dtype=torch.float16), - "model": controlnetpipe, - "device": "cuda:0" - }, - "lllyasviel/sd-controlnet-openpose":{ - "control": ControlNetModel.from_pretrained(f"{local_models}lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16), - "model": controlnetpipe, - "device": "cuda:0" - }, - "lllyasviel/sd-controlnet-scribble":{ - "control": ControlNetModel.from_pretrained(f"{local_models}lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16), - "model": controlnetpipe, - "device": "cuda:0" - }, - "lllyasviel/sd-controlnet-seg":{ - "control": ControlNetModel.from_pretrained(f"{local_models}lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16), - "model": controlnetpipe, - "device": "cuda:0" - } - } - pipes = {**standard_pipes, **other_pipes, **controlnet_sd_pipes} - return pipes - -pipes = load_pipes(local_deployment) - -end = time.time() -during = end - start - -print(f"[ ready ] {during}s") - -def running(): - return {"running": True} - -def status(model_id): - disabled_models = ["microsoft/trocr-base-printed", "microsoft/trocr-base-handwritten"] - if model_id in pipes.keys() and model_id not in disabled_models: - print(f"[ check {model_id} ] success") - return {"loaded": True} - else: - print(f"[ check {model_id} ] failed") - return {"loaded": False} - -def models(model_id, data): - while "using" in pipes[model_id] and pipes[model_id]["using"]: - print(f"[ inference {model_id} ] waiting") - time.sleep(0.1) - pipes[model_id]["using"] = True - print(f"[ inference {model_id} ] start") - - start = time.time() - - pipe = pipes[model_id]["model"] - - if "device" in pipes[model_id]: - try: - pipe.to(pipes[model_id]["device"]) - except: - pipe.device = torch.device(pipes[model_id]["device"]) - pipe.model.to(pipes[model_id]["device"]) - - result = None - try: - # text to video - if model_id == "damo-vilab/text-to-video-ms-1.7b": - pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) - # pipe.enable_model_cpu_offload() - prompt = data["text"] - video_frames = pipe(prompt, num_inference_steps=50, num_frames=40).frames - file_name = str(uuid.uuid4())[:4] - video_path = export_to_video(video_frames, f"public/videos/{file_name}.mp4") - - new_file_name = str(uuid.uuid4())[:4] - os.system(f"ffmpeg -i {video_path} -vcodec libx264 public/videos/{new_file_name}.mp4") - - if os.path.exists(f"public/videos/{new_file_name}.mp4"): - result = {"path": f"/videos/{new_file_name}.mp4"} - else: - result = {"path": f"/videos/{file_name}.mp4"} - - # controlnet - if model_id.startswith("lllyasviel/sd-controlnet-"): - pipe.controlnet.to('cpu') - pipe.controlnet = pipes[model_id]["control"].to(pipes[model_id]["device"]) - pipe.scheduler = 
UniPCMultistepScheduler.from_config(pipe.scheduler.config) - control_image = load_image(data["img_url"]) - # generator = torch.manual_seed(66) - out_image: Image = pipe(data["text"], num_inference_steps=20, image=control_image).images[0] - file_name = str(uuid.uuid4())[:4] - out_image.save(f"public/images/{file_name}.png") - result = {"path": f"/images/{file_name}.png"} - - if model_id.endswith("-control"): - image = load_image(data["img_url"]) - if "scribble" in model_id: - control = pipe(image, scribble = True) - elif "canny" in model_id: - control = pipe(image, low_threshold=100, high_threshold=200) - else: - control = pipe(image) - file_name = str(uuid.uuid4())[:4] - control.save(f"public/images/{file_name}.png") - result = {"path": f"/images/{file_name}.png"} - - # image to image - if model_id == "lambdalabs/sd-image-variations-diffusers": - im = load_image(data["img_url"]) - file_name = str(uuid.uuid4())[:4] - with open(f"public/images/{file_name}.png", "wb") as f: - f.write(data) - tform = transforms.Compose([ - transforms.ToTensor(), - transforms.Resize( - (224, 224), - interpolation=transforms.InterpolationMode.BICUBIC, - antialias=False, - ), - transforms.Normalize( - [0.48145466, 0.4578275, 0.40821073], - [0.26862954, 0.26130258, 0.27577711]), - ]) - inp = tform(im).to(pipes[model_id]["device"]).unsqueeze(0) - out = pipe(inp, guidance_scale=3) - out["images"][0].save(f"public/images/{file_name}.jpg") - result = {"path": f"/images/{file_name}.jpg"} - - # image to text - if model_id == "Salesforce/blip-image-captioning-large": - raw_image = load_image(data["img_url"]).convert('RGB') - text = data["text"] - inputs = pipes[model_id]["processor"](raw_image, return_tensors="pt").to(pipes[model_id]["device"]) - out = pipe.generate(**inputs) - caption = pipes[model_id]["processor"].decode(out[0], skip_special_tokens=True) - result = {"generated text": caption} - if model_id == "ydshieh/vit-gpt2-coco-en": - img_url = data["img_url"] - generated_text = pipe(img_url)[0]['generated_text'] - result = {"generated text": generated_text} - if model_id == "nlpconnect/vit-gpt2-image-captioning": - image = load_image(data["img_url"]).convert("RGB") - pixel_values = pipes[model_id]["feature_extractor"](images=image, return_tensors="pt").pixel_values - pixel_values = pixel_values.to(pipes[model_id]["device"]) - generated_ids = pipe.generate(pixel_values, **{"max_length": 200, "num_beams": 1}) - generated_text = pipes[model_id]["tokenizer"].batch_decode(generated_ids, skip_special_tokens=True)[0] - result = {"generated text": generated_text} - # image to text: OCR - if model_id == "microsoft/trocr-base-printed" or model_id == "microsoft/trocr-base-handwritten": - image = load_image(data["img_url"]).convert("RGB") - pixel_values = pipes[model_id]["processor"](image, return_tensors="pt").pixel_values - pixel_values = pixel_values.to(pipes[model_id]["device"]) - generated_ids = pipe.generate(pixel_values) - generated_text = pipes[model_id]["processor"].batch_decode(generated_ids, skip_special_tokens=True)[0] - result = {"generated text": generated_text} - - # text to image - if model_id == "runwayml/stable-diffusion-v1-5": - file_name = str(uuid.uuid4())[:4] - text = data["text"] - out = pipe(prompt=text) - out["images"][0].save(f"public/images/{file_name}.jpg") - result = {"path": f"/images/{file_name}.jpg"} - - # object detection - if model_id == "google/owlvit-base-patch32" or model_id == "facebook/detr-resnet-101": - img_url = data["img_url"] - open_types = ["cat", "couch", "person", "car", "dog", 
"horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird"] - result = pipe(img_url, candidate_labels=open_types) - - # VQA - if model_id == "dandelin/vilt-b32-finetuned-vqa": - question = data["text"] - img_url = data["img_url"] - result = pipe(question=question, image=img_url) - - #DQA - if model_id == "impira/layoutlm-document-qa": - question = data["text"] - img_url = data["img_url"] - result = pipe(img_url, question) - - # depth-estimation - if model_id == "Intel/dpt-large": - output = pipe(data["img_url"]) - image = output['depth'] - name = str(uuid.uuid4())[:4] - image.save(f"public/images/{name}.jpg") - result = {"path": f"/images/{name}.jpg"} - - if model_id == "Intel/dpt-hybrid-midas" and model_id == "Intel/dpt-large": - image = load_image(data["img_url"]) - inputs = pipes[model_id]["feature_extractor"](images=image, return_tensors="pt") - with torch.no_grad(): - outputs = pipe(**inputs) - predicted_depth = outputs.predicted_depth - prediction = torch.nn.functional.interpolate( - predicted_depth.unsqueeze(1), - size=image.size[::-1], - mode="bicubic", - align_corners=False, - ) - output = prediction.squeeze().cpu().numpy() - formatted = (output * 255 / np.max(output)).astype("uint8") - image = Image.fromarray(formatted) - name = str(uuid.uuid4())[:4] - image.save(f"public/images/{name}.jpg") - result = {"path": f"/images/{name}.jpg"} - - # TTS - if model_id == "espnet/kan-bayashi_ljspeech_vits": - text = data["text"] - wav = pipe(text)["wav"] - name = str(uuid.uuid4())[:4] - sf.write(f"public/audios/{name}.wav", wav.cpu().numpy(), pipe.fs, "PCM_16") - result = {"path": f"/audios/{name}.wav"} - - if model_id == "microsoft/speecht5_tts": - text = data["text"] - inputs = pipes[model_id]["processor"](text=text, return_tensors="pt") - embeddings_dataset = pipes[model_id]["embeddings_dataset"] - speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0).to(pipes[model_id]["device"]) - pipes[model_id]["vocoder"].to(pipes[model_id]["device"]) - speech = pipe.generate_speech(inputs["input_ids"].to(pipes[model_id]["device"]), speaker_embeddings, vocoder=pipes[model_id]["vocoder"]) - name = str(uuid.uuid4())[:4] - sf.write(f"public/audios/{name}.wav", speech.cpu().numpy(), samplerate=16000) - result = {"path": f"/audios/{name}.wav"} - - # ASR - if model_id == "openai/whisper-base" or model_id == "microsoft/speecht5_asr": - audio_url = data["audio_url"] - result = { "text": pipe(audio_url)["text"]} - - # audio to audio - if model_id == "JorisCos/DCCRNet_Libri1Mix_enhsingle_16k": - audio_url = data["audio_url"] - wav, sr = torchaudio.load(audio_url) - with torch.no_grad(): - result_wav = pipe(wav.to(pipes[model_id]["device"])) - name = str(uuid.uuid4())[:4] - sf.write(f"public/audios/{name}.wav", result_wav.cpu().squeeze().numpy(), sr) - result = {"path": f"/audios/{name}.wav"} - - 
if model_id == "microsoft/speecht5_vc": - audio_url = data["audio_url"] - wav, sr = torchaudio.load(audio_url) - inputs = pipes[model_id]["processor"](audio=wav, sampling_rate=sr, return_tensors="pt") - embeddings_dataset = pipes[model_id]["embeddings_dataset"] - speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) - pipes[model_id]["vocoder"].to(pipes[model_id]["device"]) - speech = pipe.generate_speech(inputs["input_ids"].to(pipes[model_id]["device"]), speaker_embeddings, vocoder=pipes[model_id]["vocoder"]) - name = str(uuid.uuid4())[:4] - sf.write(f"public/audios/{name}.wav", speech.cpu().numpy(), samplerate=16000) - result = {"path": f"/audios/{name}.wav"} - - # segmentation - if model_id == "facebook/detr-resnet-50-panoptic": - result = [] - segments = pipe(data["img_url"]) - image = load_image(data["img_url"]) - - colors = [] - for i in range(len(segments)): - colors.append((random.randint(100, 255), random.randint(100, 255), random.randint(100, 255), 50)) - - for segment in segments: - mask = segment["mask"] - mask = mask.convert('L') - layer = Image.new('RGBA', mask.size, colors[i]) - image.paste(layer, (0, 0), mask) - name = str(uuid.uuid4())[:4] - image.save(f"public/images/{name}.jpg") - result = {"path": f"/images/{name}.jpg"} - - if model_id == "facebook/maskformer-swin-base-coco" or model_id == "facebook/maskformer-swin-large-ade": - image = load_image(data["img_url"]) - inputs = pipes[model_id]["feature_extractor"](images=image, return_tensors="pt").to(pipes[model_id]["device"]) - outputs = pipe(**inputs) - result = pipes[model_id]["feature_extractor"].post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] - predicted_panoptic_map = result["segmentation"].cpu().numpy() - predicted_panoptic_map = Image.fromarray(predicted_panoptic_map.astype(np.uint8)) - name = str(uuid.uuid4())[:4] - predicted_panoptic_map.save(f"public/images/{name}.jpg") - result = {"path": f"/images/{name}.jpg"} - - except Exception as e: - print(e) - traceback.print_exc() - result = {"error": {"message": "Error when running the model inference."}} - - if "device" in pipes[model_id]: - try: - pipe.to("cpu") - torch.cuda.empty_cache() - except: - pipe.device = torch.device("cpu") - pipe.model.to("cpu") - torch.cuda.empty_cache() - - pipes[model_id]["using"] = False - - if result is None: - result = {"error": {"message": "model not found"}} - - end = time.time() - during = end - start - print(f"[ complete {model_id} ] {during}s") - print(f"[ result {model_id} ] {result}") - - return result diff --git a/spaces/mikeee/ttw/gradiobee/smatrix.py b/spaces/mikeee/ttw/gradiobee/smatrix.py deleted file mode 100644 index fd4bbf2966ea575cc48ba337035634c1f3e343f7..0000000000000000000000000000000000000000 --- a/spaces/mikeee/ttw/gradiobee/smatrix.py +++ /dev/null @@ -1,100 +0,0 @@ -"""Generate a similarity matrix (doc-term score matrix) based on textacy.representation.Vectorizer. - -refer also to fast-scores fast_scores.py and gen_model.py (sklearn.feature_extraction.text.TfidfVectorizer). -originally docterm_scores.py. 
-""" -from typing import Dict, Iterable, List, Optional, Union -import numpy as np -from itertools import chain -from psutil import virtual_memory -from more_itertools import ilen - -from textacy.representations import Vectorizer -# from textacy.representations.vectorizers import Vectorizer -from logzero import logger - -# from smatrix.gen_model import gen_model -from gradiobee.gen_model import gen_model - - -# fmt: off -def smatrix( - doc1: Iterable[Iterable[str]], # List[List[str]], - doc2: Iterable[Iterable[str]], - model: Vectorizer = None, - tf_type: str = 'linear', - idf_type: Optional[str] = "smooth", - # dl_type: Optional[str] = "sqrt", # "lucene-style tfidf" - dl_type: Optional[str] = None, # - norm: Optional[str] = "l2", # + "l2" - min_df: Union[int, float] = 1, - max_df: Union[int, float] = 1.0, - max_n_terms: Optional[int] = None, - vocabulary_terms: Optional[Union[Dict[str, int], Iterable[str]]] = None -) -> np.ndarray: - # fmt: on - """Generate a doc-term score matrix based on textacy.representation.Vectorizer. - - Args - doc1: tokenized doc of n1 - doc2: tokenized doc of n2 - model: if None, generate one ad hoc from doc1 and doc2 ("lucene-style tfidf"). - rest: refer to textacy.representation.Vectorizer - Attributes - vectorizer - - Returns - n1 x n2 similarity matrix of float numbers - """ - # make sure doc1/doc2 is of the right typing - try: - for xelm in iter(doc1): - for elm in iter(xelm): - assert isinstance(elm, str) - except AssertionError: - raise AssertionError(" doc1 is not of the typing Iterable[Iterable[str]] ") - except Exception as e: - logger.error(e) - raise - try: - for xelm in iter(doc2): - for elm in iter(xelm): - assert isinstance(elm, str) - except AssertionError: - raise AssertionError(" doc2 is not of the typing Iterable[Iterable[str]] ") - except Exception as e: - logger.error(e) - raise - - if model is None: - model = gen_model( - [*chain(doc1, doc2)], - tf_type=tf_type, - idf_type=idf_type, - dl_type=dl_type, - norm=norm, - min_df=min_df, - max_df=max_df, - max_n_terms=max_n_terms, - vocabulary_terms=vocabulary_terms - ) - # docterm_scores.model = model - smatrix.model = model - - # a1 = dt.toarray(), a2 = doc_term_matrix.toarray() - # np.all(np.isclose(a1, a2)) - - dt1 = model.transform(doc1) - dt2 = model.transform(doc2) - - # virtual_memory().available / 8: 64bits float - require_ram = ilen(iter(doc1)) * ilen(iter(doc2)) * 8 - if require_ram > virtual_memory().available: - logger.warning("virtual_memory().available: %s", virtual_memory().available) - logger.warning("memory required: %s", require_ram) - - if require_ram > virtual_memory().available * 10: - logger.warning("You're likely to encounter memory problem, such as slowing down response and/or OOM.") - - # return dt1.doc(dt2.T) - return dt2.toarray().dot(dt1.toarray().T) diff --git a/spaces/mikeee/ultimatumbee/dev-git-push-hf-dev-main.bat b/spaces/mikeee/ultimatumbee/dev-git-push-hf-dev-main.bat deleted file mode 100644 index 7675efc2397b0aac4de0b9eb948b5872713b204e..0000000000000000000000000000000000000000 --- a/spaces/mikeee/ultimatumbee/dev-git-push-hf-dev-main.bat +++ /dev/null @@ -1 +0,0 @@ -git push hf-dev dev:main \ No newline at end of file diff --git a/spaces/milyiyo/reimagine-it/retrieval/clip_model.py b/spaces/milyiyo/reimagine-it/retrieval/clip_model.py deleted file mode 100644 index 83d35620683bd11d3c9e6ac38bf76acbcd364e21..0000000000000000000000000000000000000000 --- a/spaces/milyiyo/reimagine-it/retrieval/clip_model.py +++ /dev/null @@ -1,350 +0,0 @@ -from transformers import 
CLIPModel, CLIPTokenizer -import os -import json -import argparse -from random import shuffle, seed -import string -# non-standard dependencies: -import h5py -from six.moves import cPickle -import numpy as np -import torch -import torchvision.models as models -import skimage.io - -from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize -from PIL import Image -from torch import nn - - -class CLIPScore(nn.Module): - def __init__(self, clipscore_w=2.5, image_size=224, mode='clip_s', use_grammar=False, joint_out=False): - super(CLIPScore, self).__init__() - # from transformers import CLIPModel, CLIPTokenizer - self.clip_model = CLIPModel.from_pretrained( - 'openai/clip-vit-base-patch32') - self.tokenizer = CLIPTokenizer.from_pretrained( - 'openai/clip-vit-base-patch32') - - self.clip_model.eval() - - self.clipscore_w = clipscore_w - - self.image_transform = self._transform(image_size) - - self.mode = mode - assert mode in ['clip_s', 'refclip_s'] - - self.use_grammar = use_grammar - self.joint_out = joint_out - - if self.use_grammar and self.joint_out is False: - self.grammar_score_head = nn.Sequential( - nn.Linear(self.clip_model.text_embed_dim, self.clip_model.projection_dim, bias=False), - nn.ReLU(), - nn.Linear(self.clip_model.projection_dim, 2, bias=False) - ) - - def _transform(self, n_px): - return Compose([ - Resize(n_px, interpolation=Image.BICUBIC), - CenterCrop(n_px), - lambda image: image.convert("RGB"), - ToTensor(), - Normalize((0.48145466, 0.4578275, 0.40821073), - (0.26862954, 0.26130258, 0.27577711)), - ]) - - def load_image(self, image_path): - image = Image.open(image_path) - return image - - # @torch.no_grad() - def image_extract(self, image): - if isinstance(image, str): - image = self.load_image(image) - if not isinstance(image, torch.Tensor): - image = self.image_transform(image) - - img_tensor = image.view(-1, 3, 224, 224) - device = next(self.clip_model.parameters()).device - img_tensor = img_tensor.to(device) - - clip_model = self.clip_model - - img_feat = clip_model.vision_model(img_tensor).pooler_output - img_feat = clip_model.visual_projection(img_feat) - img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True) - - return img_feat - - # @torch.no_grad() - def text_extract(self, text, prompt="A photo depicts", proj_norm=True): - if isinstance(text, str): - text_batch = [" ".join([prompt, text])] - elif isinstance(text, list): - text_batch = [" ".join([prompt, txt]) for txt in text] - - if isinstance(text, tuple) and isinstance(text[0], torch.Tensor): - input_ids, attention_mask = text - else: - input_text = text_batch - - tokenized = self.tokenizer( - input_text, return_tensors='pt', padding=True) - - input_ids = tokenized.input_ids - attention_mask = tokenized.attention_mask - - clip_model = self.clip_model - device = next(self.clip_model.parameters()).device - input_ids = input_ids.to(device) - attention_mask = attention_mask.to(device) - - text_feat = clip_model.text_model(input_ids, attention_mask).pooler_output - - if proj_norm: - text_feat = clip_model.text_projection(text_feat) - text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True) - - return text_feat - - # @torch.no_grad() - def calc_clip_s(self, img_feat, text_feat): - return self.clipscore_w * torch.relu((img_feat * text_feat).sum(dim=-1)) - - # @torch.no_grad() - def calc_refclip_s(self, img_feat=None, text_feat=None, ref_text_feat=None, ref_text_mask=None, clip_s=None): - - if clip_s is None: - clip_s = self.calc_clip_s(img_feat, text_feat) - - B, dim = 
img_feat.size() - - ref_text_feat = ref_text_feat.view(B, -1, dim) - - K = ref_text_feat.size(1) - - text_feat = text_feat.view(B, 1, dim).expand(-1, K, -1) - assert ref_text_feat.size() == text_feat.size( - ), (ref_text_feat.size(), text_feat.size()) - - ref_score = self.calc_clip_s(text_feat, ref_text_feat) - if ref_text_mask is not None: - if not isinstance(ref_text_mask, torch.Tensor): - ref_text_mask = torch.tensor( - ref_text_mask, dtype=ref_score.dtype, device=ref_score.device) - ref_score = ref_score.view(B, K) * ref_text_mask.view(B, K) - - ref_score = ref_score.view(B, K).max(dim=1).values - - assert clip_s.size() == (B,) - assert clip_s.size() == ref_score.size() - - # harmonic mean - refclip_s = 2 / (1 / clip_s + 1 / ref_score) - return refclip_s - - # # @torch.no_grad() - # def forward(self, - # images=None, text=None, - # img_feat=None, text_feat=None, - # ref_text=None, ref_text_feat=None, ref_text_mask=None, - # prompt="A photo depicts", - # mode=None): - # if img_feat is None: - # img_feat = self.image_extract(images) - # img_feat = img_feat.view(-1, 512) - - # if text_feat is None: - # text_feat = self.text_extract(text, prompt=prompt) - # text_feat = text_feat.view(-1, 512) - - # if mode is None: - # mode = self.mode - # assert mode in ['clip_s', 'refclip_s'] - - # if mode == 'clip_s': - # clip_s = self.calc_clip_s(img_feat, text_feat) - # return clip_s - # elif mode == 'refclip_s': - # if ref_text_feat is None: - # ref_text_feat = self.text_extract(ref_text, prompt=prompt) - # ref_text_feat = ref_text_feat.view(-1, 512) - - # refclip_s = self.calc_refclip_s( - # img_feat, text_feat, ref_text_feat, ref_text_mask=ref_text_mask) - # return refclip_s - - - def train_step(self, - images=None, text=None, - img_feat=None, text_feat=None, - neg_text=None, neg_text_feat=None, - # ref_text=None, ref_text_feat=None, ref_text_mask=None, - prompt="A photo depicts", - # return_loss=True, - **kwargs): - - if img_feat is None: - img_feat = self.image_extract(images) - img_feat = img_feat.view(-1, 512) - - B = img_feat.size(0) - - if self.joint_out: - pos_text_feat = self.text_extract(text, prompt=prompt, proj_norm=False).view(B, 512) - neg_text_feat = self.text_extract(neg_text, prompt=prompt, proj_norm=False).view(-1, 512) - neg_B = neg_text_feat.size(0) - - # [B+neg_B, 512] - text_feat = torch.cat([pos_text_feat, neg_text_feat], dim=0) - - text_cont_feat = self.clip_model.text_projection(text_feat) - text_cont_feat = text_cont_feat / text_cont_feat.norm(dim=-1, keepdim=True) - - text_cont_feat = text_cont_feat.view(B+neg_B, 512) - - logit_scale = self.clip_model.logit_scale.exp() - - # [B+neg_B * B] - logits_per_text = torch.matmul(text_cont_feat, img_feat.t()) * logit_scale - - # image-to-text label: positive text - caption_loss = -torch.diag(nn.functional.log_softmax(logits_per_text, dim=0)[:B]).mean() - - # calculate text-to-image only on positive text - image_loss = -torch.diag(nn.functional.log_softmax(logits_per_text[:B], dim=1)).mean() - - clip_loss = (caption_loss + image_loss) / 2.0 - - out = { - 'clip_loss': clip_loss, - 'img_feat': img_feat, - 'text_feat': text_cont_feat[:B].detach(), - # 'neg_text_feat': neg_text_feat, - } - - return out - - - else: - if text_feat is None: - text_feat = self.text_extract(text, prompt=prompt, proj_norm=False) - - text_cont_feat = self.clip_model.text_projection(text_feat) - text_cont_feat = text_cont_feat / \ - text_cont_feat.norm(dim=-1, keepdim=True) - - text_cont_feat = text_cont_feat.view(B, 512) - - - # cosine similarity as logits - 
logit_scale = self.clip_model.logit_scale.exp() - logits_per_text = torch.matmul(text_cont_feat, img_feat.t()) * logit_scale - # logits_per_image = logits_per_text.T - - clip_loss = clip_loss_fn(logits_per_text) - - - # negative sampling - pos_text_feat = text_feat.view(B, 512) - neg_text_feat = self.text_extract(neg_text, prompt=prompt, proj_norm=False).view(B, 512) - - grammar_text_feat = torch.cat([pos_text_feat, neg_text_feat], dim=0) - - # 2B, 1 - grammar_text_logit = self.grammar_score_head(grammar_text_feat) - grammar_labels = torch.LongTensor([1] * B + [0] * B).to(grammar_text_logit.device).view(2 * B) - - grammar_loss = torch.nn.functional.cross_entropy(grammar_text_logit, grammar_labels) - - grammar_pred = grammar_text_logit.argmax(dim=1, keepdim=False) - grammar_pos_pred = grammar_pred[:B] - grammar_neg_pred = grammar_pred[B:] - # grammar_acc = (grammar_pred == grammar_labels).float().mean() - - out = { - 'clip_loss': clip_loss, - 'grammar_loss': grammar_loss, - 'img_feat': img_feat, - 'text_feat': text_cont_feat, - 'neg_text_feat': neg_text_feat, - 'grammar_pos_pred': grammar_pos_pred, - 'grammar_neg_pred': grammar_neg_pred, - } - - return out - - def train_step_old(self, - images=None, text=None, - img_feat=None, text_feat=None, - neg_text=None, neg_text_feat=None, - # ref_text=None, ref_text_feat=None, ref_text_mask=None, - prompt="A photo depicts", - # return_loss=True, - **kwargs): - - if img_feat is None: - img_feat = self.image_extract(images) - img_feat = img_feat.view(-1, 512) - - B = img_feat.size(0) - - - - if text_feat is None: - text_feat = self.text_extract(text, prompt=prompt, proj_norm=False) - - text_cont_feat = self.clip_model.text_projection(text_feat) - text_cont_feat = text_cont_feat / text_cont_feat.norm(dim=-1, keepdim=True) - text_cont_feat = text_cont_feat.view(B, 512) - - # cosine similarity as logits - logit_scale = self.clip_model.logit_scale.exp() - logits_per_text = torch.matmul(text_cont_feat, img_feat.t()) * logit_scale - # logits_per_image = logits_per_text.T - - clip_loss = clip_loss_fn(logits_per_text) - - - # negative sampling - pos_text_feat = text_feat.view(B, 512) - neg_text_feat = self.text_extract(neg_text, prompt=prompt, proj_norm=False).view(B, 512) - - grammar_text_feat = torch.cat([pos_text_feat, neg_text_feat], dim=0) - - # 2B, 1 - grammar_text_logit = self.grammar_score_head(grammar_text_feat) - grammar_labels = torch.LongTensor([1] * B + [0] * B).to(grammar_text_logit.device).view(2 * B) - - grammar_loss = torch.nn.functional.cross_entropy(grammar_text_logit, grammar_labels) - - grammar_pred = grammar_text_logit.argmax(dim=1, keepdim=False) - grammar_pos_pred = grammar_pred[:B] - grammar_neg_pred = grammar_pred[B:] - # grammar_acc = (grammar_pred == grammar_labels).float().mean() - - out = { - 'clip_loss': clip_loss, - 'grammar_loss': grammar_loss, - 'img_feat': img_feat, - 'text_feat': text_cont_feat, - 'neg_text_feat': neg_text_feat, - 'grammar_pos_pred': grammar_pos_pred, - 'grammar_neg_pred': grammar_neg_pred, - } - - return out - -# contrastive loss function, adapted from -# https://sachinruk.github.io/blog/pytorch/pytorch%20lightning/loss%20function/gpu/2021/03/07/CLIP.html -def contrastive_loss(logits: torch.Tensor, dim: int) -> torch.Tensor: - neg_ce = torch.diag(nn.functional.log_softmax(logits, dim=dim)) - return -neg_ce.mean() - - -def clip_loss_fn(similarity: torch.Tensor) -> torch.Tensor: - caption_loss = contrastive_loss(similarity, dim=0) - image_loss = contrastive_loss(similarity, dim=1) - return (caption_loss + 
image_loss) / 2.0 diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/$types.d.ts b/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/$types.d.ts deleted file mode 100644 index 849c7db387cf4bf3079b16e7b86d5d5ae60a4774..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/$types.d.ts +++ /dev/null @@ -1,24 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { } -type RouteId = '/'; -type MaybeWithVoid = {} extends T ? T | void : T; -export type RequiredKeys = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T]; -type OutputDataShape = MaybeWithVoid> & Partial> & Record> -type EnsureDefined = T extends null | undefined ? {} : T; -type OptionalUnion, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? { [P in Exclude]?: never } & U : never; -export type Snapshot = Kit.Snapshot; -type PageParentData = EnsureDefined; -type LayoutRouteId = RouteId | "/" | "/conversation/[id]" | "/conversations" | "/login" | "/login/callback" | "/logout" | "/privacy" | "/r/[id]" | "/settings" | null -type LayoutParams = RouteParams & { id?: string } -type LayoutServerParentData = EnsureDefined<{}>; -type LayoutParentData = EnsureDefined<{}>; - -export type PageServerData = null; -export type PageData = Expand; -export type LayoutServerLoad = OutputDataShape> = Kit.ServerLoad; -export type LayoutServerLoadEvent = Parameters[0]; -export type LayoutServerData = Expand>>>>>; -export type LayoutData = Expand & EnsureDefined>; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/mithril-security/blind_chat/src/lib/actions/snapScrollToBottom.ts b/spaces/mithril-security/blind_chat/src/lib/actions/snapScrollToBottom.ts deleted file mode 100644 index b22a0648221f6b58853a910fb6286f79574a0246..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/lib/actions/snapScrollToBottom.ts +++ /dev/null @@ -1,54 +0,0 @@ -import { navigating } from "$app/stores"; -import { tick } from "svelte"; -import { get } from "svelte/store"; - -const detachedOffset = 10; - -/** - * @param node element to snap scroll to bottom - * @param dependency pass in a dependency to update scroll on changes. 
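 * @returns a Svelte action object exposing `update` (re-snaps scroll unless the user has scrolled away) and `destroy` (removes the scroll listener)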
- */ -export const snapScrollToBottom = (node: HTMLElement, dependency: unknown) => { - let prevScrollValue = node.scrollTop; - let isDetached = false; - - const handleScroll = () => { - // if user scrolled up, we detach - if (node.scrollTop < prevScrollValue) { - isDetached = true; - } - - // if user scrolled back to within 10px of bottom, we reattach - if (node.scrollTop - (node.scrollHeight - node.clientHeight) >= -detachedOffset) { - isDetached = false; - } - - prevScrollValue = node.scrollTop; - }; - - const updateScroll = async (_options: { force?: boolean } = {}) => { - const defaultOptions = { force: false }; - const options = { ...defaultOptions, ..._options }; - const { force } = options; - - if (!force && isDetached && !get(navigating)) return; - - // wait for next tick to ensure that the DOM is updated - await tick(); - - node.scrollTo({ top: node.scrollHeight }); - }; - - node.addEventListener("scroll", handleScroll); - - if (dependency) { - updateScroll({ force: true }); - } - - return { - update: updateScroll, - destroy: () => { - node.removeEventListener("scroll", handleScroll); - }, - }; -}; diff --git a/spaces/miyaaa666/bingo/src/lib/storage.ts b/spaces/miyaaa666/bingo/src/lib/storage.ts deleted file mode 100644 index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/lib/storage.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { getMany, set, del, clear } from 'idb-keyval'; - -export const Storage = { - async get(key: string | string[] | null): Promise { - if (key === null) return null; - if (typeof key === 'string') { - key = [key] - } - const returnData: Record = {} - const values = await getMany(key) - key.forEach((k, idx)=> { - returnData[k] = values[idx] - }) - return returnData; - }, - async set(object: any) { - for (let key of Object.keys(object)) { - await set(key, object[key]) - } - }, - async remove(key: string) { - return del(key); - }, - async clear() { - return clear(); - } -} diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/__init__.py deleted file mode 100644 index a1b0eabbdbcaf12b15bb96b329ab1e276256f79a..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hubert import * # noqa -from .hubert_asr import * # noqa diff --git a/spaces/mshukor/UnIVAL/fairseq/scripts/read_binarized.py b/spaces/mshukor/UnIVAL/fairseq/scripts/read_binarized.py deleted file mode 100644 index a414095d03fb022a6753e816fc8bfd80e11db24d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/scripts/read_binarized.py +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse - -from fairseq.data import Dictionary, data_utils, indexed_dataset - - -def get_parser(): - parser = argparse.ArgumentParser( - description="writes text from binarized file to stdout" - ) - # fmt: off - parser.add_argument('--dataset-impl', help='dataset implementation', - choices=indexed_dataset.get_available_dataset_impl()) - parser.add_argument('--dict', metavar='FP', help='dictionary containing known words', default=None) - parser.add_argument('--input', metavar='FP', required=True, help='binarized file to read') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - dictionary = Dictionary.load(args.dict) if args.dict is not None else None - dataset = data_utils.load_indexed_dataset( - args.input, - dictionary, - dataset_impl=args.dataset_impl, - default="lazy", - ) - - for tensor_line in dataset: - if dictionary is None: - line = " ".join([str(int(x)) for x in tensor_line]) - else: - line = dictionary.string(tensor_line) - - print(line) - - -if __name__ == "__main__": - main() diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/static/css/jquery.fancybox.css b/spaces/mygyasir/genious_bgremover/carvekit/web/static/css/jquery.fancybox.css deleted file mode 100644 index 367890a4af658d073d2b79c06829337d45434b84..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/genious_bgremover/carvekit/web/static/css/jquery.fancybox.css +++ /dev/null @@ -1,274 +0,0 @@ -/*! fancyBox v2.1.5 fancyapps.com | fancyapps.com/fancybox/#license */ -.fancybox-wrap, -.fancybox-skin, -.fancybox-outer, -.fancybox-inner, -.fancybox-image, -.fancybox-wrap iframe, -.fancybox-wrap object, -.fancybox-nav, -.fancybox-nav span, -.fancybox-tmp -{ - padding: 0; - margin: 0; - border: 0; - outline: none; - vertical-align: top; -} - -.fancybox-wrap { - position: absolute; - top: 0; - left: 0; - z-index: 8020; -} - -.fancybox-skin { - position: relative; - background: #f9f9f9; - color: #444; - text-shadow: none; - -webkit-border-radius: 4px; - -moz-border-radius: 4px; - border-radius: 4px; -} - -.fancybox-opened { - z-index: 8030; -} - -.fancybox-opened .fancybox-skin { - -webkit-box-shadow: 0 10px 25px rgba(0, 0, 0, 0.5); - -moz-box-shadow: 0 10px 25px rgba(0, 0, 0, 0.5); - box-shadow: 0 10px 25px rgba(0, 0, 0, 0.5); -} - -.fancybox-outer, .fancybox-inner { - position: relative; -} - -.fancybox-inner { - overflow: hidden; -} - -.fancybox-type-iframe .fancybox-inner { - -webkit-overflow-scrolling: touch; -} - -.fancybox-error { - color: #444; - font: 14px/20px "Helvetica Neue",Helvetica,Arial,sans-serif; - margin: 0; - padding: 15px; - white-space: nowrap; -} - -.fancybox-image, .fancybox-iframe { - display: block; - width: 100%; - height: 100%; -} - -.fancybox-image { - max-width: 100%; - max-height: 100%; -} - -#fancybox-loading, .fancybox-close, .fancybox-prev span, .fancybox-next span { - background-image: url('fancybox_sprite.png'); -} - -#fancybox-loading { - position: fixed; - top: 50%; - left: 50%; - margin-top: -22px; - margin-left: -22px; - background-position: 0 -108px; - opacity: 0.8; - cursor: pointer; - z-index: 8060; -} - -#fancybox-loading div { - width: 44px; - height: 44px; - background: url('fancybox_loading.gif') center center no-repeat; -} - -.fancybox-close { - position: absolute; - top: -18px; - right: -18px; - width: 36px; - height: 36px; - cursor: pointer; - z-index: 8040; -} - -.fancybox-nav { - position: absolute; - top: 0; - width: 40%; - height: 100%; - cursor: pointer; - text-decoration: none; - background: 
transparent url('blank.gif'); /* helps IE */ - -webkit-tap-highlight-color: rgba(0,0,0,0); - z-index: 8040; -} - -.fancybox-prev { - left: 0; -} - -.fancybox-next { - right: 0; -} - -.fancybox-nav span { - position: absolute; - top: 50%; - width: 36px; - height: 34px; - margin-top: -18px; - cursor: pointer; - z-index: 8040; - visibility: hidden; -} - -.fancybox-prev span { - left: 10px; - background-position: 0 -36px; -} - -.fancybox-next span { - right: 10px; - background-position: 0 -72px; -} - -.fancybox-nav:hover span { - visibility: visible; -} - -.fancybox-tmp { - position: absolute; - top: -99999px; - left: -99999px; - visibility: hidden; - max-width: 99999px; - max-height: 99999px; - overflow: visible !important; -} - -/* Overlay helper */ - -.fancybox-lock { - overflow: hidden !important; - width: auto; -} - -.fancybox-lock body { - overflow: hidden !important; -} - -.fancybox-lock-test { - overflow-y: hidden !important; -} - -.fancybox-overlay { - position: absolute; - top: 0; - left: 0; - overflow: hidden; - display: none; - z-index: 8010; - background: url('fancybox_overlay.png'); -} - -.fancybox-overlay-fixed { - position: fixed; - bottom: 0; - right: 0; -} - -.fancybox-lock .fancybox-overlay { - overflow: auto; - overflow-y: scroll; -} - -/* Title helper */ - -.fancybox-title { - visibility: hidden; - font: normal 13px/20px "Helvetica Neue",Helvetica,Arial,sans-serif; - position: relative; - text-shadow: none; - z-index: 8050; -} - -.fancybox-opened .fancybox-title { - visibility: visible; -} - -.fancybox-title-float-wrap { - position: absolute; - bottom: 0; - right: 50%; - margin-bottom: -35px; - z-index: 8050; - text-align: center; -} - -.fancybox-title-float-wrap .child { - display: inline-block; - margin-right: -100%; - padding: 2px 20px; - background: transparent; /* Fallback for web browsers that doesn't support RGBa */ - background: rgba(0, 0, 0, 0.8); - -webkit-border-radius: 15px; - -moz-border-radius: 15px; - border-radius: 15px; - text-shadow: 0 1px 2px #222; - color: #FFF; - font-weight: bold; - line-height: 24px; - white-space: nowrap; -} - -.fancybox-title-outside-wrap { - position: relative; - margin-top: 10px; - color: #fff; -} - -.fancybox-title-inside-wrap { - padding-top: 10px; -} - -.fancybox-title-over-wrap { - position: absolute; - bottom: 0; - left: 0; - color: #fff; - padding: 10px; - background: #000; - background: rgba(0, 0, 0, .8); -} - -/*Retina graphics!*/ -@media only screen and (-webkit-min-device-pixel-ratio: 1.5), - only screen and (min--moz-device-pixel-ratio: 1.5), - only screen and (min-device-pixel-ratio: 1.5){ - - #fancybox-loading, .fancybox-close, .fancybox-prev span, .fancybox-next span { - background-image: url('fancybox_sprite@2x.png'); - background-size: 44px 152px; /*The size of the normal image, half the size of the hi-res image*/ - } - - #fancybox-loading div { - background-image: url('fancybox_loading@2x.gif'); - background-size: 24px 24px; /*The size of the normal image, half the size of the hi-res image*/ - } -} \ No newline at end of file diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/spatial_transform.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/spatial_transform.py deleted file mode 100644 index 2de024ba08c549605a08b64d096f1f0db7b7722a..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/spatial_transform.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch -import 
torch.nn as nn -import torch.nn.functional as F -from kornia.geometry.transform import rotate - - -class LearnableSpatialTransformWrapper(nn.Module): - def __init__(self, impl, pad_coef=0.5, angle_init_range=80, train_angle=True): - super().__init__() - self.impl = impl - self.angle = torch.rand(1) * angle_init_range - if train_angle: - self.angle = nn.Parameter(self.angle, requires_grad=True) - self.pad_coef = pad_coef - - def forward(self, x): - if torch.is_tensor(x): - return self.inverse_transform(self.impl(self.transform(x)), x) - elif isinstance(x, tuple): - x_trans = tuple(self.transform(elem) for elem in x) - y_trans = self.impl(x_trans) - return tuple(self.inverse_transform(elem, orig_x) for elem, orig_x in zip(y_trans, x)) - else: - raise ValueError(f'Unexpected input type {type(x)}') - - def transform(self, x): - height, width = x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - x_padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode='reflect') - x_padded_rotated = rotate(x_padded, angle=self.angle.to(x_padded)) - return x_padded_rotated - - def inverse_transform(self, y_padded_rotated, orig_x): - height, width = orig_x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - - y_padded = rotate(y_padded_rotated, angle=-self.angle.to(y_padded_rotated)) - y_height, y_width = y_padded.shape[2:] - y = y_padded[:, :, pad_h : y_height - pad_h, pad_w : y_width - pad_w] - return y - - -if __name__ == '__main__': - layer = LearnableSpatialTransformWrapper(nn.Identity()) - x = torch.arange(2* 3 * 15 * 15).view(2, 3, 15, 15).float() - y = layer(x) - assert x.shape == y.shape - assert torch.allclose(x[:, :, 1:, 1:][:, :, :-1, :-1], y[:, :, 1:, 1:][:, :, :-1, :-1]) - print('all ok') diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CPUCores ClearMem Lite Full Crack __HOT__ [License].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CPUCores ClearMem Lite Full Crack __HOT__ [License].md deleted file mode 100644 index 5de4c9535a08910e45bab97741d6d57ec0ba76dd..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CPUCores ClearMem Lite Full Crack __HOT__ [License].md +++ /dev/null @@ -1,138 +0,0 @@ -
        -

        CPUCores :: ClearMem Lite Full Crack [License] - How to Download and Install

        -

        Do you want to boost your gaming experience by freeing up your RAM for better performance? Do you want to use a powerful software that can clear out the memory that is being wasted by your OS and unoptimized programs? If yes, then you might be interested in CPUCores :: ClearMem Lite, a DLC expansion of CPUCores that can do all that and more. But before you rush to buy it, you might want to know how to get it for free with a full crack [License]. In this article, we will show you what CPUCores :: ClearMem Lite is, why you need a crack for it, how to find a reliable crack for it, and how to download and install it on your PC. Read on to find out more.

        -

        CPUCores :: ClearMem Lite full crack [License]


        Download Zip ——— https://urlcod.com/2uI9xm



        -

        What is CPUCores :: ClearMem Lite?

        -

        CPUCores :: ClearMem Lite is a downloadable content (DLC) for CPUCores :: Maximize Your FPS, a software that optimizes your CPU usage and performance for gaming. CPUCores :: ClearMem Lite adds a new feature that allows you to clear out RAM that is being wasted by your OS and unoptimized programs. This will enable your games to fully utilize more of your system's RAM, which can improve your FPS, load times, graphics quality, and overall gaming experience.

        -

        Features and benefits of CPUCores :: ClearMem Lite

        -

        Some of the features and benefits of CPUCores :: ClearMem Lite are:

        -
          -
        • It can free up RAM that is being used by your OS and other programs that are not essential for gaming.
        • -
        • It can reduce the chances of getting the "out of memory" error from Windows or your games.
        • -
        • It can increase the speed and efficiency of your PC by reducing the memory fragmentation and swapping.
        • -
        • It can enhance the performance of your games by allowing them to load more components into RAM, such as maps, graphics, sounds, and variables.
        • -
        • It can work with any game or application that runs on Windows.
        • -
        • It can be easily activated or deactivated with a single click from the CPUCores interface.
        • -
        -

        System requirements and compatibility of CPUCores :: ClearMem Lite

        -

        To use CPUCores :: ClearMem Lite, you need to have the following system requirements:

        -
          -
        • A Windows PC with at least 4 GB of RAM (8 GB or more recommended).
        • -
        • The base application CPUCores :: Maximize Your FPS installed on Steam.
        • -
        • A Steam account with a valid license for both CPUCores :: Maximize Your FPS and CPUCores :: ClearMem Lite.
        • -
        -

        CPUCores :: ClearMem Lite is compatible with Windows 7, 8, 8.1, 10 (32-bit or 64-bit).

        -

        Why do you need a crack for CPUCores :: ClearMem Lite?

        -

        If you want to use CPUCores :: ClearMem Lite, you need to pay $4.99 for the DLC on Steam. This might seem like a reasonable price for some, but not for others who are on a tight budget or who do not want to spend money on software that they can get for free. That is why some people look for a crack for CPUCores :: ClearMem Lite, which is a way of bypassing the license verification and activation process of the software and using it without paying anything.

        -

        -

        The disadvantages of using the official version of CPUCores :: ClearMem Lite

        -

        Some of the disadvantages of using the official version of CPUCores :: ClearMem Lite are:

        -
          -
        • You need to pay $4.99 for the DLC, which might not be affordable or worth it for some users.
        • -
        • You need to have a Steam account and a valid license for both CPUCores :: Maximize Your FPS and CPUCores :: ClearMem Lite, which might be inconvenient or problematic for some users.
        • -
        • You need to have an internet connection and log in to Steam every time you want to use CPUCores :: ClearMem Lite, which might be slow or unreliable for some users.
        • -
        • You need to update CPUCores :: ClearMem Lite regularly through Steam, which might consume your bandwidth or cause compatibility issues with your system or games.
        • -
        • You need to abide by the terms and conditions of Steam and CPUCores, which might limit your freedom or privacy as a user.
        • -
        -

        The advantages of using a cracked version of CPUCores :: ClearMem Lite

        -

        Some of the advantages of using a cracked version of CPUCores :: ClearMem Lite are:

        -
          -
        • You do not need to pay anything for the DLC, which can save you money and hassle.
        • -
        • You do not need to have a Steam account or a valid license for CPUCores :: ClearMem Lite, which can simplify your installation and usage process.
        • -
        • You do not need to have an internet connection or log in to Steam every time you want to use CPUCores :: ClearMem Lite, which can speed up your performance and reliability.
        • -
        • You do not need to update CPUCores :: ClearMem Lite regularly through Steam, which can save your bandwidth and avoid compatibility issues with your system or games.
        • -
        • You do not need to abide by the terms and conditions of Steam and CPUCores, which can give you more freedom and privacy as a user.
        • -
        -

        How to find a reliable crack for CPUCores :: ClearMem Lite?

        -

        Now that you know why you might want to use a crack for CPUCores :: ClearMem Lite, you might be wondering how to find one. There are many websites and sources that claim to offer cracks for various software, but not all of them are trustworthy or working. In fact, some of them might be scams, viruses, malware, or spyware that can harm your PC or steal your personal information. Therefore, you need to be careful and selective when looking for a crack for CPUCores :: ClearMem Lite.

        -

        The risks of using untrusted sources for cracks

        -

        Some of the risks of using untrusted sources for cracks are:

        -
          -
        • You might download a fake or corrupted crack file that does not work or causes errors on your PC.
        • -
        • You might download a virus, malware, spyware, or ransomware that infects your PC and damages your files, programs, or system.
        • -
        • You might download a crack file that contains hidden code that steals your personal information, such as your passwords, credit card numbers, bank accounts, or identity.
        • -
        • You might download a crack file that installs unwanted programs, toolbars, ads, or pop-ups on your PC that slow down your performance or annoy you.
        • -
        • You might download a crack file that violates the law or infringes the intellectual property rights of the software developers or publishers.
        • -
        -

        The criteria for choosing a safe and working crack for CPUCores :: ClearMem Lite

        -

        Some of the criteria for choosing a safe and working crack for CPUCores :: ClearMem Lite are:

        -
          -
        • The source should be reputable and reliable, with positive reviews and feedback from other users.
        • -
        • The source should provide clear and detailed instructions on how to download and install the crack file.
        • -
        • The source should offer multiple download links from different servers or mirrors.
        • -
        • The source should scan the crack file with antivirus software and provide proof of its safety and cleanliness.
        • -
        • The source should update the crack file regularly to ensure its compatibility and functionality with the latest version of CPUCores :: ClearMem Lite.
        • -
        -

        The best sites to find serial keys and crack files for CPUCores :: ClearMem Lite

        -

        Based on the criteria above, we have selected some of the best sites to find serial keys and crack files for CPUCores :: ClearMem Lite. These sites are:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        Site NameSite URLSite Description
        CrackWatch[1](https://crackwatch.com/game/cpucore-clearmem-lite)A website that tracks the crack status of various games and software, and provides links to verified cracks from trusted sources.
        Crack4Windows[2](https://crack4windows.com/crack?s=cpucores-maximize-your-fps&id=112074)A website that offers free download of serial keys and crack files for various Windows software, including CPUCores :: Maximize Your FPS and CPUCores :: ClearMem Lite.
        CrackNest[3](https://cracknest.com/2021/09/cpucore-clearmem-lite-crack.html)A website that provides latest and working crack files for various games and software, along with installation guides and screenshots.
        CrackedPC[4](https://crackedpc.org/cpucore-clearmem-lite-crack-license-key/)A website that offers free download of full version software with crack, patch, keygen, and license key, including CPUCores :: ClearMem Lite.
        CrackDLL[5](https://crackdll.com/cpucore-clearmem-lite-crack-free-download/)A website that provides direct download links for crack files for various software, without any surveys or ads.
        -

        Please note that we do not endorse or guarantee the safety or legality of these sites or their content. Use them at your own risk and discretion.

        -

        How to download and install CPUCores :: ClearMem Lite full crack [License]?

        -

        After you have found a reliable crack for CPUCores :: ClearMem Lite, you can proceed to download and install it on your PC. Here are the general steps to follow:

        -

        Step-by-step guide for downloading and installing CPUCores :: ClearMem Lite full crack [License]

        -
          -
        1. Download the crack file from one of the sites mentioned above, or any other source that you trust.
        2. -
        3. Extract the crack file using a file archiver program such as WinRAR or 7-Zip.
        4. -
        5. Copy the crack file and paste it into the installation folder of CPUCores :: Maximize Your FPS on your PC. The default location is C:\Program Files (x86)\Steam\steamapps\common\CPUCores.
        6. -
        7. Run the crack file as administrator and follow the instructions on the screen.
        8. -
        9. Launch CPUCores :: Maximize Your FPS from Steam and activate CPUCores :: ClearMem Lite from the interface.
        10. -
        11. Enjoy using CPUCores :: ClearMem Lite full crack [License] for free.
        12. -
        -

        Tips and tricks for optimizing CPUCores :: ClearMem Lite performance

        -

        To get the most out of CPUCores :: ClearMem Lite, you can follow these tips and tricks:

        -
          -
        • Use CPUCores :: ClearMem Lite before launching your game or application, and deactivate it after closing it.
        • -
        • Adjust the settings of CPUCores :: ClearMem Lite according to your system specifications and preferences. You can choose between three modes: Basic, Advanced, and Ultra. You can also customize the amount of RAM to clear, the frequency of clearing, and the priority of clearing.
        • -
        • Monitor your RAM usage and performance with the built-in graphs and statistics of CPUCores :: ClearMem Lite. You can see how much RAM is being used by your OS, programs, and games, and how much RAM is being freed by CPUCores :: ClearMem Lite.
        • -
        • Combine CPUCores :: ClearMem Lite with other features of CPUCores :: Maximize Your FPS, such as CPU optimization, game boosting, Steam integration, and system tweaking. This will further enhance your gaming experience and performance.
        • -
        • Update CPUCores :: ClearMem Lite regularly through Steam or through the crack file source. This will ensure that you have the latest version of the software with bug fixes and improvements.
        • -
        -

        Conclusion

        -

        In conclusion, CPUCores :: ClearMem Lite is a DLC expansion of CPUCores :: Maximize Your FPS that can clear out RAM that is being wasted by your OS and unoptimized programs. This can improve your gaming performance and experience by allowing your games to fully utilize more of your system's RAM. However, if you do not want to pay $4.99 for the DLC on Steam, you can use a crack for CPUCores :: ClearMem Lite that can bypass the license verification and activation process and let you use it for free. In this article, we have shown you what CPUCores :: ClearMem Lite is, why you need a crack for it, how to find a reliable crack for it, and how to download and install it on your PC. We have also given you some tips and tricks for optimizing CPUCores :: ClearMem Lite performance. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

        -

        FAQs

        -

        Here are some frequently asked questions about CPUCores :: ClearMem Lite full crack [License]:

        -
          -
        1. Is CPUCores :: ClearMem Lite safe to use?
        2. -

          CPUCores :: ClearMem Lite is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, using a crack for CPUCores :: ClearMem Lite might expose you to some risks, such as viruses, malware, spyware, or legal issues. Therefore, use a crack for CPUCores :: ClearMem Lite at your own risk and discretion.

          -
        3. Does CPUCores :: ClearMem Lite work with all games and applications?
        4. -

          CPUCores :: ClearMem Lite works with any game or application that runs on Windows. However, some games or applications might not benefit from CPUCores :: ClearMem Lite as much as others, depending on their RAM usage and optimization. Therefore, you might need to experiment with different settings and modes of CPUCores :: ClearMem Lite to find the best one for your game or application.

          -
        5. Can I use CPUCores :: ClearMem Lite without CPUCores :: Maximize Your FPS?
        6. -

          No, you cannot use CPUCores :: ClearMem Lite without CPUCores :: Maximize Your FPS. CPUCores :: ClearMem Lite is a DLC expansion of CPUCores :: Maximize Your FPS, which means that you need to have the base application installed on Steam in order to use CPUCores :: ClearMem Lite. If you do not have CPUCores :: Maximize Your FPS, you can buy it for $14.99 on Steam or use a crack for it as well.

          -
        7. How much RAM can CPUCores :: ClearMem Lite free up?
        8. -

          The amount of RAM that CPUCores :: ClearMem Lite can free up depends on several factors, such as your system specifications, your OS settings, your running programs, and your chosen mode and settings of CPUCores :: ClearMem Lite. Generally speaking, CPUCores :: ClearMem Lite can free up anywhere from a few hundred megabytes to several gigabytes of RAM.

          -
        9. How can I contact the developers or support team of CPUCores :: ClearMem Lite?
        10. -

          If you have any issues or feedback regarding CPUCores :: ClearMem Lite, you can contact the developers or support team of CPUCores through their official website [6](https://cpucor.es/), their Steam page [7](https://store.steampowered.com/app/384300/CPUCores__Maximize_Your_FPS/), their Discord server [8](https://discord.gg/cpucor), or their email address support@cpucor.es.

          -

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/FREE Download Main Aur Charles 720p Or 1080p.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/FREE Download Main Aur Charles 720p Or 1080p.md deleted file mode 100644 index 45f4d8bdcb1b86df24810cd9c9bc42a854b7a92d..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/FREE Download Main Aur Charles 720p Or 1080p.md +++ /dev/null @@ -1,26 +0,0 @@ -
        -

        How to Download Main Aur Charles (2015) Movie in HD Quality

        -

        Main Aur Charles is a 2015 Bollywood movie based on the life of notorious serial killer and conman Charles Sobhraj. The movie stars Randeep Hooda as Charles, Adil Hussain as Commissioner Amod Kant, Richa Chadda as Mira Sharma, and Tisca Chopra as Reena. The movie revolves around Charles' escape from prison and his subsequent manhunt by Amod Kant, who is determined to catch him. The movie also explores the lives of various people who are charmed, manipulated, or affected by Charles in some way.

        -

        download main aur charles 720p or 1080p


        Download Ziphttps://urlcod.com/2uIc8u



        -

        If you are a fan of crime thrillers and biopics, you might want to watch Main Aur Charles online or download it in HD quality. However, finding a reliable and legal source to download Main Aur Charles 720p or 1080p can be tricky. Many websites claim to offer free downloads of Main Aur Charles movie, but they may be unsafe, illegal, or contain malware. Moreover, downloading pirated content can land you in trouble with the law and harm the filmmakers.

        -

        Therefore, we have compiled a list of some of the best and legal ways to download Main Aur Charles 720p or 1080p movie online. These sources are verified, secure, and offer high-quality downloads of Main Aur Charles movie. You can choose any of these options according to your preference and budget.

        -

        Option 1: Download Main Aur Charles Movie from Torrent Sites

        -

        One of the most popular and easy ways to download Main Aur Charles 720p or 1080p movie is to use torrent sites. Torrent sites are platforms that allow users to share files over a peer-to-peer network. You can find almost any movie or TV show on torrent sites, including Main Aur Charles movie.

        -

        However, there are some drawbacks and risks associated with using torrent sites. First of all, torrenting is illegal in many countries and can result in fines or legal action. Secondly, torrent sites are often unregulated and may contain viruses, malware, or fake files that can harm your device or compromise your privacy. Thirdly, torrenting can be slow and unreliable depending on the availability and speed of seeders and leechers.

        -

        -

        If you still want to use torrent sites to download Main Aur Charles 720p or 1080p movie, you will need a torrent client software such as BitTorrent or uTorrent. You will also need a VPN service to hide your IP address and encrypt your traffic. A VPN will help you bypass geo-restrictions and ISP throttling while torrenting.

        -

        Once you have these tools ready, you can follow these steps to download Main Aur Charles movie from torrent sites:

        -
          -
        1. Go to a torrent site that has Main Aur Charles movie available for download. Some of the popular torrent sites are The Pirate Bay[^1^], Torrentv[^2^], RARBG[^3^], etc.
        2. -
        3. Search for Main Aur Charles movie using the search bar or browse through the categories.
        4. -
        5. Select the torrent file that has the best quality, size, and seeders/leechers ratio. You can also check the comments and ratings of the torrent file before downloading it.
        6. -
        7. Click on the download button or magnet link to open the torrent file in your torrent client software.
        8. -
        9. Choose a location on your device where you want to save the downloaded file.
        10. -
        11. Wait for the download to complete. The speed and time of the download will depend on various factors such as your internet connection, number of seeders/leechers, etc.
        12. -
        13. Once the download is finished, you can enjoy watching Main Aur Charles movie in HD quality.
        14. -
        -

        Option 2: Download Main Aur Charles Movie from Streaming Sites

        -

        Another option to download Main Aur Charles 720p or 1080p movie is to use streaming sites. Streaming sites are platforms that allow users to watch movies and TV shows online without downloading them. You can find a variety of content on streaming sites, including Main Aur Charles movie.

        -

        However, there are some drawbacks and risks associated with using streaming sites. First of all, streaming sites may

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Nokia Best Bb5 Easy Service Tool Crack WORK Latest Version.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Nokia Best Bb5 Easy Service Tool Crack WORK Latest Version.md deleted file mode 100644 index 18f47ef8944f29495c2016eaf90acd85d351cb29..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Nokia Best Bb5 Easy Service Tool Crack WORK Latest Version.md +++ /dev/null @@ -1,36 +0,0 @@ - -

        How to Download and Use Nokia Best Bb5 Easy Service Tool Crack Latest Version

        -

        If you are looking for a software that can help you flash, unlock, and repair your Nokia mobile phones, then you might want to try Nokia Best Bb5 Easy Service Tool Crack Latest Version. This tool is a modified version of the official Nokia Best Bb5 Easy Service Tool that does not require a dongle or a box to run. In this article, we will show you how to download and use this tool to service your Nokia devices.

        -

        What is Nokia Best Bb5 Easy Service Tool Crack Latest Version?

        -

        Nokia Best Bb5 Easy Service Tool Crack Latest Version is a software that allows you to flash firmware, unlock network, reset user code, and perform other operations on Nokia mobile phones powered by BB5, MeeGo, MediaTek, and NXPlatform chipsets[^1^]. It supports a wide range of Nokia models, including feature phones and smartphones. It also has a user-friendly interface that makes it easy to use even for beginners.

        -

        Nokia Best Bb5 Easy Service Tool Crack Latest Version


        DOWNLOAD 🌟 https://urlcod.com/2uIaHZ



        -

        How to Download Nokia Best Bb5 Easy Service Tool Crack Latest Version?

        -

        To download Nokia Best Bb5 Easy Service Tool Crack Latest Version, you need to follow these steps:

        -
          -
        1. Click on this link[^3^] to go to the download page.
        2. -
        3. Select the latest version of the tool (v2.29) and click on the download button.
        4. -
        5. Wait for the download to complete and then extract the zip file using WinRAR or any other extraction software.
        6. -
        7. You will get a folder containing the setup file, the USB driver, and the tutorial.
        8. -
        -

        How to Install Nokia Best Bb5 Easy Service Tool Crack Latest Version?

        -

        To install Nokia Best Bb5 Easy Service Tool Crack Latest Version, you need to follow these steps:

        -
          -
        1. Run the setup file (InfinityBox_install_BEST_v2.29.exe) as administrator and follow the installation wizard.
        2. -
        3. After the installation is complete, open the BEST folder and run the BEST.exe file as administrator.
        4. -
        5. You will see the main interface of the tool. You can now connect your Nokia device to the computer using a USB cable.
        6. -
        7. Make sure you have installed the USB driver for your device. If not, you can find it in the USB Driver folder inside the BEST folder.
        8. -
        -

        How to Use Nokia Best Bb5 Easy Service Tool Crack Latest Version?

        -

        To use Nokia Best Bb5 Easy Service Tool Crack Latest Version, you need to follow these steps:

        -
          -
        1. Select the platform of your device (BB5, MeeGo, MediaTek, or NXPlatform) from the drop-down menu at the top left corner of the tool.
        2. -
        3. Depending on what operation you want to perform, go to the Flashing tab or the Service tab.
        4. -
        5. If you want to flash firmware on your device, go to the Flashing tab and choose the firmware file from your computer. You can also download firmware from online sources using the Download Firmware button. Then click on FLASH and follow the instructions on how to put your device in flash mode.
        6. -
        7. If you want to unlock or repair your device, go to the Service tab and choose the operation you want to perform. For example, if you want to reset user code, click on Reset (User Code) and follow the instructions on how to put your device in test mode.
        8. -
        9. Wait for the process to complete and then disconnect your device from the computer.
        10. -
        -

        Conclusion

        -

        Nokia Best Bb5 Easy Service Tool Crack Latest Version is a handy software that can help you service your Nokia mobile phones without any dongle or box. It can flash firmware, unlock network, reset user code, and perform other operations on various Nokia models. You can download it from this link[^3^] and follow our guide on how to install and use it. However, please note that this is a crack tool/application that may not be legal or safe to use. We do not recommend using it

        -

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Shaolin Soccer 1080p Ganool Indonesia HOT.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Shaolin Soccer 1080p Ganool Indonesia HOT.md deleted file mode 100644 index c6257e555016da23148f8fa1976d8f11543e1831..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Shaolin Soccer 1080p Ganool Indonesia HOT.md +++ /dev/null @@ -1,24 +0,0 @@ - -

        Shaolin Soccer: A Hilarious and Action-Packed Comedy

        -

        Shaolin Soccer is a 2001 Hong Kong-Chinese comedy film directed by and starring Stephen Chow. The film tells the story of a young Shaolin monk who reunites with his brothers to form a soccer team using their martial arts skills to their advantage. The film was a huge success in Asia and received positive reviews from critics and audiences worldwide.

        -

        Shaolin Soccer 1080p Ganool Indonesia


        Download File ––– https://urlcod.com/2uIazg



        -

        The film is available for download in high quality 1080p resolution with Indonesian subtitles from various sources, such as Broflix.net[^1^], Adikfilm.click[^2^], and Subdl.com[^3^]. These sites offer fast and easy download options without annoying ads or pop-ups. You can enjoy watching this hilarious and action-packed comedy on your device anytime and anywhere.

        -

        Shaolin Soccer is a film that will make you laugh, cheer, and marvel at the amazing stunts and special effects. It is a perfect blend of sports, comedy, and martial arts that will appeal to fans of all genres. If you are looking for a fun and entertaining film to watch, you should definitely check out Shaolin Soccer.

        - -

        Shaolin Soccer features a talented cast of actors and martial artists who bring the characters to life with humor and charisma. Stephen Chow plays Sing, the optimistic and determined Shaolin monk who dreams of spreading kung fu to the world. Zhao Wei plays Mui, a shy and insecure baker who falls in love with Sing and uses her tai chi skills to make delicious bread. Ng Man-tat plays Fung, the former soccer star who becomes Sing's mentor and coach. Patrick Tse plays Hung, the ruthless and greedy owner of Team Evil who betrayed Fung in the past. Danny Chan Kwok-kwan plays Brother Sum, Sing's eldest brother who specializes in kicking techniques.

        -

        The film also features Sing's other brothers, each with their own unique kung fu style and personality. They are Iron Head (Wong Yat-fei), Iron Shirt (Tin Kai-man), Hooking Leg (Mok Mei-lam), Light Weight Vest (Lam Tze-chung), and Empty Hand (Lee Kin-yan). Together, they form Team Shaolin, a formidable force on the soccer field.

        -

        -

        The film is full of hilarious moments and references to other films and pop culture icons, such as Bruce Lee, The Matrix, Sailor Moon, and Dragon Ball Z. The film also showcases impressive martial arts choreography and visual effects that enhance the action scenes. The film has a positive message about teamwork, friendship, and following your dreams.

        - -

            Shaolin Soccer has received critical acclaim and commercial success for its originality and entertainment value. The film won numerous awards, including Best Picture, Best Director, Best Actor, and Best Visual Effects at the Hong Kong Film Awards. The film also holds an 89% rating on Rotten Tomatoes, with the critics' consensus stating: "The plot is utterly ridiculous, and the soccer in the movie is unlike any ever played anywhere on Earth, but watching Shaolin Soccer, you will probably find it impossible to care."[^1^]
    

        -

        The film is also full of memorable quotes that showcase the wit and humor of the script. Some of the best ones are:

        -
          -
        • \"With kung fu, you can do anything!\" - Sing
        • -
        • \"I'm not a human being. I'm a beast!\" - Team Evil player
        • -
        • \"You're not a soccer player. You're a Shaolin master!\" - Fung
        • -
        • \"I don't want to be your friend. I want to be your wife.\" - Mui
        • -
        • \"We're not just playing soccer. We're playing Shaolin soccer.\" - Sing
        • -
        -

        If you are looking for a fun and funny film that will make you laugh and cheer, you should definitely watch Shaolin Soccer. It is a film that celebrates the power of kung fu, soccer, and friendship. It is a film that will make you feel good and inspire you to follow your dreams.

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Symantec Ghost 11.5 Download Full Version HOT.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Symantec Ghost 11.5 Download Full Version HOT.md deleted file mode 100644 index 57fa933b2cb41d3de16a63e6e4cd199bcaa29edc..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Symantec Ghost 11.5 Download Full Version HOT.md +++ /dev/null @@ -1,34 +0,0 @@ - -

        How to Download and Install Symantec Ghost 11.5

        -

            Symantec Ghost 11.5 is a powerful tool that allows you to create, manage, and restore disk images of your computer. It can help you back up your data, migrate your system to new hardware, or recover from a disaster. In this article, we will show you how to download and install Symantec Ghost 11.5 on your Windows PC.
    

        -

        Step 1: Download Symantec Ghost 11.5

        -

        To download Symantec Ghost 11.5, you need to have a valid license key and a support contract with Broadcom, the company that owns Symantec products. If you have these, you can follow these steps:

        -

        Symantec Ghost 11.5 Download Full Version


        DOWNLOAD ☆☆☆☆☆ https://urlcod.com/2uIa1I



        - -

        If you do not have a license key or a support contract, you can try to find an older version of Symantec Ghost 11.5 on the Internet Archive [^2^]. However, this is not recommended as it may not be secure or compatible with your system.

        -

        Step 2: Install Symantec Ghost 11.5

        -

        Once you have downloaded the Symantec Ghost 11.5 installer file, you can follow these steps to install it on your PC:

        -
          -
        • Double-click on the installer file to launch it.
        • -
        • Follow the on-screen instructions to accept the license agreement and choose the installation location.
        • -
        • Select the components you want to install. You can choose between "Ghost Standard Tools" and "Ghost Solution Suite". The former includes only the basic features of Symantec Ghost 11.5, while the latter includes additional features such as remote management and deployment.
        • -
        • Click on "Install" to start the installation process.
        • -
        • Wait for the installation to complete and click on "Finish" to exit the installer.
        • -
        -

        Step 3: Use Symantec Ghost 11.5

        -

        After installing Symantec Ghost 11.5, you can use it to create and restore disk images of your computer. To do so, you can follow these steps:

        -

        -
          -
        • Launch Symantec Ghost 11.5 from the Start menu or the desktop shortcut.
        • -
            • Select the option that suits your needs. You can choose between "Local", "Peer-to-Peer", and "GhostCast Server". The first option allows you to create and restore disk images on your local computer, the second option allows you to create and restore disk images between two computers connected by a network cable, and the third option allows you to create and restore disk images over a network using a server.
    
        • -
        • Follow the on-screen instructions to select the source and destination of your disk image, choose the compression level and encryption options, and start the process.
        • -
        • Wait for the process to complete and verify that your disk image is created or restored successfully.
        • -

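            -

            If you prefer to script these operations instead of clicking through the menus, Ghost also ships with a command-line interface (ghost32.exe). The two example commands below are an illustrative sketch only: the disk number, the image path, and the exact switch syntax are assumptions to confirm against the documentation for your Ghost build before running them.

            
    ghost32.exe -clone,mode=dump,src=1,dst=E:\backups\disk1.gho -sure
    ghost32.exe -clone,mode=load,src=E:\backups\disk1.gho,dst=1 -sure
            

            The first command writes an image of local disk 1 to E:\backups\disk1.gho; the second restores that image onto disk 1, overwriting everything on it. The -sure switch suppresses the final confirmation prompt, so double-check the src and dst values before putting these in a script.

    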
        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/nickprock/nickprock-bert-italian-finetuned-ner/app.py b/spaces/nickprock/nickprock-bert-italian-finetuned-ner/app.py deleted file mode 100644 index 0dabbb3962a6a4a0a47dec3d6e614ecf75dfc490..0000000000000000000000000000000000000000 --- a/spaces/nickprock/nickprock-bert-italian-finetuned-ner/app.py +++ /dev/null @@ -1,24 +0,0 @@ - -from transformers import pipeline - -import gradio as gr - -ner_pipeline = pipeline("ner", model="nickprock/bert-italian-finetuned-ner", aggregation_strategy=None) - -examples = [ - ["Domani andrò allo stadio con Giovanna a vedere la Fiorentina"], - ["La sede storica della Olivetti è ad Ivrea"], - ["Ieri sera c'è stato Harry Potter in TV"] - -] - -def ner(text): - output = ner_pipeline(text) - return {"text": text, "entities": output} - -demo = gr.Interface(ner, - gr.Textbox(placeholder="Inserisci una frase qui..."), - gr.HighlightedText(), - examples=examples) - -demo.launch() diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tools/convert-torchvision-to-d2.py b/spaces/nikitaPDL2023/assignment4/detectron2/tools/convert-torchvision-to-d2.py deleted file mode 100644 index 4b827d960cca69657e98bd89a9aa5623a847099d..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tools/convert-torchvision-to-d2.py +++ /dev/null @@ -1,56 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. - -import pickle as pkl -import sys -import torch - -""" -Usage: - # download one of the ResNet{18,34,50,101,152} models from torchvision: - wget https://download.pytorch.org/models/resnet50-19c8e357.pth -O r50.pth - # run the conversion - ./convert-torchvision-to-d2.py r50.pth r50.pkl - - # Then, use r50.pkl with the following changes in config: - -MODEL: - WEIGHTS: "/path/to/r50.pkl" - PIXEL_MEAN: [123.675, 116.280, 103.530] - PIXEL_STD: [58.395, 57.120, 57.375] - RESNETS: - DEPTH: 50 - STRIDE_IN_1X1: False -INPUT: - FORMAT: "RGB" - - These models typically produce slightly worse results than the - pre-trained ResNets we use in official configs, which are the - original ResNet models released by MSRA. -""" - -if __name__ == "__main__": - input = sys.argv[1] - - obj = torch.load(input, map_location="cpu") - - newmodel = {} - for k in list(obj.keys()): - old_k = k - if "layer" not in k: - k = "stem." 
+ k - for t in [1, 2, 3, 4]: - k = k.replace("layer{}".format(t), "res{}".format(t + 1)) - for t in [1, 2, 3]: - k = k.replace("bn{}".format(t), "conv{}.norm".format(t)) - k = k.replace("downsample.0", "shortcut") - k = k.replace("downsample.1", "shortcut.norm") - print(old_k, "->", k) - newmodel[k] = obj.pop(old_k).detach().numpy() - - res = {"model": newmodel, "__author__": "torchvision", "matching_heuristics": True} - - with open(sys.argv[2], "wb") as f: - pkl.dump(res, f) - if obj: - print("Unconverted keys:", obj.keys()) diff --git a/spaces/nkatraga/7.22.CarePlanQnAWithContext/README.md b/spaces/nkatraga/7.22.CarePlanQnAWithContext/README.md deleted file mode 100644 index c130a284cb9f860d87409f09a364c3b1ea9668dc..0000000000000000000000000000000000000000 --- a/spaces/nkatraga/7.22.CarePlanQnAWithContext/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 7.22.CarePlanQnAWithContext -emoji: 📊 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nsakki55/my-aim-demo/Dockerfile b/spaces/nsakki55/my-aim-demo/Dockerfile deleted file mode 100644 index b75923c1c7b0c081b4a2779b03ee1779dac9dffd..0000000000000000000000000000000000000000 --- a/spaces/nsakki55/my-aim-demo/Dockerfile +++ /dev/null @@ -1,40 +0,0 @@ -FROM ubuntu:kinetic - -# Doesn't usually have an "upgrade" -RUN apt-get update \ - && DEBIAN_FRONTEND=noninteractive \ - apt-get install --no-install-recommends --assume-yes \ - build-essential \ - python3 \ - python3-dev \ - python3-pip - - -RUN useradd -m -u 1000 aim_user - -# Switch to the "aim_user" user -USER aim_user - -# Set home to the user's home directory -ENV HOME=/home/aim_user \ - PATH=/home/aim_user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME - -# install the `aim` package on the latest version -RUN pip install aim - -RUN aim telemetry off - -ENTRYPOINT ["/bin/sh", "-c"] - -COPY aim_repo.tar.gz . -RUN tar xvzf aim_repo.tar.gz -# have to run `aim init` in the directory that stores aim data for -# otherwise `aim up` will prompt for confirmation to create the directory itself. -# We run aim listening on 0.0.0.0 to expose all ports. Also, we run -# using `--dev` to print verbose logs. Port 43800 is the default port of -# `aim up` but explicit is better than implicit. 
-CMD ["aim up --host 0.0.0.0 --port 7860 --workers 2"] - diff --git a/spaces/oconnoob/audio-intelligence-dashboard/app/styles.css b/spaces/oconnoob/audio-intelligence-dashboard/app/styles.css deleted file mode 100644 index 68732c9045b0cf8edf429c22258a731405928918..0000000000000000000000000000000000000000 --- a/spaces/oconnoob/audio-intelligence-dashboard/app/styles.css +++ /dev/null @@ -1,134 +0,0 @@ -body { - font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, - Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; -} - -.logo { - width: 180px; -} - -.title { - font-weight: 600; - text-align: left; - color: black; - font-size: 18px; -} - -.alert, -#component-2, -#component-3 { - padding: 24px; - color: black; - background-color: #f4f8fb; - border: 1px solid #d6dce7; - border-radius: 8px; - box-shadow: 0px 6px 15px rgb(0 0 0 / 2%), 0px 2px 5px rgb(0 0 0 / 4%); -} - -ol { - list-style: disc; -} - -.alert__info { - background-color: #f4f8fb; - color: #323552; -} - -.alert__warning { - background-color: #fffae5; - color: #917115; - border: 1px solid #e4cf2b; -} - -#pw { - -webkit-text-security: disc; -} - -/* unvisited link */ -a:link { - color: #6b2bd6; -} - -/* visited link */ -a:visited { - color: #6b2bd6; -} - -/* mouse over link */ -a:hover { - color: #6b2bd6; -} - -/* selected link */ -a:active { - color: #6b2bd6; -} - -li { - margin-left: 1em; -} - -.apikey { -} - -.entity-list { - color: #6b2bd6; - font-size: 16px -} - -.entity-elt { - color: black -}.istopic { -color: #6b2bd6; -} - -.topic-L0 { -font-size: 30px; -text-indent: 0px; -} - -.topic-L1 { -font-size: 25px; -text-indent: 18px; -} - -.topic-L2 { -font-size: 20px; -text-indent: 36px; -} - -.topic-L3 { -font-size: 15px; -text-indent: 54px; -} - -.topic-L4 { -font-size: 15px; -text-indent: 72px; -} - -.topic-L5 { -font-size: 15px; -text-indent: 90px; -} - -.topic-L6 { -font-size: 15px; -text-indent: 108px; -} - -.topic-L7 { -font-size: 15px; -text-indent: 126px; -} - -.topic-L8 { -font-size: 15px; -text-indent: 144px; -} - -.topic-L9 { -font-size: 15px; -text-indent: 162px; -} - diff --git a/spaces/oliver2023/chatgpt-on-wechat/voice/pytts/pytts_voice.py b/spaces/oliver2023/chatgpt-on-wechat/voice/pytts/pytts_voice.py deleted file mode 100644 index 2e9cdc0454ce89d7d7d32992951dd7e90c8173a1..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/voice/pytts/pytts_voice.py +++ /dev/null @@ -1,37 +0,0 @@ - -""" -pytts voice service (offline) -""" - -import time -import pyttsx3 -from bridge.reply import Reply, ReplyType -from common.log import logger -from common.tmp_dir import TmpDir -from voice.voice import Voice - - -class PyttsVoice(Voice): - engine = pyttsx3.init() - - def __init__(self): - # 语速 - self.engine.setProperty('rate', 125) - # 音量 - self.engine.setProperty('volume', 1.0) - for voice in self.engine.getProperty('voices'): - if "Chinese" in voice.name: - self.engine.setProperty('voice', voice.id) - - def textToVoice(self, text): - try: - wavFile = TmpDir().path() + 'reply-' + str(int(time.time())) + '.wav' - self.engine.save_to_file(text, wavFile) - self.engine.runAndWait() - logger.info( - '[Pytts] textToVoice text={} voice file name={}'.format(text, wavFile)) - reply = Reply(ReplyType.VOICE, wavFile) - except Exception as e: - reply = Reply(ReplyType.ERROR, str(e)) - finally: - return reply diff --git a/spaces/open-source-metrics/repository-statistics/style.css b/spaces/open-source-metrics/repository-statistics/style.css deleted file mode 
100644 index 1d4c6939e37a9da8688f4e2bb282c3927feca5e8..0000000000000000000000000000000000000000 --- a/spaces/open-source-metrics/repository-statistics/style.css +++ /dev/null @@ -1,133 +0,0 @@ -html { - /*color: white;*/ - /*background-color: rgb(50, 50, 50);*/ - font-family: Source Sans Pro,ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji; - line-height: 1.5; -} - -body { - display: flex; - align-items: center; - flex-direction: column; -} - -button { - height: 90px; - width: 200px; - margin-top: 20px; - cursor: pointer; - background-color: rgb(220, 220, 240); - border: none; - border-radius: 10px; - border-bottom: 3px solid rgb(200, 200, 220); - border-right: 3px solid rgb(200, 200, 220); - transition: all 0.2s ease; -} - -button:hover { - background-color: rgb(240, 220, 220); - border-bottom: 3px solid rgb(220, 200, 200); - border-right: 3px solid rgb(220, 200, 200); - transition: all 0.2s ease; -} - -.graphs { - margin: 20px; - display: flex; - flex-direction: row; - justify-content: center; - width: 100%; -} - -.option-div { - border-radius: 10px; - border-color: rgb(180, 180, 200); - border-style: solid; - border-width: 2px 4px 4px 2px; - margin: 14px 0; - padding: 5px; -} - -.option-div > div { - margin-left: 20px; -} - -.warning-div { - background-color: rgb(255, 230, 164); - border-radius: 10px; - border-bottom: 3px solid rgb(235, 210, 144); - border-right: 3px solid rgb(235, 210, 144); - margin: 10px; - padding: 20px; -} - -.submit { - margin-bottom: 50px; -} - -.graphs > div { - margin: 20px; - width: 300px; - padding: 30px 40px; - background-color: rgb(220, 220, 240); - border-bottom: 3px solid rgb(200, 200, 220); - border-right: 3px solid rgb(200, 200, 220); - border-radius: 10px; - line-height: 30px; -} - -.graphs > div > h3 { - font-weight: 400; - text-decoration: underline; -} - -.lds-ripple { - display: inline-block; - position: relative; - width: 80px; - height: 80px; -} -.lds-ripple div { - position: absolute; - border: 4px solid #000; - opacity: 1; - border-radius: 50%; - animation: lds-ripple 1s cubic-bezier(0, 0.2, 0.8, 1) infinite; -} - -.lds-ripple .dark-theme div { - border: 4px solid #fff; -} -.lds-ripple div:nth-child(2) { - animation-delay: -0.5s; -} -@keyframes lds-ripple { - 0% { - top: 36px; - left: 36px; - width: 0; - height: 0; - opacity: 0; - } - 4.9% { - top: 36px; - left: 36px; - width: 0; - height: 0; - opacity: 0; - } - 5% { - top: 36px; - left: 36px; - width: 0; - height: 0; - opacity: 1; - } - 100% { - top: 0px; - left: 0px; - width: 72px; - height: 72px; - opacity: 0; - } -} diff --git a/spaces/ops-gaurav/tts/README.md b/spaces/ops-gaurav/tts/README.md deleted file mode 100644 index 8b509bf0363c7a092b0ea5761d96ac5d4b826563..0000000000000000000000000000000000000000 --- a/spaces/ops-gaurav/tts/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Theserverfault -emoji: 🐨 -colorFrom: indigo -colorTo: blue -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/blip_diffusion/modeling_ctx_clip.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/blip_diffusion/modeling_ctx_clip.py deleted file mode 100644 index 
53d57188743deec0c312f45f1aff3d0c488637a7..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/blip_diffusion/modeling_ctx_clip.py +++ /dev/null @@ -1,212 +0,0 @@ -# Copyright 2023 Salesforce.com, inc. -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import Optional, Tuple, Union - -import torch -from torch import nn -from transformers import CLIPPreTrainedModel -from transformers.modeling_outputs import BaseModelOutputWithPooling -from transformers.models.clip.configuration_clip import CLIPTextConfig -from transformers.models.clip.modeling_clip import ( - CLIPEncoder, - _expand_mask, -) - - -# This is a modified version of the CLIPTextModel from transformers.models.clip.modeling_clip -# Which allows for an extra input of "context embeddings", which are the query embeddings used in Qformer -# They pass through the clip model, along with the text embeddings, and interact with them using self attention -class ContextCLIPTextModel(CLIPPreTrainedModel): - config_class = CLIPTextConfig - - _no_split_modules = ["CLIPEncoderLayer"] - - def __init__(self, config: CLIPTextConfig): - super().__init__(config) - self.text_model = ContextCLIPTextTransformer(config) - # Initialize weights and apply final processing - self.post_init() - - def forward( - self, - ctx_embeddings: torch.Tensor = None, - ctx_begin_pos: list = None, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - return self.text_model( - ctx_embeddings=ctx_embeddings, - ctx_begin_pos=ctx_begin_pos, - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - -class ContextCLIPTextTransformer(nn.Module): - def __init__(self, config: CLIPTextConfig): - super().__init__() - self.config = config - embed_dim = config.hidden_size - self.embeddings = ContextCLIPTextEmbeddings(config) - self.encoder = CLIPEncoder(config) - self.final_layer_norm = nn.LayerNorm(embed_dim) - - def forward( - self, - ctx_embeddings: torch.Tensor, - ctx_begin_pos: list, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else 
self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is None: - raise ValueError("You have to specify either input_ids") - - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - - hidden_states = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - ctx_embeddings=ctx_embeddings, - ctx_begin_pos=ctx_begin_pos, - ) - - bsz, seq_len = input_shape - if ctx_embeddings is not None: - seq_len += ctx_embeddings.size(1) - # CLIP's text model uses causal mask, prepare it here. - # https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324 - causal_attention_mask = self._build_causal_attention_mask(bsz, seq_len, hidden_states.dtype).to( - hidden_states.device - ) - # expand attention_mask - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - attention_mask = _expand_mask(attention_mask, hidden_states.dtype) - - encoder_outputs = self.encoder( - inputs_embeds=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - last_hidden_state = encoder_outputs[0] - last_hidden_state = self.final_layer_norm(last_hidden_state) - - # text_embeds.shape = [batch_size, sequence_length, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - # casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14 - pooled_output = last_hidden_state[ - torch.arange(last_hidden_state.shape[0], device=input_ids.device), - input_ids.to(torch.int).argmax(dim=-1), - ] - - if not return_dict: - return (last_hidden_state, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - def _build_causal_attention_mask(self, bsz, seq_len, dtype): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype) - mask.fill_(torch.tensor(torch.finfo(dtype).min)) - mask.triu_(1) # zero out the lower diagonal - mask = mask.unsqueeze(1) # expand mask - return mask - - -class ContextCLIPTextEmbeddings(nn.Module): - def __init__(self, config: CLIPTextConfig): - super().__init__() - embed_dim = config.hidden_size - - self.token_embedding = nn.Embedding(config.vocab_size, embed_dim) - self.position_embedding = nn.Embedding(config.max_position_embeddings, embed_dim) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - - def forward( - self, - ctx_embeddings: torch.Tensor, - ctx_begin_pos: list, - input_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - ) -> torch.Tensor: - if ctx_embeddings is None: - ctx_len = 0 - else: - ctx_len = ctx_embeddings.shape[1] - - seq_length = (input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2]) + ctx_len - - if position_ids is None: - position_ids = self.position_ids[:, :seq_length] - - if 
inputs_embeds is None: - inputs_embeds = self.token_embedding(input_ids) - - # for each input embeddings, add the ctx embeddings at the correct position - input_embeds_ctx = [] - bsz = inputs_embeds.shape[0] - - if ctx_embeddings is not None: - for i in range(bsz): - cbp = ctx_begin_pos[i] - - prefix = inputs_embeds[i, :cbp] - # remove the special token embedding - suffix = inputs_embeds[i, cbp:] - - input_embeds_ctx.append(torch.cat([prefix, ctx_embeddings[i], suffix], dim=0)) - - inputs_embeds = torch.stack(input_embeds_ctx, dim=0) - - position_embeddings = self.position_embedding(position_ids) - embeddings = inputs_embeds + position_embeddings - - return embeddings diff --git a/spaces/pixiou/bingo/src/lib/storage.ts b/spaces/pixiou/bingo/src/lib/storage.ts deleted file mode 100644 index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000 --- a/spaces/pixiou/bingo/src/lib/storage.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { getMany, set, del, clear } from 'idb-keyval'; - -export const Storage = { - async get(key: string | string[] | null): Promise { - if (key === null) return null; - if (typeof key === 'string') { - key = [key] - } - const returnData: Record = {} - const values = await getMany(key) - key.forEach((k, idx)=> { - returnData[k] = values[idx] - }) - return returnData; - }, - async set(object: any) { - for (let key of Object.keys(object)) { - await set(key, object[key]) - } - }, - async remove(key: string) { - return del(key); - }, - async clear() { - return clear(); - } -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/target_python.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/target_python.py deleted file mode 100644 index 744bd7ef58b4870406fcef8cb3b3667548a0ccea..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/target_python.py +++ /dev/null @@ -1,110 +0,0 @@ -import sys -from typing import List, Optional, Tuple - -from pip._vendor.packaging.tags import Tag - -from pip._internal.utils.compatibility_tags import get_supported, version_info_to_nodot -from pip._internal.utils.misc import normalize_version_info - - -class TargetPython: - - """ - Encapsulates the properties of a Python interpreter one is targeting - for a package install, download, etc. - """ - - __slots__ = [ - "_given_py_version_info", - "abis", - "implementation", - "platforms", - "py_version", - "py_version_info", - "_valid_tags", - ] - - def __init__( - self, - platforms: Optional[List[str]] = None, - py_version_info: Optional[Tuple[int, ...]] = None, - abis: Optional[List[str]] = None, - implementation: Optional[str] = None, - ) -> None: - """ - :param platforms: A list of strings or None. If None, searches for - packages that are supported by the current system. Otherwise, will - find packages that can be built on the platforms passed in. These - packages will only be downloaded for distribution: they will - not be built locally. - :param py_version_info: An optional tuple of ints representing the - Python version information to use (e.g. `sys.version_info[:3]`). - This can have length 1, 2, or 3 when provided. - :param abis: A list of strings or None. This is passed to - compatibility_tags.py's get_supported() function as is. - :param implementation: A string or None. This is passed to - compatibility_tags.py's get_supported() function as is. 
- """ - # Store the given py_version_info for when we call get_supported(). - self._given_py_version_info = py_version_info - - if py_version_info is None: - py_version_info = sys.version_info[:3] - else: - py_version_info = normalize_version_info(py_version_info) - - py_version = ".".join(map(str, py_version_info[:2])) - - self.abis = abis - self.implementation = implementation - self.platforms = platforms - self.py_version = py_version - self.py_version_info = py_version_info - - # This is used to cache the return value of get_tags(). - self._valid_tags: Optional[List[Tag]] = None - - def format_given(self) -> str: - """ - Format the given, non-None attributes for display. - """ - display_version = None - if self._given_py_version_info is not None: - display_version = ".".join( - str(part) for part in self._given_py_version_info - ) - - key_values = [ - ("platforms", self.platforms), - ("version_info", display_version), - ("abis", self.abis), - ("implementation", self.implementation), - ] - return " ".join( - f"{key}={value!r}" for key, value in key_values if value is not None - ) - - def get_tags(self) -> List[Tag]: - """ - Return the supported PEP 425 tags to check wheel candidates against. - - The tags are returned in order of preference (most preferred first). - """ - if self._valid_tags is None: - # Pass versions=None if no py_version_info was given since - # versions=None uses special default logic. - py_version_info = self._given_py_version_info - if py_version_info is None: - version = None - else: - version = version_info_to_nodot(py_version_info) - - tags = get_supported( - version=version, - platforms=self.platforms, - abis=self.abis, - impl=self.implementation, - ) - self._valid_tags = tags - - return self._valid_tags diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tomli/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tomli/__init__.py deleted file mode 100644 index 4c6ec97ec6961bcf184b6e0b2437b9924db0b9de..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tomli/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# SPDX-License-Identifier: MIT -# SPDX-FileCopyrightText: 2021 Taneli Hukkinen -# Licensed to PSF under a Contributor Agreement. - -__all__ = ("loads", "load", "TOMLDecodeError") -__version__ = "2.0.1" # DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT - -from ._parser import TOMLDecodeError, load, loads - -# Pretend this exception was created here. -TOMLDecodeError.__module__ = __name__ diff --git a/spaces/probing-vits/attention-rollout/app.py b/spaces/probing-vits/attention-rollout/app.py deleted file mode 100644 index c7cdba906f1cefdd042d2bde50a1121da45b62f6..0000000000000000000000000000000000000000 --- a/spaces/probing-vits/attention-rollout/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -import tensorflow as tf -import tensorflow_hub as hub -from PIL import Image - -import utils - -_RESOLUTION = 224 -_MODEL_URL = "https://tfhub.dev/sayakpaul/deit_tiny_patch16_224/1" - - -def get_model() -> tf.keras.Model: - """Initiates a tf.keras.Model from TF-Hub.""" - inputs = tf.keras.Input((_RESOLUTION, _RESOLUTION, 3)) - hub_module = hub.KerasLayer(_MODEL_URL) - - logits, attention_scores_dict = hub_module( - inputs - ) # Second output in the tuple is a dictionary containing attention scores. 
- - return tf.keras.Model(inputs, [logits, attention_scores_dict]) - - -_MODEL = get_model() - - -def show_rollout(image): - """Function to be called when user hits submit on the UI.""" - _, preprocessed_image = utils.preprocess_image( - image, "deit_tiny_patch16_224" - ) - _, attention_scores_dict = _MODEL.predict(preprocessed_image) - result = utils.attention_rollout_map( - image, attention_scores_dict, "deit_tiny_patch16_224" - ) - return Image.fromarray(result) - - -title = "Generate Attention Rollout Plots" -article = "Attention Rollout was proposed by [Abnar et al.](https://arxiv.org/abs/2005.00928) to quantify the information that flows through self-attention layers. In the original ViT paper ([Dosovitskiy et al.](https://arxiv.org/abs/2010.11929)), the authors use it to investigate the representations learned by ViTs. The model used in the backend is `deit_tiny_patch16_224`. For more details about it, refer [here](https://tfhub.dev/sayakpaul/collections/deit/1). DeiT was proposed by [Touvron et al.](https://arxiv.org/abs/2012.12877)" - -iface = gr.Interface( - show_rollout, - inputs=gr.inputs.Image(type="pil", label="Input Image"), - outputs="image", - title=title, - article=article, - allow_flagging="never", - # examples=[["./car.jpeg", "./bulbul.jpeg"]], -) -iface.launch() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/artist.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/artist.py deleted file mode 100644 index 04eaa6cf75df503653becab5f52dedeba1ff7b60..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/artist.py +++ /dev/null @@ -1,1860 +0,0 @@ -from collections import namedtuple -import contextlib -from functools import cache, wraps -import inspect -from inspect import Signature, Parameter -import logging -from numbers import Number, Real -import re -import warnings - -import numpy as np - -import matplotlib as mpl -from . import _api, cbook -from .colors import BoundaryNorm -from .cm import ScalarMappable -from .path import Path -from .transforms import (BboxBase, Bbox, IdentityTransform, Transform, TransformedBbox, - TransformedPatchPath, TransformedPath) - -_log = logging.getLogger(__name__) - - -def _prevent_rasterization(draw): - # We assume that by default artists are not allowed to rasterize (unless - # its draw method is explicitly decorated). If it is being drawn after a - # rasterized artist and it has reached a raster_depth of 0, we stop - # rasterization so that it does not affect the behavior of normal artist - # (e.g., change in dpi). - - @wraps(draw) - def draw_wrapper(artist, renderer, *args, **kwargs): - if renderer._raster_depth == 0 and renderer._rasterizing: - # Only stop when we are not in a rasterized parent - # and something has been rasterized since last stop. - renderer.stop_rasterizing() - renderer._rasterizing = False - - return draw(artist, renderer, *args, **kwargs) - - draw_wrapper._supports_rasterization = False - return draw_wrapper - - -def allow_rasterization(draw): - """ - Decorator for Artist.draw method. Provides routines - that run before and after the draw call. The before and after functions - are useful for changing artist-dependent renderer attributes or making - other setup function calls, such as starting and flushing a mixed-mode - renderer. 
- """ - - @wraps(draw) - def draw_wrapper(artist, renderer): - try: - if artist.get_rasterized(): - if renderer._raster_depth == 0 and not renderer._rasterizing: - renderer.start_rasterizing() - renderer._rasterizing = True - renderer._raster_depth += 1 - else: - if renderer._raster_depth == 0 and renderer._rasterizing: - # Only stop when we are not in a rasterized parent - # and something has be rasterized since last stop - renderer.stop_rasterizing() - renderer._rasterizing = False - - if artist.get_agg_filter() is not None: - renderer.start_filter() - - return draw(artist, renderer) - finally: - if artist.get_agg_filter() is not None: - renderer.stop_filter(artist.get_agg_filter()) - if artist.get_rasterized(): - renderer._raster_depth -= 1 - if (renderer._rasterizing and artist.figure and - artist.figure.suppressComposite): - # restart rasterizing to prevent merging - renderer.stop_rasterizing() - renderer.start_rasterizing() - - draw_wrapper._supports_rasterization = True - return draw_wrapper - - -def _finalize_rasterization(draw): - """ - Decorator for Artist.draw method. Needed on the outermost artist, i.e. - Figure, to finish up if the render is still in rasterized mode. - """ - @wraps(draw) - def draw_wrapper(artist, renderer, *args, **kwargs): - result = draw(artist, renderer, *args, **kwargs) - if renderer._rasterizing: - renderer.stop_rasterizing() - renderer._rasterizing = False - return result - return draw_wrapper - - -def _stale_axes_callback(self, val): - if self.axes: - self.axes.stale = val - - -_XYPair = namedtuple("_XYPair", "x y") - - -class _Unset: - def __repr__(self): - return "" -_UNSET = _Unset() - - -class Artist: - """ - Abstract base class for objects that render into a FigureCanvas. - - Typically, all visible elements in a figure are subclasses of Artist. - """ - - zorder = 0 - - def __init_subclass__(cls): - - # Decorate draw() method so that all artists are able to stop - # rastrization when necessary. If the artist's draw method is already - # decorated (has a `_supports_rasterization` attribute), it won't be - # decorated. - - if not hasattr(cls.draw, "_supports_rasterization"): - cls.draw = _prevent_rasterization(cls.draw) - - # Inject custom set() methods into the subclass with signature and - # docstring based on the subclasses' properties. - - if not hasattr(cls.set, '_autogenerated_signature'): - # Don't overwrite cls.set if the subclass or one of its parents - # has defined a set method set itself. - # If there was no explicit definition, cls.set is inherited from - # the hierarchy of auto-generated set methods, which hold the - # flag _autogenerated_signature. - return - - cls.set = lambda self, **kwargs: Artist.set(self, **kwargs) - cls.set.__name__ = "set" - cls.set.__qualname__ = f"{cls.__qualname__}.set" - cls._update_set_signature_and_docstring() - - _PROPERTIES_EXCLUDED_FROM_SET = [ - 'navigate_mode', # not a user-facing function - 'figure', # changing the figure is such a profound operation - # that we don't want this in set() - '3d_properties', # cannot be used as a keyword due to leading digit - ] - - @classmethod - def _update_set_signature_and_docstring(cls): - """ - Update the signature of the set function to list all properties - as keyword arguments. - - Property aliases are not listed in the signature for brevity, but - are still accepted as keyword arguments. 
- """ - cls.set.__signature__ = Signature( - [Parameter("self", Parameter.POSITIONAL_OR_KEYWORD), - *[Parameter(prop, Parameter.KEYWORD_ONLY, default=_UNSET) - for prop in ArtistInspector(cls).get_setters() - if prop not in Artist._PROPERTIES_EXCLUDED_FROM_SET]]) - cls.set._autogenerated_signature = True - - cls.set.__doc__ = ( - "Set multiple properties at once.\n\n" - "Supported properties are\n\n" - + kwdoc(cls)) - - def __init__(self): - self._stale = True - self.stale_callback = None - self._axes = None - self.figure = None - - self._transform = None - self._transformSet = False - self._visible = True - self._animated = False - self._alpha = None - self.clipbox = None - self._clippath = None - self._clipon = True - self._label = '' - self._picker = None - self._rasterized = False - self._agg_filter = None - # Normally, artist classes need to be queried for mouseover info if and - # only if they override get_cursor_data. - self._mouseover = type(self).get_cursor_data != Artist.get_cursor_data - self._callbacks = cbook.CallbackRegistry(signals=["pchanged"]) - try: - self.axes = None - except AttributeError: - # Handle self.axes as a read-only property, as in Figure. - pass - self._remove_method = None - self._url = None - self._gid = None - self._snap = None - self._sketch = mpl.rcParams['path.sketch'] - self._path_effects = mpl.rcParams['path.effects'] - self._sticky_edges = _XYPair([], []) - self._in_layout = True - - def __getstate__(self): - d = self.__dict__.copy() - d['stale_callback'] = None - return d - - def remove(self): - """ - Remove the artist from the figure if possible. - - The effect will not be visible until the figure is redrawn, e.g., - with `.FigureCanvasBase.draw_idle`. Call `~.axes.Axes.relim` to - update the axes limits if desired. - - Note: `~.axes.Axes.relim` will not see collections even if the - collection was added to the axes with *autolim* = True. - - Note: there is no support for removing the artist's legend entry. - """ - - # There is no method to set the callback. Instead, the parent should - # set the _remove_method attribute directly. This would be a - # protected attribute if Python supported that sort of thing. The - # callback has one parameter, which is the child to be removed. - if self._remove_method is not None: - self._remove_method(self) - # clear stale callback - self.stale_callback = None - _ax_flag = False - if hasattr(self, 'axes') and self.axes: - # remove from the mouse hit list - self.axes._mouseover_set.discard(self) - self.axes.stale = True - self.axes = None # decouple the artist from the Axes - _ax_flag = True - - if self.figure: - if not _ax_flag: - self.figure.stale = True - self.figure = None - - else: - raise NotImplementedError('cannot remove artist') - # TODO: the fix for the collections relim problem is to move the - # limits calculation into the artist itself, including the property of - # whether or not the artist should affect the limits. Then there will - # be no distinction between axes.add_line, axes.add_patch, etc. - # TODO: add legend support - - def have_units(self): - """Return whether units are set on any axis.""" - ax = self.axes - return ax and any(axis.have_units() for axis in ax._axis_map.values()) - - def convert_xunits(self, x): - """ - Convert *x* using the unit type of the xaxis. - - If the artist is not contained in an Axes or if the xaxis does not - have units, *x* itself is returned. 
- """ - ax = getattr(self, 'axes', None) - if ax is None or ax.xaxis is None: - return x - return ax.xaxis.convert_units(x) - - def convert_yunits(self, y): - """ - Convert *y* using the unit type of the yaxis. - - If the artist is not contained in an Axes or if the yaxis does not - have units, *y* itself is returned. - """ - ax = getattr(self, 'axes', None) - if ax is None or ax.yaxis is None: - return y - return ax.yaxis.convert_units(y) - - @property - def axes(self): - """The `~.axes.Axes` instance the artist resides in, or *None*.""" - return self._axes - - @axes.setter - def axes(self, new_axes): - if (new_axes is not None and self._axes is not None - and new_axes != self._axes): - raise ValueError("Can not reset the axes. You are probably " - "trying to re-use an artist in more than one " - "Axes which is not supported") - self._axes = new_axes - if new_axes is not None and new_axes is not self: - self.stale_callback = _stale_axes_callback - - @property - def stale(self): - """ - Whether the artist is 'stale' and needs to be re-drawn for the output - to match the internal state of the artist. - """ - return self._stale - - @stale.setter - def stale(self, val): - self._stale = val - - # if the artist is animated it does not take normal part in the - # draw stack and is not expected to be drawn as part of the normal - # draw loop (when not saving) so do not propagate this change - if self._animated: - return - - if val and self.stale_callback is not None: - self.stale_callback(self, val) - - def get_window_extent(self, renderer=None): - """ - Get the artist's bounding box in display space. - - The bounding box' width and height are nonnegative. - - Subclasses should override for inclusion in the bounding box - "tight" calculation. Default is to return an empty bounding - box at 0, 0. - - Be careful when using this function, the results will not update - if the artist window extent of the artist changes. The extent - can change due to any changes in the transform stack, such as - changing the axes limits, the figure size, or the canvas used - (as is done when saving a figure). This can lead to unexpected - behavior where interactive figures will look fine on the screen, - but will save incorrectly. - """ - return Bbox([[0, 0], [0, 0]]) - - def get_tightbbox(self, renderer=None): - """ - Like `.Artist.get_window_extent`, but includes any clipping. - - Parameters - ---------- - renderer : `~matplotlib.backend_bases.RendererBase` subclass, optional - renderer that will be used to draw the figures (i.e. - ``fig.canvas.get_renderer()``) - - Returns - ------- - `.Bbox` or None - The enclosing bounding box (in figure pixel coordinates). - Returns None if clipping results in no intersection. - """ - bbox = self.get_window_extent(renderer) - if self.get_clip_on(): - clip_box = self.get_clip_box() - if clip_box is not None: - bbox = Bbox.intersection(bbox, clip_box) - clip_path = self.get_clip_path() - if clip_path is not None and bbox is not None: - clip_path = clip_path.get_fully_transformed_path() - bbox = Bbox.intersection(bbox, clip_path.get_extents()) - return bbox - - def add_callback(self, func): - """ - Add a callback function that will be called whenever one of the - `.Artist`'s properties changes. - - Parameters - ---------- - func : callable - The callback function. It must have the signature:: - - def func(artist: Artist) -> Any - - where *artist* is the calling `.Artist`. Return values may exist - but are ignored. 
- - Returns - ------- - int - The observer id associated with the callback. This id can be - used for removing the callback with `.remove_callback` later. - - See Also - -------- - remove_callback - """ - # Wrapping func in a lambda ensures it can be connected multiple times - # and never gets weakref-gc'ed. - return self._callbacks.connect("pchanged", lambda: func(self)) - - def remove_callback(self, oid): - """ - Remove a callback based on its observer id. - - See Also - -------- - add_callback - """ - self._callbacks.disconnect(oid) - - def pchanged(self): - """ - Call all of the registered callbacks. - - This function is triggered internally when a property is changed. - - See Also - -------- - add_callback - remove_callback - """ - self._callbacks.process("pchanged") - - def is_transform_set(self): - """ - Return whether the Artist has an explicitly set transform. - - This is *True* after `.set_transform` has been called. - """ - return self._transformSet - - def set_transform(self, t): - """ - Set the artist transform. - - Parameters - ---------- - t : `~matplotlib.transforms.Transform` - """ - self._transform = t - self._transformSet = True - self.pchanged() - self.stale = True - - def get_transform(self): - """Return the `.Transform` instance used by this artist.""" - if self._transform is None: - self._transform = IdentityTransform() - elif (not isinstance(self._transform, Transform) - and hasattr(self._transform, '_as_mpl_transform')): - self._transform = self._transform._as_mpl_transform(self.axes) - return self._transform - - def get_children(self): - r"""Return a list of the child `.Artist`\s of this `.Artist`.""" - return [] - - def _different_canvas(self, event): - """ - Check whether an *event* occurred on a canvas other that this artist's canvas. - - If this method returns True, the event definitely occurred on a different - canvas; if it returns False, either it occurred on the same canvas, or we may - not have enough information to know. - - Subclasses should start their definition of `contains` as follows:: - - if self._different_canvas(mouseevent): - return False, {} - # subclass-specific implementation follows - """ - return (getattr(event, "canvas", None) is not None and self.figure is not None - and event.canvas is not self.figure.canvas) - - def contains(self, mouseevent): - """ - Test whether the artist contains the mouse event. - - Parameters - ---------- - mouseevent : `~matplotlib.backend_bases.MouseEvent` - - Returns - ------- - contains : bool - Whether any values are within the radius. - details : dict - An artist-specific dictionary of details of the event context, - such as which points are contained in the pick radius. See the - individual Artist subclasses for details. - """ - inside, info = self._default_contains(mouseevent) - if inside is not None: - return inside, info - _log.warning("%r needs 'contains' method", self.__class__.__name__) - return False, {} - - def pickable(self): - """ - Return whether the artist is pickable. - - See Also - -------- - set_picker, get_picker, pick - """ - return self.figure is not None and self._picker is not None - - def pick(self, mouseevent): - """ - Process a pick event. - - Each child artist will fire a pick event if *mouseevent* is over - the artist and the artist has picker set. - - See Also - -------- - set_picker, get_picker, pickable - """ - from .backend_bases import PickEvent # Circular import. 
- # Pick self - if self.pickable(): - picker = self.get_picker() - if callable(picker): - inside, prop = picker(self, mouseevent) - else: - inside, prop = self.contains(mouseevent) - if inside: - PickEvent("pick_event", self.figure.canvas, - mouseevent, self, **prop)._process() - - # Pick children - for a in self.get_children(): - # make sure the event happened in the same Axes - ax = getattr(a, 'axes', None) - if (mouseevent.inaxes is None or ax is None - or mouseevent.inaxes == ax): - # we need to check if mouseevent.inaxes is None - # because some objects associated with an Axes (e.g., a - # tick label) can be outside the bounding box of the - # Axes and inaxes will be None - # also check that ax is None so that it traverse objects - # which do not have an axes property but children might - a.pick(mouseevent) - - def set_picker(self, picker): - """ - Define the picking behavior of the artist. - - Parameters - ---------- - picker : None or bool or float or callable - This can be one of the following: - - - *None*: Picking is disabled for this artist (default). - - - A boolean: If *True* then picking will be enabled and the - artist will fire a pick event if the mouse event is over - the artist. - - - A float: If picker is a number it is interpreted as an - epsilon tolerance in points and the artist will fire - off an event if its data is within epsilon of the mouse - event. For some artists like lines and patch collections, - the artist may provide additional data to the pick event - that is generated, e.g., the indices of the data within - epsilon of the pick event - - - A function: If picker is callable, it is a user supplied - function which determines whether the artist is hit by the - mouse event:: - - hit, props = picker(artist, mouseevent) - - to determine the hit test. if the mouse event is over the - artist, return *hit=True* and props is a dictionary of - properties you want added to the PickEvent attributes. - """ - self._picker = picker - - def get_picker(self): - """ - Return the picking behavior of the artist. - - The possible values are described in `.set_picker`. - - See Also - -------- - set_picker, pickable, pick - """ - return self._picker - - def get_url(self): - """Return the url.""" - return self._url - - def set_url(self, url): - """ - Set the url for the artist. - - Parameters - ---------- - url : str - """ - self._url = url - - def get_gid(self): - """Return the group id.""" - return self._gid - - def set_gid(self, gid): - """ - Set the (group) id for the artist. - - Parameters - ---------- - gid : str - """ - self._gid = gid - - def get_snap(self): - """ - Return the snap setting. - - See `.set_snap` for details. - """ - if mpl.rcParams['path.snap']: - return self._snap - else: - return False - - def set_snap(self, snap): - """ - Set the snapping behavior. - - Snapping aligns positions with the pixel grid, which results in - clearer images. For example, if a black line of 1px width was - defined at a position in between two pixels, the resulting image - would contain the interpolated value of that line in the pixel grid, - which would be a grey value on both adjacent pixel positions. In - contrast, snapping will move the line to the nearest integer pixel - value, so that the resulting image will really contain a 1px wide - black line. - - Snapping is currently only supported by the Agg and MacOSX backends. - - Parameters - ---------- - snap : bool or None - Possible values: - - - *True*: Snap vertices to the nearest pixel center. 
- - *False*: Do not modify vertex positions. - - *None*: (auto) If the path contains only rectilinear line - segments, round to the nearest pixel center. - """ - self._snap = snap - self.stale = True - - def get_sketch_params(self): - """ - Return the sketch parameters for the artist. - - Returns - ------- - tuple or None - - A 3-tuple with the following elements: - - - *scale*: The amplitude of the wiggle perpendicular to the - source line. - - *length*: The length of the wiggle along the line. - - *randomness*: The scale factor by which the length is - shrunken or expanded. - - Returns *None* if no sketch parameters were set. - """ - return self._sketch - - def set_sketch_params(self, scale=None, length=None, randomness=None): - """ - Set the sketch parameters. - - Parameters - ---------- - scale : float, optional - The amplitude of the wiggle perpendicular to the source - line, in pixels. If scale is `None`, or not provided, no - sketch filter will be provided. - length : float, optional - The length of the wiggle along the line, in pixels - (default 128.0) - randomness : float, optional - The scale factor by which the length is shrunken or - expanded (default 16.0) - - The PGF backend uses this argument as an RNG seed and not as - described above. Using the same seed yields the same random shape. - - .. ACCEPTS: (scale: float, length: float, randomness: float) - """ - if scale is None: - self._sketch = None - else: - self._sketch = (scale, length or 128.0, randomness or 16.0) - self.stale = True - - def set_path_effects(self, path_effects): - """ - Set the path effects. - - Parameters - ---------- - path_effects : list of `.AbstractPathEffect` - """ - self._path_effects = path_effects - self.stale = True - - def get_path_effects(self): - return self._path_effects - - def get_figure(self): - """Return the `.Figure` instance the artist belongs to.""" - return self.figure - - def set_figure(self, fig): - """ - Set the `.Figure` instance the artist belongs to. - - Parameters - ---------- - fig : `~matplotlib.figure.Figure` - """ - # if this is a no-op just return - if self.figure is fig: - return - # if we currently have a figure (the case of both `self.figure` - # and *fig* being none is taken care of above) we then user is - # trying to change the figure an artist is associated with which - # is not allowed for the same reason as adding the same instance - # to more than one Axes - if self.figure is not None: - raise RuntimeError("Can not put single artist in " - "more than one figure") - self.figure = fig - if self.figure and self.figure is not self: - self.pchanged() - self.stale = True - - def set_clip_box(self, clipbox): - """ - Set the artist's clip `.Bbox`. - - Parameters - ---------- - clipbox : `~matplotlib.transforms.BboxBase` or None - Will typically be created from a `.TransformedBbox`. For instance, - ``TransformedBbox(Bbox([[0, 0], [1, 1]]), ax.transAxes)`` is the default - clipping for an artist added to an Axes. - - """ - _api.check_isinstance((BboxBase, None), clipbox=clipbox) - if clipbox != self.clipbox: - self.clipbox = clipbox - self.pchanged() - self.stale = True - - def set_clip_path(self, path, transform=None): - """ - Set the artist's clip path. - - Parameters - ---------- - path : `~matplotlib.patches.Patch` or `.Path` or `.TransformedPath` or None - The clip path. If given a `.Path`, *transform* must be provided as - well. If *None*, a previously set clip path is removed. 
- transform : `~matplotlib.transforms.Transform`, optional - Only used if *path* is a `.Path`, in which case the given `.Path` - is converted to a `.TransformedPath` using *transform*. - - Notes - ----- - For efficiency, if *path* is a `.Rectangle` this method will set the - clipping box to the corresponding rectangle and set the clipping path - to ``None``. - - For technical reasons (support of `~.Artist.set`), a tuple - (*path*, *transform*) is also accepted as a single positional - parameter. - - .. ACCEPTS: Patch or (Path, Transform) or None - """ - from matplotlib.patches import Patch, Rectangle - - success = False - if transform is None: - if isinstance(path, Rectangle): - self.clipbox = TransformedBbox(Bbox.unit(), - path.get_transform()) - self._clippath = None - success = True - elif isinstance(path, Patch): - self._clippath = TransformedPatchPath(path) - success = True - elif isinstance(path, tuple): - path, transform = path - - if path is None: - self._clippath = None - success = True - elif isinstance(path, Path): - self._clippath = TransformedPath(path, transform) - success = True - elif isinstance(path, TransformedPatchPath): - self._clippath = path - success = True - elif isinstance(path, TransformedPath): - self._clippath = path - success = True - - if not success: - raise TypeError( - "Invalid arguments to set_clip_path, of type " - f"{type(path).__name__} and {type(transform).__name__}") - # This may result in the callbacks being hit twice, but guarantees they - # will be hit at least once. - self.pchanged() - self.stale = True - - def get_alpha(self): - """ - Return the alpha value used for blending - not supported on all - backends. - """ - return self._alpha - - def get_visible(self): - """Return the visibility.""" - return self._visible - - def get_animated(self): - """Return whether the artist is animated.""" - return self._animated - - def get_in_layout(self): - """ - Return boolean flag, ``True`` if artist is included in layout - calculations. - - E.g. :ref:`constrainedlayout_guide`, - `.Figure.tight_layout()`, and - ``fig.savefig(fname, bbox_inches='tight')``. - """ - return self._in_layout - - def _fully_clipped_to_axes(self): - """ - Return a boolean flag, ``True`` if the artist is clipped to the Axes - and can thus be skipped in layout calculations. Requires `get_clip_on` - is True, one of `clip_box` or `clip_path` is set, ``clip_box.extents`` - is equivalent to ``ax.bbox.extents`` (if set), and ``clip_path._patch`` - is equivalent to ``ax.patch`` (if set). - """ - # Note that ``clip_path.get_fully_transformed_path().get_extents()`` - # cannot be directly compared to ``axes.bbox.extents`` because the - # extents may be undefined (i.e. 
equivalent to ``Bbox.null()``) - # before the associated artist is drawn, and this method is meant - # to determine whether ``axes.get_tightbbox()`` may bypass drawing - clip_box = self.get_clip_box() - clip_path = self.get_clip_path() - return (self.axes is not None - and self.get_clip_on() - and (clip_box is not None or clip_path is not None) - and (clip_box is None - or np.all(clip_box.extents == self.axes.bbox.extents)) - and (clip_path is None - or isinstance(clip_path, TransformedPatchPath) - and clip_path._patch is self.axes.patch)) - - def get_clip_on(self): - """Return whether the artist uses clipping.""" - return self._clipon - - def get_clip_box(self): - """Return the clipbox.""" - return self.clipbox - - def get_clip_path(self): - """Return the clip path.""" - return self._clippath - - def get_transformed_clip_path_and_affine(self): - """ - Return the clip path with the non-affine part of its - transformation applied, and the remaining affine part of its - transformation. - """ - if self._clippath is not None: - return self._clippath.get_transformed_path_and_affine() - return None, None - - def set_clip_on(self, b): - """ - Set whether the artist uses clipping. - - When False, artists will be visible outside the Axes which - can lead to unexpected results. - - Parameters - ---------- - b : bool - """ - self._clipon = b - # This may result in the callbacks being hit twice, but ensures they - # are hit at least once - self.pchanged() - self.stale = True - - def _set_gc_clip(self, gc): - """Set the clip properly for the gc.""" - if self._clipon: - if self.clipbox is not None: - gc.set_clip_rectangle(self.clipbox) - gc.set_clip_path(self._clippath) - else: - gc.set_clip_rectangle(None) - gc.set_clip_path(None) - - def get_rasterized(self): - """Return whether the artist is to be rasterized.""" - return self._rasterized - - def set_rasterized(self, rasterized): - """ - Force rasterized (bitmap) drawing for vector graphics output. - - Rasterized drawing is not supported by all artists. If you try to - enable this on an artist that does not support it, the command has no - effect and a warning will be issued. - - This setting is ignored for pixel-based output. - - See also :doc:`/gallery/misc/rasterization_demo`. - - Parameters - ---------- - rasterized : bool - """ - supports_rasterization = getattr(self.draw, - "_supports_rasterization", False) - if rasterized and not supports_rasterization: - _api.warn_external(f"Rasterization of '{self}' will be ignored") - - self._rasterized = rasterized - - def get_agg_filter(self): - """Return filter function to be used for agg filter.""" - return self._agg_filter - - def set_agg_filter(self, filter_func): - """ - Set the agg filter. - - Parameters - ---------- - filter_func : callable - A filter function, which takes a (m, n, depth) float array - and a dpi value, and returns a (m, n, depth) array and two - offsets from the bottom left corner of the image - - .. ACCEPTS: a filter function, which takes a (m, n, 3) float array - and a dpi value, and returns a (m, n, 3) array and two offsets - from the bottom left corner of the image - """ - self._agg_filter = filter_func - self.stale = True - - def draw(self, renderer): - """ - Draw the Artist (and its children) using the given renderer. - - This has no effect if the artist is not visible (`.Artist.get_visible` - returns False). - - Parameters - ---------- - renderer : `~matplotlib.backend_bases.RendererBase` subclass. - - Notes - ----- - This method is overridden in the Artist subclasses. 
- """ - if not self.get_visible(): - return - self.stale = False - - def set_alpha(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : scalar or None - *alpha* must be within the 0-1 range, inclusive. - """ - if alpha is not None and not isinstance(alpha, Real): - raise TypeError( - f'alpha must be numeric or None, not {type(alpha)}') - if alpha is not None and not (0 <= alpha <= 1): - raise ValueError(f'alpha ({alpha}) is outside 0-1 range') - if alpha != self._alpha: - self._alpha = alpha - self.pchanged() - self.stale = True - - def _set_alpha_for_array(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : array-like or scalar or None - All values must be within the 0-1 range, inclusive. - Masked values and nans are not supported. - """ - if isinstance(alpha, str): - raise TypeError("alpha must be numeric or None, not a string") - if not np.iterable(alpha): - Artist.set_alpha(self, alpha) - return - alpha = np.asarray(alpha) - if not (0 <= alpha.min() and alpha.max() <= 1): - raise ValueError('alpha must be between 0 and 1, inclusive, ' - f'but min is {alpha.min()}, max is {alpha.max()}') - self._alpha = alpha - self.pchanged() - self.stale = True - - def set_visible(self, b): - """ - Set the artist's visibility. - - Parameters - ---------- - b : bool - """ - if b != self._visible: - self._visible = b - self.pchanged() - self.stale = True - - def set_animated(self, b): - """ - Set whether the artist is intended to be used in an animation. - - If True, the artist is excluded from regular drawing of the figure. - You have to call `.Figure.draw_artist` / `.Axes.draw_artist` - explicitly on the artist. This approach is used to speed up animations - using blitting. - - See also `matplotlib.animation` and - :ref:`blitting`. - - Parameters - ---------- - b : bool - """ - if self._animated != b: - self._animated = b - self.pchanged() - - def set_in_layout(self, in_layout): - """ - Set if artist is to be included in layout calculations, - E.g. :ref:`constrainedlayout_guide`, - `.Figure.tight_layout()`, and - ``fig.savefig(fname, bbox_inches='tight')``. - - Parameters - ---------- - in_layout : bool - """ - self._in_layout = in_layout - - def get_label(self): - """Return the label used for this artist in the legend.""" - return self._label - - def set_label(self, s): - """ - Set a label that will be displayed in the legend. - - Parameters - ---------- - s : object - *s* will be converted to a string by calling `str`. - """ - label = str(s) if s is not None else None - if label != self._label: - self._label = label - self.pchanged() - self.stale = True - - def get_zorder(self): - """Return the artist's zorder.""" - return self.zorder - - def set_zorder(self, level): - """ - Set the zorder for the artist. Artists with lower zorder - values are drawn first. - - Parameters - ---------- - level : float - """ - if level is None: - level = self.__class__.zorder - if level != self.zorder: - self.zorder = level - self.pchanged() - self.stale = True - - @property - def sticky_edges(self): - """ - ``x`` and ``y`` sticky edge lists for autoscaling. - - When performing autoscaling, if a data limit coincides with a value in - the corresponding sticky_edges list, then no margin will be added--the - view limit "sticks" to the edge. A typical use case is histograms, - where one usually expects no margin on the bottom edge (0) of the - histogram. 
- - Moreover, margin expansion "bumps" against sticky edges and cannot - cross them. For example, if the upper data limit is 1.0, the upper - view limit computed by simple margin application is 1.2, but there is a - sticky edge at 1.1, then the actual upper view limit will be 1.1. - - This attribute cannot be assigned to; however, the ``x`` and ``y`` - lists can be modified in place as needed. - - Examples - -------- - >>> artist.sticky_edges.x[:] = (xmin, xmax) - >>> artist.sticky_edges.y[:] = (ymin, ymax) - - """ - return self._sticky_edges - - def update_from(self, other): - """Copy properties from *other* to *self*.""" - self._transform = other._transform - self._transformSet = other._transformSet - self._visible = other._visible - self._alpha = other._alpha - self.clipbox = other.clipbox - self._clipon = other._clipon - self._clippath = other._clippath - self._label = other._label - self._sketch = other._sketch - self._path_effects = other._path_effects - self.sticky_edges.x[:] = other.sticky_edges.x.copy() - self.sticky_edges.y[:] = other.sticky_edges.y.copy() - self.pchanged() - self.stale = True - - def properties(self): - """Return a dictionary of all the properties of the artist.""" - return ArtistInspector(self).properties() - - def _update_props(self, props, errfmt): - """ - Helper for `.Artist.set` and `.Artist.update`. - - *errfmt* is used to generate error messages for invalid property - names; it gets formatted with ``type(self)`` and the property name. - """ - ret = [] - with cbook._setattr_cm(self, eventson=False): - for k, v in props.items(): - # Allow attributes we want to be able to update through - # art.update, art.set, setp. - if k == "axes": - ret.append(setattr(self, k, v)) - else: - func = getattr(self, f"set_{k}", None) - if not callable(func): - raise AttributeError( - errfmt.format(cls=type(self), prop_name=k)) - ret.append(func(v)) - if ret: - self.pchanged() - self.stale = True - return ret - - def update(self, props): - """ - Update this artist's properties from the dict *props*. - - Parameters - ---------- - props : dict - """ - return self._update_props( - props, "{cls.__name__!r} object has no property {prop_name!r}") - - def _internal_update(self, kwargs): - """ - Update artist properties without prenormalizing them, but generating - errors as if calling `set`. - - The lack of prenormalization is to maintain backcompatibility. - """ - return self._update_props( - kwargs, "{cls.__name__}.set() got an unexpected keyword argument " - "{prop_name!r}") - - def set(self, **kwargs): - # docstring and signature are auto-generated via - # Artist._update_set_signature_and_docstring() at the end of the - # module. - return self._internal_update(cbook.normalize_kwargs(kwargs, self)) - - @contextlib.contextmanager - def _cm_set(self, **kwargs): - """ - `.Artist.set` context-manager that restores original values at exit. - """ - orig_vals = {k: getattr(self, f"get_{k}")() for k in kwargs} - try: - self.set(**kwargs) - yield - finally: - self.set(**orig_vals) - - def findobj(self, match=None, include_self=True): - """ - Find artist objects. - - Recursively find all `.Artist` instances contained in the artist. - - Parameters - ---------- - match - A filter criterion for the matches. This can be - - - *None*: Return all objects contained in artist. - - A function with signature ``def match(artist: Artist) -> bool``. - The result will only contain artists for which the function - returns *True*. - - A class instance: e.g., `.Line2D`. 
The result will only contain - artists of this class or its subclasses (``isinstance`` check). - - include_self : bool - Include *self* in the list to be checked for a match. - - Returns - ------- - list of `.Artist` - - """ - if match is None: # always return True - def matchfunc(x): - return True - elif isinstance(match, type) and issubclass(match, Artist): - def matchfunc(x): - return isinstance(x, match) - elif callable(match): - matchfunc = match - else: - raise ValueError('match must be None, a matplotlib.artist.Artist ' - 'subclass, or a callable') - - artists = sum([c.findobj(matchfunc) for c in self.get_children()], []) - if include_self and matchfunc(self): - artists.append(self) - return artists - - def get_cursor_data(self, event): - """ - Return the cursor data for a given event. - - .. note:: - This method is intended to be overridden by artist subclasses. - As an end-user of Matplotlib you will most likely not call this - method yourself. - - Cursor data can be used by Artists to provide additional context - information for a given event. The default implementation just returns - *None*. - - Subclasses can override the method and return arbitrary data. However, - when doing so, they must ensure that `.format_cursor_data` can convert - the data to a string representation. - - The only current use case is displaying the z-value of an `.AxesImage` - in the status bar of a plot window, while moving the mouse. - - Parameters - ---------- - event : `~matplotlib.backend_bases.MouseEvent` - - See Also - -------- - format_cursor_data - - """ - return None - - def format_cursor_data(self, data): - """ - Return a string representation of *data*. - - .. note:: - This method is intended to be overridden by artist subclasses. - As an end-user of Matplotlib you will most likely not call this - method yourself. - - The default implementation converts ints and floats and arrays of ints - and floats into a comma-separated string enclosed in square brackets, - unless the artist has an associated colorbar, in which case scalar - values are formatted using the colorbar's formatter. - - See Also - -------- - get_cursor_data - """ - if np.ndim(data) == 0 and isinstance(self, ScalarMappable): - # This block logically belongs to ScalarMappable, but can't be - # implemented in it because most ScalarMappable subclasses inherit - # from Artist first and from ScalarMappable second, so - # Artist.format_cursor_data would always have precedence over - # ScalarMappable.format_cursor_data. - n = self.cmap.N - if np.ma.getmask(data): - return "[]" - normed = self.norm(data) - if np.isfinite(normed): - if isinstance(self.norm, BoundaryNorm): - # not an invertible normalization mapping - cur_idx = np.argmin(np.abs(self.norm.boundaries - data)) - neigh_idx = max(0, cur_idx - 1) - # use max diff to prevent delta == 0 - delta = np.diff( - self.norm.boundaries[neigh_idx:cur_idx + 2] - ).max() - - else: - # Midpoints of neighboring color intervals. - neighbors = self.norm.inverse( - (int(normed * n) + np.array([0, 1])) / n) - delta = abs(neighbors - data).max() - g_sig_digits = cbook._g_sig_digits(data, delta) - else: - g_sig_digits = 3 # Consistent with default below. 
- return f"[{data:-#.{g_sig_digits}g}]" - else: - try: - data[0] - except (TypeError, IndexError): - data = [data] - data_str = ', '.join(f'{item:0.3g}' for item in data - if isinstance(item, Number)) - return "[" + data_str + "]" - - def get_mouseover(self): - """ - Return whether this artist is queried for custom context information - when the mouse cursor moves over it. - """ - return self._mouseover - - def set_mouseover(self, mouseover): - """ - Set whether this artist is queried for custom context information when - the mouse cursor moves over it. - - Parameters - ---------- - mouseover : bool - - See Also - -------- - get_cursor_data - .ToolCursorPosition - .NavigationToolbar2 - """ - self._mouseover = bool(mouseover) - ax = self.axes - if ax: - if self._mouseover: - ax._mouseover_set.add(self) - else: - ax._mouseover_set.discard(self) - - mouseover = property(get_mouseover, set_mouseover) # backcompat. - - -def _get_tightbbox_for_layout_only(obj, *args, **kwargs): - """ - Matplotlib's `.Axes.get_tightbbox` and `.Axis.get_tightbbox` support a - *for_layout_only* kwarg; this helper tries to use the kwarg but skips it - when encountering third-party subclasses that do not support it. - """ - try: - return obj.get_tightbbox(*args, **{**kwargs, "for_layout_only": True}) - except TypeError: - return obj.get_tightbbox(*args, **kwargs) - - -class ArtistInspector: - """ - A helper class to inspect an `~matplotlib.artist.Artist` and return - information about its settable properties and their current values. - """ - - def __init__(self, o): - r""" - Initialize the artist inspector with an `Artist` or an iterable of - `Artist`\s. If an iterable is used, we assume it is a homogeneous - sequence (all `Artist`\s are of the same type) and it is your - responsibility to make sure this is so. - """ - if not isinstance(o, Artist): - if np.iterable(o): - o = list(o) - if len(o): - o = o[0] - - self.oorig = o - if not isinstance(o, type): - o = type(o) - self.o = o - - self.aliasd = self.get_aliases() - - def get_aliases(self): - """ - Get a dict mapping property fullnames to sets of aliases for each alias - in the :class:`~matplotlib.artist.ArtistInspector`. - - e.g., for lines:: - - {'markerfacecolor': {'mfc'}, - 'linewidth' : {'lw'}, - } - """ - names = [name for name in dir(self.o) - if name.startswith(('set_', 'get_')) - and callable(getattr(self.o, name))] - aliases = {} - for name in names: - func = getattr(self.o, name) - if not self.is_alias(func): - continue - propname = re.search(f"`({name[:4]}.*)`", # get_.*/set_.* - inspect.getdoc(func)).group(1) - aliases.setdefault(propname[4:], set()).add(name[4:]) - return aliases - - _get_valid_values_regex = re.compile( - r"\n\s*(?:\.\.\s+)?ACCEPTS:\s*((?:.|\n)*?)(?:$|(?:\n\n))" - ) - - def get_valid_values(self, attr): - """ - Get the legal arguments for the setter associated with *attr*. - - This is done by querying the docstring of the setter for a line that - begins with "ACCEPTS:" or ".. ACCEPTS:", and then by looking for a - numpydoc-style documentation for the setter's first argument. 
- """ - - name = 'set_%s' % attr - if not hasattr(self.o, name): - raise AttributeError(f'{self.o} has no function {name}') - func = getattr(self.o, name) - - docstring = inspect.getdoc(func) - if docstring is None: - return 'unknown' - - if docstring.startswith('Alias for '): - return None - - match = self._get_valid_values_regex.search(docstring) - if match is not None: - return re.sub("\n *", " ", match.group(1)) - - # Much faster than list(inspect.signature(func).parameters)[1], - # although barely relevant wrt. matplotlib's total import time. - param_name = func.__code__.co_varnames[1] - # We could set the presence * based on whether the parameter is a - # varargs (it can't be a varkwargs) but it's not really worth it. - match = re.search(fr"(?m)^ *\*?{param_name} : (.+)", docstring) - if match: - return match.group(1) - - return 'unknown' - - def _replace_path(self, source_class): - """ - Changes the full path to the public API path that is used - in sphinx. This is needed for links to work. - """ - replace_dict = {'_base._AxesBase': 'Axes', - '_axes.Axes': 'Axes'} - for key, value in replace_dict.items(): - source_class = source_class.replace(key, value) - return source_class - - def get_setters(self): - """ - Get the attribute strings with setters for object. - - For example, for a line, return ``['markerfacecolor', 'linewidth', - ....]``. - """ - setters = [] - for name in dir(self.o): - if not name.startswith('set_'): - continue - func = getattr(self.o, name) - if (not callable(func) - or self.number_of_parameters(func) < 2 - or self.is_alias(func)): - continue - setters.append(name[4:]) - return setters - - @staticmethod - @cache - def number_of_parameters(func): - """Return number of parameters of the callable *func*.""" - return len(inspect.signature(func).parameters) - - @staticmethod - @cache - def is_alias(method): - """ - Return whether the object *method* is an alias for another method. - """ - - ds = inspect.getdoc(method) - if ds is None: - return False - - return ds.startswith('Alias for ') - - def aliased_name(self, s): - """ - Return 'PROPNAME or alias' if *s* has an alias, else return 'PROPNAME'. - - For example, for the line markerfacecolor property, which has an - alias, return 'markerfacecolor or mfc' and for the transform - property, which does not, return 'transform'. - """ - aliases = ''.join(' or %s' % x for x in sorted(self.aliasd.get(s, []))) - return s + aliases - - _NOT_LINKABLE = { - # A set of property setter methods that are not available in our - # current docs. This is a workaround used to prevent trying to link - # these setters which would lead to "target reference not found" - # warnings during doc build. - 'matplotlib.image._ImageBase.set_alpha', - 'matplotlib.image._ImageBase.set_array', - 'matplotlib.image._ImageBase.set_data', - 'matplotlib.image._ImageBase.set_filternorm', - 'matplotlib.image._ImageBase.set_filterrad', - 'matplotlib.image._ImageBase.set_interpolation', - 'matplotlib.image._ImageBase.set_interpolation_stage', - 'matplotlib.image._ImageBase.set_resample', - 'matplotlib.text._AnnotationBase.set_annotation_clip', - } - - def aliased_name_rest(self, s, target): - """ - Return 'PROPNAME or alias' if *s* has an alias, else return 'PROPNAME', - formatted for reST. - - For example, for the line markerfacecolor property, which has an - alias, return 'markerfacecolor or mfc' and for the transform - property, which does not, return 'transform'. 
- """ - # workaround to prevent "reference target not found" - if target in self._NOT_LINKABLE: - return f'``{s}``' - - aliases = ''.join(' or %s' % x for x in sorted(self.aliasd.get(s, []))) - return f':meth:`{s} <{target}>`{aliases}' - - def pprint_setters(self, prop=None, leadingspace=2): - """ - If *prop* is *None*, return a list of strings of all settable - properties and their valid values. - - If *prop* is not *None*, it is a valid property name and that - property will be returned as a string of property : valid - values. - """ - if leadingspace: - pad = ' ' * leadingspace - else: - pad = '' - if prop is not None: - accepts = self.get_valid_values(prop) - return f'{pad}{prop}: {accepts}' - - lines = [] - for prop in sorted(self.get_setters()): - accepts = self.get_valid_values(prop) - name = self.aliased_name(prop) - lines.append(f'{pad}{name}: {accepts}') - return lines - - def pprint_setters_rest(self, prop=None, leadingspace=4): - """ - If *prop* is *None*, return a list of reST-formatted strings of all - settable properties and their valid values. - - If *prop* is not *None*, it is a valid property name and that - property will be returned as a string of "property : valid" - values. - """ - if leadingspace: - pad = ' ' * leadingspace - else: - pad = '' - if prop is not None: - accepts = self.get_valid_values(prop) - return f'{pad}{prop}: {accepts}' - - prop_and_qualnames = [] - for prop in sorted(self.get_setters()): - # Find the parent method which actually provides the docstring. - for cls in self.o.__mro__: - method = getattr(cls, f"set_{prop}", None) - if method and method.__doc__ is not None: - break - else: # No docstring available. - method = getattr(self.o, f"set_{prop}") - prop_and_qualnames.append( - (prop, f"{method.__module__}.{method.__qualname__}")) - - names = [self.aliased_name_rest(prop, target) - .replace('_base._AxesBase', 'Axes') - .replace('_axes.Axes', 'Axes') - for prop, target in prop_and_qualnames] - accepts = [self.get_valid_values(prop) - for prop, _ in prop_and_qualnames] - - col0_len = max(len(n) for n in names) - col1_len = max(len(a) for a in accepts) - table_formatstr = pad + ' ' + '=' * col0_len + ' ' + '=' * col1_len - - return [ - '', - pad + '.. table::', - pad + ' :class: property-table', - '', - table_formatstr, - pad + ' ' + 'Property'.ljust(col0_len) - + ' ' + 'Description'.ljust(col1_len), - table_formatstr, - *[pad + ' ' + n.ljust(col0_len) + ' ' + a.ljust(col1_len) - for n, a in zip(names, accepts)], - table_formatstr, - '', - ] - - def properties(self): - """Return a dictionary mapping property name -> value.""" - o = self.oorig - getters = [name for name in dir(o) - if name.startswith('get_') and callable(getattr(o, name))] - getters.sort() - d = {} - for name in getters: - func = getattr(o, name) - if self.is_alias(func): - continue - try: - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - val = func() - except Exception: - continue - else: - d[name[4:]] = val - return d - - def pprint_getters(self): - """Return the getters and actual values as list of strings.""" - lines = [] - for name, val in sorted(self.properties().items()): - if getattr(val, 'shape', ()) != () and len(val) > 6: - s = str(val[:6]) + '...' - else: - s = str(val) - s = s.replace('\n', ' ') - if len(s) > 50: - s = s[:50] + '...' - name = self.aliased_name(name) - lines.append(f' {name} = {s}') - return lines - - -def getp(obj, property=None): - """ - Return the value of an `.Artist`'s *property*, or print all of them. 
- - Parameters - ---------- - obj : `~matplotlib.artist.Artist` - The queried artist; e.g., a `.Line2D`, a `.Text`, or an `~.axes.Axes`. - - property : str or None, default: None - If *property* is 'somename', this function returns - ``obj.get_somename()``. - - If it's None (or unset), it *prints* all gettable properties from - *obj*. Many properties have aliases for shorter typing, e.g. 'lw' is - an alias for 'linewidth'. In the output, aliases and full property - names will be listed as: - - property or alias = value - - e.g.: - - linewidth or lw = 2 - - See Also - -------- - setp - """ - if property is None: - insp = ArtistInspector(obj) - ret = insp.pprint_getters() - print('\n'.join(ret)) - return - return getattr(obj, 'get_' + property)() - -# alias -get = getp - - -def setp(obj, *args, file=None, **kwargs): - """ - Set one or more properties on an `.Artist`, or list allowed values. - - Parameters - ---------- - obj : `~matplotlib.artist.Artist` or list of `.Artist` - The artist(s) whose properties are being set or queried. When setting - properties, all artists are affected; when querying the allowed values, - only the first instance in the sequence is queried. - - For example, two lines can be made thicker and red with a single call: - - >>> x = arange(0, 1, 0.01) - >>> lines = plot(x, sin(2*pi*x), x, sin(4*pi*x)) - >>> setp(lines, linewidth=2, color='r') - - file : file-like, default: `sys.stdout` - Where `setp` writes its output when asked to list allowed values. - - >>> with open('output.log') as file: - ... setp(line, file=file) - - The default, ``None``, means `sys.stdout`. - - *args, **kwargs - The properties to set. The following combinations are supported: - - - Set the linestyle of a line to be dashed: - - >>> line, = plot([1, 2, 3]) - >>> setp(line, linestyle='--') - - - Set multiple properties at once: - - >>> setp(line, linewidth=2, color='r') - - - List allowed values for a line's linestyle: - - >>> setp(line, 'linestyle') - linestyle: {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} - - - List all properties that can be set, and their allowed values: - - >>> setp(line) - agg_filter: a filter function, ... - [long output listing omitted] - - `setp` also supports MATLAB style string/value pairs. For example, the - following are equivalent: - - >>> setp(lines, 'linewidth', 2, 'color', 'r') # MATLAB style - >>> setp(lines, linewidth=2, color='r') # Python style - - See Also - -------- - getp - """ - - if isinstance(obj, Artist): - objs = [obj] - else: - objs = list(cbook.flatten(obj)) - - if not objs: - return - - insp = ArtistInspector(objs[0]) - - if not kwargs and len(args) < 2: - if args: - print(insp.pprint_setters(prop=args[0]), file=file) - else: - print('\n'.join(insp.pprint_setters()), file=file) - return - - if len(args) % 2: - raise ValueError('The set args must be string, value pairs') - - funcvals = dict(zip(args[::2], args[1::2])) - ret = [o.update(funcvals) for o in objs] + [o.set(**kwargs) for o in objs] - return list(cbook.flatten(ret)) - - -def kwdoc(artist): - r""" - Inspect an `~matplotlib.artist.Artist` class (using `.ArtistInspector`) and - return information about its settable properties and their current values. - - Parameters - ---------- - artist : `~matplotlib.artist.Artist` or an iterable of `Artist`\s - - Returns - ------- - str - The settable properties of *artist*, as plain text if - :rc:`docstring.hardcopy` is False and as a rst table (intended for - use in Sphinx) if it is True. 
- """ - ai = ArtistInspector(artist) - return ('\n'.join(ai.pprint_setters_rest(leadingspace=4)) - if mpl.rcParams['docstring.hardcopy'] else - 'Properties:\n' + '\n'.join(ai.pprint_setters(leadingspace=4))) - -# We defer this to the end of them module, because it needs ArtistInspector -# to be defined. -Artist._update_set_signature_and_docstring() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4agg.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4agg.py deleted file mode 100644 index efddfec5607586b37dcdfe216bcb1ff7ae68ae30..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4agg.py +++ /dev/null @@ -1,36 +0,0 @@ -import numpy as np - -from .. import cbook -from . import backend_agg, backend_gtk4 -from .backend_gtk4 import Gtk, _BackendGTK4 - -import cairo # Presence of cairo is already checked by _backend_gtk. - - -class FigureCanvasGTK4Agg(backend_agg.FigureCanvasAgg, - backend_gtk4.FigureCanvasGTK4): - - def on_draw_event(self, widget, ctx): - scale = self.device_pixel_ratio - allocation = self.get_allocation() - - Gtk.render_background( - self.get_style_context(), ctx, - allocation.x, allocation.y, - allocation.width, allocation.height) - - buf = cbook._unmultiplied_rgba8888_to_premultiplied_argb32( - np.asarray(self.get_renderer().buffer_rgba())) - height, width, _ = buf.shape - image = cairo.ImageSurface.create_for_data( - buf.ravel().data, cairo.FORMAT_ARGB32, width, height) - image.set_device_scale(scale, scale) - ctx.set_source_surface(image, 0, 0) - ctx.paint() - - return False - - -@_BackendGTK4.export -class _BackendGTK4Agg(_BackendGTK4): - FigureCanvas = FigureCanvasGTK4Agg diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_typing/_shape.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_typing/_shape.py deleted file mode 100644 index 4f1204e47c6a20012e729514fdd78424126d45b8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_typing/_shape.py +++ /dev/null @@ -1,7 +0,0 @@ -from collections.abc import Sequence -from typing import Union, SupportsIndex - -_Shape = tuple[int, ...] - -# Anything that can be coerced to a shape tuple -_ShapeLike = Union[SupportsIndex, Sequence[SupportsIndex]] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_explode.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_explode.py deleted file mode 100644 index d1e4a603c5710d7356313741198862a0349a26e3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_explode.py +++ /dev/null @@ -1,303 +0,0 @@ -import re - -import numpy as np -import pytest - -import pandas as pd -import pandas._testing as tm - - -def test_error(): - df = pd.DataFrame( - {"A": pd.Series([[0, 1, 2], np.nan, [], (3, 4)], index=list("abcd")), "B": 1} - ) - with pytest.raises( - ValueError, match="column must be a scalar, tuple, or list thereof" - ): - df.explode([list("AA")]) - - with pytest.raises(ValueError, match="column must be unique"): - df.explode(list("AA")) - - df.columns = list("AA") - with pytest.raises( - ValueError, - match=re.escape("DataFrame columns must be unique. 
Duplicate columns: ['A']"), - ): - df.explode("A") - - -@pytest.mark.parametrize( - "input_subset, error_message", - [ - ( - list("AC"), - "columns must have matching element counts", - ), - ( - [], - "column must be nonempty", - ), - ( - list("AC"), - "columns must have matching element counts", - ), - ], -) -def test_error_multi_columns(input_subset, error_message): - # GH 39240 - df = pd.DataFrame( - { - "A": [[0, 1, 2], np.nan, [], (3, 4)], - "B": 1, - "C": [["a", "b", "c"], "foo", [], ["d", "e", "f"]], - }, - index=list("abcd"), - ) - with pytest.raises(ValueError, match=error_message): - df.explode(input_subset) - - -@pytest.mark.parametrize( - "scalar", - ["a", 0, 1.5, pd.Timedelta("1 days"), pd.Timestamp("2019-12-31")], -) -def test_basic(scalar): - df = pd.DataFrame( - {scalar: pd.Series([[0, 1, 2], np.nan, [], (3, 4)], index=list("abcd")), "B": 1} - ) - result = df.explode(scalar) - expected = pd.DataFrame( - { - scalar: pd.Series( - [0, 1, 2, np.nan, np.nan, 3, 4], index=list("aaabcdd"), dtype=object - ), - "B": 1, - } - ) - tm.assert_frame_equal(result, expected) - - -def test_multi_index_rows(): - df = pd.DataFrame( - {"A": np.array([[0, 1, 2], np.nan, [], (3, 4)], dtype=object), "B": 1}, - index=pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1), ("b", 2)]), - ) - - result = df.explode("A") - expected = pd.DataFrame( - { - "A": pd.Series( - [0, 1, 2, np.nan, np.nan, 3, 4], - index=pd.MultiIndex.from_tuples( - [ - ("a", 1), - ("a", 1), - ("a", 1), - ("a", 2), - ("b", 1), - ("b", 2), - ("b", 2), - ] - ), - dtype=object, - ), - "B": 1, - } - ) - tm.assert_frame_equal(result, expected) - - -def test_multi_index_columns(): - df = pd.DataFrame( - {("A", 1): np.array([[0, 1, 2], np.nan, [], (3, 4)], dtype=object), ("A", 2): 1} - ) - - result = df.explode(("A", 1)) - expected = pd.DataFrame( - { - ("A", 1): pd.Series( - [0, 1, 2, np.nan, np.nan, 3, 4], - index=pd.Index([0, 0, 0, 1, 2, 3, 3]), - dtype=object, - ), - ("A", 2): 1, - } - ) - tm.assert_frame_equal(result, expected) - - -def test_usecase(): - # explode a single column - # gh-10511 - df = pd.DataFrame( - [[11, range(5), 10], [22, range(3), 20]], columns=list("ABC") - ).set_index("C") - result = df.explode("B") - - expected = pd.DataFrame( - { - "A": [11, 11, 11, 11, 11, 22, 22, 22], - "B": np.array([0, 1, 2, 3, 4, 0, 1, 2], dtype=object), - "C": [10, 10, 10, 10, 10, 20, 20, 20], - }, - columns=list("ABC"), - ).set_index("C") - - tm.assert_frame_equal(result, expected) - - # gh-8517 - df = pd.DataFrame( - [["2014-01-01", "Alice", "A B"], ["2014-01-02", "Bob", "C D"]], - columns=["dt", "name", "text"], - ) - result = df.assign(text=df.text.str.split(" ")).explode("text") - expected = pd.DataFrame( - [ - ["2014-01-01", "Alice", "A"], - ["2014-01-01", "Alice", "B"], - ["2014-01-02", "Bob", "C"], - ["2014-01-02", "Bob", "D"], - ], - columns=["dt", "name", "text"], - index=[0, 0, 1, 1], - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "input_dict, input_index, expected_dict, expected_index", - [ - ( - {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]}, - [0, 0], - {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]}, - [0, 0, 0, 0], - ), - ( - {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]}, - pd.Index([0, 0], name="my_index"), - {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]}, - pd.Index([0, 0, 0, 0], name="my_index"), - ), - ( - {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]}, - pd.MultiIndex.from_arrays( - [[0, 0], [1, 1]], names=["my_first_index", 
"my_second_index"] - ), - {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]}, - pd.MultiIndex.from_arrays( - [[0, 0, 0, 0], [1, 1, 1, 1]], - names=["my_first_index", "my_second_index"], - ), - ), - ( - {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]}, - pd.MultiIndex.from_arrays([[0, 0], [1, 1]], names=["my_index", None]), - {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]}, - pd.MultiIndex.from_arrays( - [[0, 0, 0, 0], [1, 1, 1, 1]], names=["my_index", None] - ), - ), - ], -) -def test_duplicate_index(input_dict, input_index, expected_dict, expected_index): - # GH 28005 - df = pd.DataFrame(input_dict, index=input_index) - result = df.explode("col1") - expected = pd.DataFrame(expected_dict, index=expected_index, dtype=object) - tm.assert_frame_equal(result, expected) - - -def test_ignore_index(): - # GH 34932 - df = pd.DataFrame({"id": range(0, 20, 10), "values": [list("ab"), list("cd")]}) - result = df.explode("values", ignore_index=True) - expected = pd.DataFrame( - {"id": [0, 0, 10, 10], "values": list("abcd")}, index=[0, 1, 2, 3] - ) - tm.assert_frame_equal(result, expected) - - -def test_explode_sets(): - # https://github.com/pandas-dev/pandas/issues/35614 - df = pd.DataFrame({"a": [{"x", "y"}], "b": [1]}, index=[1]) - result = df.explode(column="a").sort_values(by="a") - expected = pd.DataFrame({"a": ["x", "y"], "b": [1, 1]}, index=[1, 1]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "input_subset, expected_dict, expected_index", - [ - ( - list("AC"), - { - "A": pd.Series( - [0, 1, 2, np.nan, np.nan, 3, 4, np.nan], - index=list("aaabcdde"), - dtype=object, - ), - "B": 1, - "C": ["a", "b", "c", "foo", np.nan, "d", "e", np.nan], - }, - list("aaabcdde"), - ), - ( - list("A"), - { - "A": pd.Series( - [0, 1, 2, np.nan, np.nan, 3, 4, np.nan], - index=list("aaabcdde"), - dtype=object, - ), - "B": 1, - "C": [ - ["a", "b", "c"], - ["a", "b", "c"], - ["a", "b", "c"], - "foo", - [], - ["d", "e"], - ["d", "e"], - np.nan, - ], - }, - list("aaabcdde"), - ), - ], -) -def test_multi_columns(input_subset, expected_dict, expected_index): - # GH 39240 - df = pd.DataFrame( - { - "A": [[0, 1, 2], np.nan, [], (3, 4), np.nan], - "B": 1, - "C": [["a", "b", "c"], "foo", [], ["d", "e"], np.nan], - }, - index=list("abcde"), - ) - result = df.explode(input_subset) - expected = pd.DataFrame(expected_dict, expected_index) - tm.assert_frame_equal(result, expected) - - -def test_multi_columns_nan_empty(): - # GH 46084 - df = pd.DataFrame( - { - "A": [[0, 1], [5], [], [2, 3]], - "B": [9, 8, 7, 6], - "C": [[1, 2], np.nan, [], [3, 4]], - } - ) - result = df.explode(["A", "C"]) - expected = pd.DataFrame( - { - "A": np.array([0, 1, 5, np.nan, 2, 3], dtype=object), - "B": [9, 9, 8, 7, 6, 6], - "C": np.array([1, 2, np.nan, np.nan, 3, 4], dtype=object), - }, - index=[0, 0, 1, 2, 3, 3], - ) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/test_to_html.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/test_to_html.py deleted file mode 100644 index 3b5fe329c320cc0fd963b690ae3160ca4f6a48c2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/test_to_html.py +++ /dev/null @@ -1,980 +0,0 @@ -from datetime import datetime -from io import StringIO -import re - -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - DataFrame, - Index, - 
MultiIndex, - option_context, -) -import pandas._testing as tm - -import pandas.io.formats.format as fmt - -lorem_ipsum = ( - "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod " - "tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim " - "veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex " - "ea commodo consequat. Duis aute irure dolor in reprehenderit in " - "voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur " - "sint occaecat cupidatat non proident, sunt in culpa qui officia " - "deserunt mollit anim id est laborum." -) - - -def expected_html(datapath, name): - """ - Read HTML file from formats data directory. - - Parameters - ---------- - datapath : pytest fixture - The datapath fixture injected into a test by pytest. - name : str - The name of the HTML file without the suffix. - - Returns - ------- - str : contents of HTML file. - """ - filename = ".".join([name, "html"]) - filepath = datapath("io", "formats", "data", "html", filename) - with open(filepath, encoding="utf-8") as f: - html = f.read() - return html.rstrip() - - -@pytest.fixture(params=["mixed", "empty"]) -def biggie_df_fixture(request): - """Fixture for a big mixed Dataframe and an empty Dataframe""" - if request.param == "mixed": - df = DataFrame( - { - "A": np.random.default_rng(2).standard_normal(200), - "B": tm.makeStringIndex(200), - }, - index=np.arange(200), - ) - df.loc[:20, "A"] = np.nan - df.loc[:20, "B"] = np.nan - return df - elif request.param == "empty": - df = DataFrame(index=np.arange(200)) - return df - - -@pytest.fixture(params=fmt._VALID_JUSTIFY_PARAMETERS) -def justify(request): - return request.param - - -@pytest.mark.parametrize("col_space", [30, 50]) -def test_to_html_with_col_space(col_space): - df = DataFrame(np.random.default_rng(2).random(size=(1, 3))) - # check that col_space affects HTML generation - # and be very brittle about it. 
- result = df.to_html(col_space=col_space) - hdrs = [x for x in result.split(r"\n") if re.search(r"\s]", x)] - assert len(hdrs) > 0 - for h in hdrs: - assert "min-width" in h - assert str(col_space) in h - - -def test_to_html_with_column_specific_col_space_raises(): - df = DataFrame( - np.random.default_rng(2).random(size=(3, 3)), columns=["a", "b", "c"] - ) - - msg = ( - "Col_space length\\(\\d+\\) should match " - "DataFrame number of columns\\(\\d+\\)" - ) - with pytest.raises(ValueError, match=msg): - df.to_html(col_space=[30, 40]) - - with pytest.raises(ValueError, match=msg): - df.to_html(col_space=[30, 40, 50, 60]) - - msg = "unknown column" - with pytest.raises(ValueError, match=msg): - df.to_html(col_space={"a": "foo", "b": 23, "d": 34}) - - -def test_to_html_with_column_specific_col_space(): - df = DataFrame( - np.random.default_rng(2).random(size=(3, 3)), columns=["a", "b", "c"] - ) - - result = df.to_html(col_space={"a": "2em", "b": 23}) - hdrs = [x for x in result.split("\n") if re.search(r"\s]", x)] - assert 'min-width: 2em;">a' in hdrs[1] - assert 'min-width: 23px;">b' in hdrs[2] - assert "c" in hdrs[3] - - result = df.to_html(col_space=["1em", 2, 3]) - hdrs = [x for x in result.split("\n") if re.search(r"\s]", x)] - assert 'min-width: 1em;">a' in hdrs[1] - assert 'min-width: 2px;">b' in hdrs[2] - assert 'min-width: 3px;">c' in hdrs[3] - - -def test_to_html_with_empty_string_label(): - # GH 3547, to_html regards empty string labels as repeated labels - data = {"c1": ["a", "b"], "c2": ["a", ""], "data": [1, 2]} - df = DataFrame(data).set_index(["c1", "c2"]) - result = df.to_html() - assert "rowspan" not in result - - -@pytest.mark.parametrize( - "df,expected", - [ - (DataFrame({"\u03c3": np.arange(10.0)}), "unicode_1"), - (DataFrame({"A": ["\u03c3"]}), "unicode_2"), - ], -) -def test_to_html_unicode(df, expected, datapath): - expected = expected_html(datapath, expected) - result = df.to_html() - assert result == expected - - -def test_to_html_encoding(float_frame, tmp_path): - # GH 28663 - path = tmp_path / "test.html" - float_frame.to_html(path, encoding="gbk") - with open(str(path), encoding="gbk") as f: - assert float_frame.to_html() == f.read() - - -def test_to_html_decimal(datapath): - # GH 12031 - df = DataFrame({"A": [6.0, 3.1, 2.2]}) - result = df.to_html(decimal=",") - expected = expected_html(datapath, "gh12031_expected_output") - assert result == expected - - -@pytest.mark.parametrize( - "kwargs,string,expected", - [ - ({}, "", "escaped"), - ({"escape": False}, "bold", "escape_disabled"), - ], -) -def test_to_html_escaped(kwargs, string, expected, datapath): - a = "strl2": {a: string, b: string}} - result = DataFrame(test_dict).to_html(**kwargs) - expected = expected_html(datapath, expected) - assert result == expected - - -@pytest.mark.parametrize("index_is_named", [True, False]) -def test_to_html_multiindex_index_false(index_is_named, datapath): - # GH 8452 - df = DataFrame( - {"a": range(2), "b": range(3, 5), "c": range(5, 7), "d": range(3, 5)} - ) - df.columns = MultiIndex.from_product([["a", "b"], ["c", "d"]]) - if index_is_named: - df.index = Index(df.index.values, name="idx") - result = df.to_html(index=False) - expected = expected_html(datapath, "gh8452_expected_output") - assert result == expected - - -@pytest.mark.parametrize( - "multi_sparse,expected", - [ - (False, "multiindex_sparsify_false_multi_sparse_1"), - (False, "multiindex_sparsify_false_multi_sparse_2"), - (True, "multiindex_sparsify_1"), - (True, "multiindex_sparsify_2"), - ], -) -def 
test_to_html_multiindex_sparsify(multi_sparse, expected, datapath): - index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]], names=["foo", None]) - df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], index=index) - if expected.endswith("2"): - df.columns = index[::2] - with option_context("display.multi_sparse", multi_sparse): - result = df.to_html() - expected = expected_html(datapath, expected) - assert result == expected - - -@pytest.mark.parametrize( - "max_rows,expected", - [ - (60, "gh14882_expected_output_1"), - # Test that ... appears in a middle level - (56, "gh14882_expected_output_2"), - ], -) -def test_to_html_multiindex_odd_even_truncate(max_rows, expected, datapath): - # GH 14882 - Issue on truncation with odd length DataFrame - index = MultiIndex.from_product( - [[100, 200, 300], [10, 20, 30], [1, 2, 3, 4, 5, 6, 7]], names=["a", "b", "c"] - ) - df = DataFrame({"n": range(len(index))}, index=index) - result = df.to_html(max_rows=max_rows) - expected = expected_html(datapath, expected) - assert result == expected - - -@pytest.mark.parametrize( - "df,formatters,expected", - [ - ( - DataFrame( - [[0, 1], [2, 3], [4, 5], [6, 7]], - columns=["foo", None], - index=np.arange(4), - ), - {"__index__": lambda x: "abcd"[x]}, - "index_formatter", - ), - ( - DataFrame({"months": [datetime(2016, 1, 1), datetime(2016, 2, 2)]}), - {"months": lambda x: x.strftime("%Y-%m")}, - "datetime64_monthformatter", - ), - ( - DataFrame( - { - "hod": pd.to_datetime( - ["10:10:10.100", "12:12:12.120"], format="%H:%M:%S.%f" - ) - } - ), - {"hod": lambda x: x.strftime("%H:%M")}, - "datetime64_hourformatter", - ), - ( - DataFrame( - { - "i": pd.Series([1, 2], dtype="int64"), - "f": pd.Series([1, 2], dtype="float64"), - "I": pd.Series([1, 2], dtype="Int64"), - "s": pd.Series([1, 2], dtype="string"), - "b": pd.Series([True, False], dtype="boolean"), - "c": pd.Series(["a", "b"], dtype=pd.CategoricalDtype(["a", "b"])), - "o": pd.Series([1, "2"], dtype=object), - } - ), - [lambda x: "formatted"] * 7, - "various_dtypes_formatted", - ), - ], -) -def test_to_html_formatters(df, formatters, expected, datapath): - expected = expected_html(datapath, expected) - result = df.to_html(formatters=formatters) - assert result == expected - - -def test_to_html_regression_GH6098(): - df = DataFrame( - { - "clé1": ["a", "a", "b", "b", "a"], - "clé2": ["1er", "2ème", "1er", "2ème", "1er"], - "données1": np.random.default_rng(2).standard_normal(5), - "données2": np.random.default_rng(2).standard_normal(5), - } - ) - - # it works - df.pivot_table(index=["clé1"], columns=["clé2"])._repr_html_() - - -def test_to_html_truncate(datapath): - index = pd.date_range(start="20010101", freq="D", periods=20) - df = DataFrame(index=index, columns=range(20)) - result = df.to_html(max_rows=8, max_cols=4) - expected = expected_html(datapath, "truncate") - assert result == expected - - -@pytest.mark.parametrize("size", [1, 5]) -def test_html_invalid_formatters_arg_raises(size): - # issue-28469 - df = DataFrame(columns=["a", "b", "c"]) - msg = "Formatters length({}) should match DataFrame number of columns(3)" - with pytest.raises(ValueError, match=re.escape(msg.format(size))): - df.to_html(formatters=["{}".format] * size) - - -def test_to_html_truncate_formatter(datapath): - # issue-25955 - data = [ - {"A": 1, "B": 2, "C": 3, "D": 4}, - {"A": 5, "B": 6, "C": 7, "D": 8}, - {"A": 9, "B": 10, "C": 11, "D": 12}, - {"A": 13, "B": 14, "C": 15, "D": 16}, - ] - - df = DataFrame(data) - fmt = lambda x: str(x) + "_mod" - formatters = [fmt, fmt, None, 
None] - result = df.to_html(formatters=formatters, max_cols=3) - expected = expected_html(datapath, "truncate_formatter") - assert result == expected - - -@pytest.mark.parametrize( - "sparsify,expected", - [(True, "truncate_multi_index"), (False, "truncate_multi_index_sparse_off")], -) -def test_to_html_truncate_multi_index(sparsify, expected, datapath): - arrays = [ - ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"], - ["one", "two", "one", "two", "one", "two", "one", "two"], - ] - df = DataFrame(index=arrays, columns=arrays) - result = df.to_html(max_rows=7, max_cols=7, sparsify=sparsify) - expected = expected_html(datapath, expected) - assert result == expected - - -@pytest.mark.parametrize( - "option,result,expected", - [ - (None, lambda df: df.to_html(), "1"), - (None, lambda df: df.to_html(border=2), "2"), - (2, lambda df: df.to_html(), "2"), - (2, lambda df: df._repr_html_(), "2"), - ], -) -def test_to_html_border(option, result, expected): - df = DataFrame({"A": [1, 2]}) - if option is None: - result = result(df) - else: - with option_context("display.html.border", option): - result = result(df) - expected = f'border="{expected}"' - assert expected in result - - -@pytest.mark.parametrize("biggie_df_fixture", ["mixed"], indirect=True) -def test_to_html(biggie_df_fixture): - # TODO: split this test - df = biggie_df_fixture - s = df.to_html() - - buf = StringIO() - retval = df.to_html(buf=buf) - assert retval is None - assert buf.getvalue() == s - - assert isinstance(s, str) - - df.to_html(columns=["B", "A"], col_space=17) - df.to_html(columns=["B", "A"], formatters={"A": lambda x: f"{x:.1f}"}) - - df.to_html(columns=["B", "A"], float_format=str) - df.to_html(columns=["B", "A"], col_space=12, float_format=str) - - -@pytest.mark.parametrize("biggie_df_fixture", ["empty"], indirect=True) -def test_to_html_empty_dataframe(biggie_df_fixture): - df = biggie_df_fixture - df.to_html() - - -def test_to_html_filename(biggie_df_fixture, tmpdir): - df = biggie_df_fixture - expected = df.to_html() - path = tmpdir.join("test.html") - df.to_html(path) - result = path.read() - assert result == expected - - -def test_to_html_with_no_bold(): - df = DataFrame({"x": np.random.default_rng(2).standard_normal(5)}) - html = df.to_html(bold_rows=False) - result = html[html.find("")] - assert "B" not in result - - -@pytest.mark.parametrize( - "columns,justify,expected", - [ - ( - MultiIndex.from_tuples( - list(zip(np.arange(2).repeat(2), np.mod(range(4), 2))), - names=["CL0", "CL1"], - ), - "left", - "multiindex_1", - ), - ( - MultiIndex.from_tuples(list(zip(range(4), np.mod(range(4), 2)))), - "right", - "multiindex_2", - ), - ], -) -def test_to_html_multiindex(columns, justify, expected, datapath): - df = DataFrame([list("abcd"), list("efgh")], columns=columns) - result = df.to_html(justify=justify) - expected = expected_html(datapath, expected) - assert result == expected - - -def test_to_html_justify(justify, datapath): - df = DataFrame( - {"A": [6, 30000, 2], "B": [1, 2, 70000], "C": [223442, 0, 1]}, - columns=["A", "B", "C"], - ) - result = df.to_html(justify=justify) - expected = expected_html(datapath, "justify").format(justify=justify) - assert result == expected - - -@pytest.mark.parametrize( - "justify", ["super-right", "small-left", "noinherit", "tiny", "pandas"] -) -def test_to_html_invalid_justify(justify): - # GH 17527 - df = DataFrame() - msg = "Invalid value for justify parameter" - - with pytest.raises(ValueError, match=msg): - df.to_html(justify=justify) - - -class TestHTMLIndex: - 
@pytest.fixture - def df(self): - index = ["foo", "bar", "baz"] - df = DataFrame( - {"A": [1, 2, 3], "B": [1.2, 3.4, 5.6], "C": ["one", "two", np.nan]}, - columns=["A", "B", "C"], - index=index, - ) - return df - - @pytest.fixture - def expected_without_index(self, datapath): - return expected_html(datapath, "index_2") - - def test_to_html_flat_index_without_name( - self, datapath, df, expected_without_index - ): - expected_with_index = expected_html(datapath, "index_1") - assert df.to_html() == expected_with_index - - result = df.to_html(index=False) - for i in df.index: - assert i not in result - assert result == expected_without_index - - def test_to_html_flat_index_with_name(self, datapath, df, expected_without_index): - df.index = Index(["foo", "bar", "baz"], name="idx") - expected_with_index = expected_html(datapath, "index_3") - assert df.to_html() == expected_with_index - assert df.to_html(index=False) == expected_without_index - - def test_to_html_multiindex_without_names( - self, datapath, df, expected_without_index - ): - tuples = [("foo", "car"), ("foo", "bike"), ("bar", "car")] - df.index = MultiIndex.from_tuples(tuples) - - expected_with_index = expected_html(datapath, "index_4") - assert df.to_html() == expected_with_index - - result = df.to_html(index=False) - for i in ["foo", "bar", "car", "bike"]: - assert i not in result - # must be the same result as normal index - assert result == expected_without_index - - def test_to_html_multiindex_with_names(self, datapath, df, expected_without_index): - tuples = [("foo", "car"), ("foo", "bike"), ("bar", "car")] - df.index = MultiIndex.from_tuples(tuples, names=["idx1", "idx2"]) - expected_with_index = expected_html(datapath, "index_5") - assert df.to_html() == expected_with_index - assert df.to_html(index=False) == expected_without_index - - -@pytest.mark.parametrize("classes", ["sortable draggable", ["sortable", "draggable"]]) -def test_to_html_with_classes(classes, datapath): - df = DataFrame() - expected = expected_html(datapath, "with_classes") - result = df.to_html(classes=classes) - assert result == expected - - -def test_to_html_no_index_max_rows(datapath): - # GH 14998 - df = DataFrame({"A": [1, 2, 3, 4]}) - result = df.to_html(index=False, max_rows=1) - expected = expected_html(datapath, "gh14998_expected_output") - assert result == expected - - -def test_to_html_multiindex_max_cols(datapath): - # GH 6131 - index = MultiIndex( - levels=[["ba", "bb", "bc"], ["ca", "cb", "cc"]], - codes=[[0, 1, 2], [0, 1, 2]], - names=["b", "c"], - ) - columns = MultiIndex( - levels=[["d"], ["aa", "ab", "ac"]], - codes=[[0, 0, 0], [0, 1, 2]], - names=[None, "a"], - ) - data = np.array( - [[1.0, np.nan, np.nan], [np.nan, 2.0, np.nan], [np.nan, np.nan, 3.0]] - ) - df = DataFrame(data, index, columns) - result = df.to_html(max_cols=2) - expected = expected_html(datapath, "gh6131_expected_output") - assert result == expected - - -def test_to_html_multi_indexes_index_false(datapath): - # GH 22579 - df = DataFrame( - {"a": range(10), "b": range(10, 20), "c": range(10, 20), "d": range(10, 20)} - ) - df.columns = MultiIndex.from_product([["a", "b"], ["c", "d"]]) - df.index = MultiIndex.from_product([["a", "b"], ["c", "d", "e", "f", "g"]]) - result = df.to_html(index=False) - expected = expected_html(datapath, "gh22579_expected_output") - assert result == expected - - -@pytest.mark.parametrize("index_names", [True, False]) -@pytest.mark.parametrize("header", [True, False]) -@pytest.mark.parametrize("index", [True, False]) -@pytest.mark.parametrize( 
- "column_index, column_type", - [ - (Index([0, 1]), "unnamed_standard"), - (Index([0, 1], name="columns.name"), "named_standard"), - (MultiIndex.from_product([["a"], ["b", "c"]]), "unnamed_multi"), - ( - MultiIndex.from_product( - [["a"], ["b", "c"]], names=["columns.name.0", "columns.name.1"] - ), - "named_multi", - ), - ], -) -@pytest.mark.parametrize( - "row_index, row_type", - [ - (Index([0, 1]), "unnamed_standard"), - (Index([0, 1], name="index.name"), "named_standard"), - (MultiIndex.from_product([["a"], ["b", "c"]]), "unnamed_multi"), - ( - MultiIndex.from_product( - [["a"], ["b", "c"]], names=["index.name.0", "index.name.1"] - ), - "named_multi", - ), - ], -) -def test_to_html_basic_alignment( - datapath, row_index, row_type, column_index, column_type, index, header, index_names -): - # GH 22747, GH 22579 - df = DataFrame(np.zeros((2, 2), dtype=int), index=row_index, columns=column_index) - result = df.to_html(index=index, header=header, index_names=index_names) - - if not index: - row_type = "none" - elif not index_names and row_type.startswith("named"): - row_type = "un" + row_type - - if not header: - column_type = "none" - elif not index_names and column_type.startswith("named"): - column_type = "un" + column_type - - filename = "index_" + row_type + "_columns_" + column_type - expected = expected_html(datapath, filename) - assert result == expected - - -@pytest.mark.parametrize("index_names", [True, False]) -@pytest.mark.parametrize("header", [True, False]) -@pytest.mark.parametrize("index", [True, False]) -@pytest.mark.parametrize( - "column_index, column_type", - [ - (Index(np.arange(8)), "unnamed_standard"), - (Index(np.arange(8), name="columns.name"), "named_standard"), - ( - MultiIndex.from_product([["a", "b"], ["c", "d"], ["e", "f"]]), - "unnamed_multi", - ), - ( - MultiIndex.from_product( - [["a", "b"], ["c", "d"], ["e", "f"]], names=["foo", None, "baz"] - ), - "named_multi", - ), - ], -) -@pytest.mark.parametrize( - "row_index, row_type", - [ - (Index(np.arange(8)), "unnamed_standard"), - (Index(np.arange(8), name="index.name"), "named_standard"), - ( - MultiIndex.from_product([["a", "b"], ["c", "d"], ["e", "f"]]), - "unnamed_multi", - ), - ( - MultiIndex.from_product( - [["a", "b"], ["c", "d"], ["e", "f"]], names=["foo", None, "baz"] - ), - "named_multi", - ), - ], -) -def test_to_html_alignment_with_truncation( - datapath, row_index, row_type, column_index, column_type, index, header, index_names -): - # GH 22747, GH 22579 - df = DataFrame(np.arange(64).reshape(8, 8), index=row_index, columns=column_index) - result = df.to_html( - max_rows=4, max_cols=4, index=index, header=header, index_names=index_names - ) - - if not index: - row_type = "none" - elif not index_names and row_type.startswith("named"): - row_type = "un" + row_type - - if not header: - column_type = "none" - elif not index_names and column_type.startswith("named"): - column_type = "un" + column_type - - filename = "trunc_df_index_" + row_type + "_columns_" + column_type - expected = expected_html(datapath, filename) - assert result == expected - - -@pytest.mark.parametrize("index", [False, 0]) -def test_to_html_truncation_index_false_max_rows(datapath, index): - # GH 15019 - data = [ - [1.764052, 0.400157], - [0.978738, 2.240893], - [1.867558, -0.977278], - [0.950088, -0.151357], - [-0.103219, 0.410599], - ] - df = DataFrame(data) - result = df.to_html(max_rows=4, index=index) - expected = expected_html(datapath, "gh15019_expected_output") - assert result == expected - - 
-@pytest.mark.parametrize("index", [False, 0]) -@pytest.mark.parametrize( - "col_index_named, expected_output", - [(False, "gh22783_expected_output"), (True, "gh22783_named_columns_index")], -) -def test_to_html_truncation_index_false_max_cols( - datapath, index, col_index_named, expected_output -): - # GH 22783 - data = [ - [1.764052, 0.400157, 0.978738, 2.240893, 1.867558], - [-0.977278, 0.950088, -0.151357, -0.103219, 0.410599], - ] - df = DataFrame(data) - if col_index_named: - df.columns.rename("columns.name", inplace=True) - result = df.to_html(max_cols=4, index=index) - expected = expected_html(datapath, expected_output) - assert result == expected - - -@pytest.mark.parametrize("notebook", [True, False]) -def test_to_html_notebook_has_style(notebook): - df = DataFrame({"A": [1, 2, 3]}) - result = df.to_html(notebook=notebook) - - if notebook: - assert "tbody tr th:only-of-type" in result - assert "vertical-align: middle;" in result - assert "thead th" in result - else: - assert "tbody tr th:only-of-type" not in result - assert "vertical-align: middle;" not in result - assert "thead th" not in result - - -def test_to_html_with_index_names_false(): - # GH 16493 - df = DataFrame({"A": [1, 2]}, index=Index(["a", "b"], name="myindexname")) - result = df.to_html(index_names=False) - assert "myindexname" not in result - - -def test_to_html_with_id(): - # GH 8496 - df = DataFrame({"A": [1, 2]}, index=Index(["a", "b"], name="myindexname")) - result = df.to_html(index_names=False, table_id="TEST_ID") - assert ' id="TEST_ID"' in result - - -@pytest.mark.parametrize( - "value,float_format,expected", - [ - (0.19999, "%.3f", "gh21625_expected_output"), - (100.0, "%.0f", "gh22270_expected_output"), - ], -) -def test_to_html_float_format_no_fixed_width(value, float_format, expected, datapath): - # GH 21625, GH 22270 - df = DataFrame({"x": [value]}) - expected = expected_html(datapath, expected) - result = df.to_html(float_format=float_format) - assert result == expected - - -@pytest.mark.parametrize( - "render_links,expected", - [(True, "render_links_true"), (False, "render_links_false")], -) -def test_to_html_render_links(render_links, expected, datapath): - # GH 2679 - data = [ - [0, "https://pandas.pydata.org/?q1=a&q2=b", "pydata.org"], - [0, "www.pydata.org", "pydata.org"], - ] - df = DataFrame(data, columns=["foo", "bar", None]) - - result = df.to_html(render_links=render_links) - expected = expected_html(datapath, expected) - assert result == expected - - -@pytest.mark.parametrize( - "method,expected", - [ - ("to_html", lambda x: lorem_ipsum), - ("_repr_html_", lambda x: lorem_ipsum[: x - 4] + "..."), # regression case - ], -) -@pytest.mark.parametrize("max_colwidth", [10, 20, 50, 100]) -def test_ignore_display_max_colwidth(method, expected, max_colwidth): - # see gh-17004 - df = DataFrame([lorem_ipsum]) - with option_context("display.max_colwidth", max_colwidth): - result = getattr(df, method)() - expected = expected(max_colwidth) - assert expected in result - - -@pytest.mark.parametrize("classes", [True, 0]) -def test_to_html_invalid_classes_type(classes): - # GH 25608 - df = DataFrame() - msg = "classes must be a string, list, or tuple" - - with pytest.raises(TypeError, match=msg): - df.to_html(classes=classes) - - -def test_to_html_round_column_headers(): - # GH 17280 - df = DataFrame([1], columns=[0.55555]) - with option_context("display.precision", 3): - html = df.to_html(notebook=False) - notebook = df.to_html(notebook=True) - assert "0.55555" in html - assert "0.556" in notebook - - 
-@pytest.mark.parametrize("unit", ["100px", "10%", "5em", 150]) -def test_to_html_with_col_space_units(unit): - # GH 25941 - df = DataFrame(np.random.default_rng(2).random(size=(1, 3))) - result = df.to_html(col_space=unit) - result = result.split("tbody")[0] - hdrs = [x for x in result.split("\n") if re.search(r"\s]", x)] - if isinstance(unit, int): - unit = str(unit) + "px" - for h in hdrs: - expected = f'' - assert expected in h - - -def test_html_repr_min_rows_default(datapath): - # gh-27991 - - # default setting no truncation even if above min_rows - df = DataFrame({"a": range(20)}) - result = df._repr_html_() - expected = expected_html(datapath, "html_repr_min_rows_default_no_truncation") - assert result == expected - - # default of max_rows 60 triggers truncation if above - df = DataFrame({"a": range(61)}) - result = df._repr_html_() - expected = expected_html(datapath, "html_repr_min_rows_default_truncated") - assert result == expected - - -@pytest.mark.parametrize( - "max_rows,min_rows,expected", - [ - # truncated after first two rows - (10, 4, "html_repr_max_rows_10_min_rows_4"), - # when set to None, follow value of max_rows - (12, None, "html_repr_max_rows_12_min_rows_None"), - # when set value higher as max_rows, use the minimum - (10, 12, "html_repr_max_rows_10_min_rows_12"), - # max_rows of None -> never truncate - (None, 12, "html_repr_max_rows_None_min_rows_12"), - ], -) -def test_html_repr_min_rows(datapath, max_rows, min_rows, expected): - # gh-27991 - - df = DataFrame({"a": range(61)}) - expected = expected_html(datapath, expected) - with option_context("display.max_rows", max_rows, "display.min_rows", min_rows): - result = df._repr_html_() - assert result == expected - - -def test_to_html_multilevel(multiindex_year_month_day_dataframe_random_data): - ymd = multiindex_year_month_day_dataframe_random_data - - ymd.columns.name = "foo" - ymd.to_html() - ymd.T.to_html() - - -@pytest.mark.parametrize("na_rep", ["NaN", "Ted"]) -def test_to_html_na_rep_and_float_format(na_rep, datapath): - # https://github.com/pandas-dev/pandas/issues/13828 - df = DataFrame( - [ - ["A", 1.2225], - ["A", None], - ], - columns=["Group", "Data"], - ) - result = df.to_html(na_rep=na_rep, float_format="{:.2f}".format) - expected = expected_html(datapath, "gh13828_expected_output") - expected = expected.format(na_rep=na_rep) - assert result == expected - - -def test_to_html_na_rep_non_scalar_data(datapath): - # GH47103 - df = DataFrame([{"a": 1, "b": [1, 2, 3]}]) - result = df.to_html(na_rep="-") - expected = expected_html(datapath, "gh47103_expected_output") - assert result == expected - - -def test_to_html_float_format_object_col(datapath): - # GH#40024 - df = DataFrame(data={"x": [1000.0, "test"]}) - result = df.to_html(float_format=lambda x: f"{x:,.0f}") - expected = expected_html(datapath, "gh40024_expected_output") - assert result == expected - - -def test_to_html_multiindex_col_with_colspace(): - # GH#53885 - df = DataFrame([[1, 2]]) - df.columns = MultiIndex.from_tuples([(1, 1), (2, 1)]) - result = df.to_html(col_space=100) - expected = ( - '\n' - " \n" - " \n" - ' \n' - ' \n' - ' \n' - " \n" - " \n" - ' \n' - ' \n' - ' \n' - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - "
        12
        11
        012
        " - ) - assert result == expected - - -def test_to_html_tuple_col_with_colspace(): - # GH#53885 - df = DataFrame({("a", "b"): [1], "b": [2]}) - result = df.to_html(col_space=100) - expected = ( - '\n' - " \n" - ' \n' - ' \n' - ' \n' - ' \n' - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - " \n" - "
        (a, b)b
        012
        " - ) - assert result == expected - - -def test_to_html_empty_complex_array(): - # GH#54167 - df = DataFrame({"x": np.array([], dtype="complex")}) - result = df.to_html(col_space=100) - expected = ( - '\n' - " \n" - ' \n' - ' \n' - ' \n' - " \n" - " \n" - " \n" - " \n" - "
        x
        " - ) - assert result == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/_windows.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/_windows.py deleted file mode 100644 index 98c700863542bfe01848655a749cff197a881178..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/_windows.py +++ /dev/null @@ -1,72 +0,0 @@ -import sys -from dataclasses import dataclass - - -@dataclass -class WindowsConsoleFeatures: - """Windows features available.""" - - vt: bool = False - """The console supports VT codes.""" - truecolor: bool = False - """The console supports truecolor.""" - - -try: - import ctypes - from ctypes import LibraryLoader - - if sys.platform == "win32": - windll = LibraryLoader(ctypes.WinDLL) - else: - windll = None - raise ImportError("Not windows") - - from rich._win32_console import ( - ENABLE_VIRTUAL_TERMINAL_PROCESSING, - GetConsoleMode, - GetStdHandle, - LegacyWindowsError, - ) - -except (AttributeError, ImportError, ValueError): - - # Fallback if we can't load the Windows DLL - def get_windows_console_features() -> WindowsConsoleFeatures: - features = WindowsConsoleFeatures() - return features - -else: - - def get_windows_console_features() -> WindowsConsoleFeatures: - """Get windows console features. - - Returns: - WindowsConsoleFeatures: An instance of WindowsConsoleFeatures. - """ - handle = GetStdHandle() - try: - console_mode = GetConsoleMode(handle) - success = True - except LegacyWindowsError: - console_mode = 0 - success = False - vt = bool(success and console_mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING) - truecolor = False - if vt: - win_version = sys.getwindowsversion() - truecolor = win_version.major > 10 or ( - win_version.major == 10 and win_version.build >= 15063 - ) - features = WindowsConsoleFeatures(vt=vt, truecolor=truecolor) - return features - - -if __name__ == "__main__": - import platform - - features = get_windows_console_features() - from rich import print - - print(f'platform="{platform.system()}"') - print(repr(features)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/semantic_version/base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/semantic_version/base.py deleted file mode 100644 index 777c27ac463f34996d0281fb7a68e5f6c7fb9a9c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/semantic_version/base.py +++ /dev/null @@ -1,1449 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) The python-semanticversion project -# This code is distributed under the two-clause BSD License. 
- -import functools -import re -import warnings - - -def _has_leading_zero(value): - return (value - and value[0] == '0' - and value.isdigit() - and value != '0') - - -class MaxIdentifier(object): - __slots__ = [] - - def __repr__(self): - return 'MaxIdentifier()' - - def __eq__(self, other): - return isinstance(other, self.__class__) - - -@functools.total_ordering -class NumericIdentifier(object): - __slots__ = ['value'] - - def __init__(self, value): - self.value = int(value) - - def __repr__(self): - return 'NumericIdentifier(%r)' % self.value - - def __eq__(self, other): - if isinstance(other, NumericIdentifier): - return self.value == other.value - return NotImplemented - - def __lt__(self, other): - if isinstance(other, MaxIdentifier): - return True - elif isinstance(other, AlphaIdentifier): - return True - elif isinstance(other, NumericIdentifier): - return self.value < other.value - else: - return NotImplemented - - -@functools.total_ordering -class AlphaIdentifier(object): - __slots__ = ['value'] - - def __init__(self, value): - self.value = value.encode('ascii') - - def __repr__(self): - return 'AlphaIdentifier(%r)' % self.value - - def __eq__(self, other): - if isinstance(other, AlphaIdentifier): - return self.value == other.value - return NotImplemented - - def __lt__(self, other): - if isinstance(other, MaxIdentifier): - return True - elif isinstance(other, NumericIdentifier): - return False - elif isinstance(other, AlphaIdentifier): - return self.value < other.value - else: - return NotImplemented - - -class Version(object): - - version_re = re.compile(r'^(\d+)\.(\d+)\.(\d+)(?:-([0-9a-zA-Z.-]+))?(?:\+([0-9a-zA-Z.-]+))?$') - partial_version_re = re.compile(r'^(\d+)(?:\.(\d+)(?:\.(\d+))?)?(?:-([0-9a-zA-Z.-]*))?(?:\+([0-9a-zA-Z.-]*))?$') - - def __init__( - self, - version_string=None, - major=None, - minor=None, - patch=None, - prerelease=None, - build=None, - partial=False): - if partial: - warnings.warn( - "Partial versions will be removed in 3.0; use SimpleSpec('1.x.x') instead.", - DeprecationWarning, - stacklevel=2, - ) - has_text = version_string is not None - has_parts = not (major is minor is patch is prerelease is build is None) - if not has_text ^ has_parts: - raise ValueError("Call either Version('1.2.3') or Version(major=1, ...).") - - if has_text: - major, minor, patch, prerelease, build = self.parse(version_string, partial) - else: - # Convenience: allow to omit prerelease/build. - prerelease = tuple(prerelease or ()) - if not partial: - build = tuple(build or ()) - self._validate_kwargs(major, minor, patch, prerelease, build, partial) - - self.major = major - self.minor = minor - self.patch = patch - self.prerelease = prerelease - self.build = build - - self.partial = partial - - # Cached precedence keys - # _cmp_precedence_key is used for semver-precedence comparison - self._cmp_precedence_key = self._build_precedence_key(with_build=False) - # _sort_precedence_key is used for self.precedence_key, esp. for sorted(...) 
- self._sort_precedence_key = self._build_precedence_key(with_build=True) - - @classmethod - def _coerce(cls, value, allow_none=False): - if value is None and allow_none: - return value - return int(value) - - def next_major(self): - if self.prerelease and self.minor == self.patch == 0: - return Version( - major=self.major, - minor=0, - patch=0, - partial=self.partial, - ) - else: - return Version( - major=self.major + 1, - minor=0, - patch=0, - partial=self.partial, - ) - - def next_minor(self): - if self.prerelease and self.patch == 0: - return Version( - major=self.major, - minor=self.minor, - patch=0, - partial=self.partial, - ) - else: - return Version( - major=self.major, - minor=self.minor + 1, - patch=0, - partial=self.partial, - ) - - def next_patch(self): - if self.prerelease: - return Version( - major=self.major, - minor=self.minor, - patch=self.patch, - partial=self.partial, - ) - else: - return Version( - major=self.major, - minor=self.minor, - patch=self.patch + 1, - partial=self.partial, - ) - - def truncate(self, level='patch'): - """Return a new Version object, truncated up to the selected level.""" - if level == 'build': - return self - elif level == 'prerelease': - return Version( - major=self.major, - minor=self.minor, - patch=self.patch, - prerelease=self.prerelease, - partial=self.partial, - ) - elif level == 'patch': - return Version( - major=self.major, - minor=self.minor, - patch=self.patch, - partial=self.partial, - ) - elif level == 'minor': - return Version( - major=self.major, - minor=self.minor, - patch=None if self.partial else 0, - partial=self.partial, - ) - elif level == 'major': - return Version( - major=self.major, - minor=None if self.partial else 0, - patch=None if self.partial else 0, - partial=self.partial, - ) - else: - raise ValueError("Invalid truncation level `%s`." % level) - - @classmethod - def coerce(cls, version_string, partial=False): - """Coerce an arbitrary version string into a semver-compatible one. - - The rule is: - - If not enough components, fill minor/patch with zeroes; unless - partial=True - - If more than 3 dot-separated components, extra components are "build" - data. If some "build" data already appeared, append it to the - extra components - - Examples: - >>> Version.coerce('0.1') - Version(0, 1, 0) - >>> Version.coerce('0.1.2.3') - Version(0, 1, 2, (), ('3',)) - >>> Version.coerce('0.1.2.3+4') - Version(0, 1, 2, (), ('3', '4')) - >>> Version.coerce('0.1+2-3+4_5') - Version(0, 1, 0, (), ('2-3', '4-5')) - """ - base_re = re.compile(r'^\d+(?:\.\d+(?:\.\d+)?)?') - - match = base_re.match(version_string) - if not match: - raise ValueError( - "Version string lacks a numerical component: %r" - % version_string - ) - - version = version_string[:match.end()] - if not partial: - # We need a not-partial version. - while version.count('.') < 2: - version += '.0' - - # Strip leading zeros in components - # Version is of the form nn, nn.pp or nn.pp.qq - version = '.'.join( - # If the part was '0', we end up with an empty string. 
- part.lstrip('0') or '0' - for part in version.split('.') - ) - - if match.end() == len(version_string): - return Version(version, partial=partial) - - rest = version_string[match.end():] - - # Cleanup the 'rest' - rest = re.sub(r'[^a-zA-Z0-9+.-]', '-', rest) - - if rest[0] == '+': - # A 'build' component - prerelease = '' - build = rest[1:] - elif rest[0] == '.': - # An extra version component, probably 'build' - prerelease = '' - build = rest[1:] - elif rest[0] == '-': - rest = rest[1:] - if '+' in rest: - prerelease, build = rest.split('+', 1) - else: - prerelease, build = rest, '' - elif '+' in rest: - prerelease, build = rest.split('+', 1) - else: - prerelease, build = rest, '' - - build = build.replace('+', '.') - - if prerelease: - version = '%s-%s' % (version, prerelease) - if build: - version = '%s+%s' % (version, build) - - return cls(version, partial=partial) - - @classmethod - def parse(cls, version_string, partial=False, coerce=False): - """Parse a version string into a tuple of components: - (major, minor, patch, prerelease, build). - - Args: - version_string (str), the version string to parse - partial (bool), whether to accept incomplete input - coerce (bool), whether to try to map the passed in string into a - valid Version. - """ - if not version_string: - raise ValueError('Invalid empty version string: %r' % version_string) - - if partial: - version_re = cls.partial_version_re - else: - version_re = cls.version_re - - match = version_re.match(version_string) - if not match: - raise ValueError('Invalid version string: %r' % version_string) - - major, minor, patch, prerelease, build = match.groups() - - if _has_leading_zero(major): - raise ValueError("Invalid leading zero in major: %r" % version_string) - if _has_leading_zero(minor): - raise ValueError("Invalid leading zero in minor: %r" % version_string) - if _has_leading_zero(patch): - raise ValueError("Invalid leading zero in patch: %r" % version_string) - - major = int(major) - minor = cls._coerce(minor, partial) - patch = cls._coerce(patch, partial) - - if prerelease is None: - if partial and (build is None): - # No build info, strip here - return (major, minor, patch, None, None) - else: - prerelease = () - elif prerelease == '': - prerelease = () - else: - prerelease = tuple(prerelease.split('.')) - cls._validate_identifiers(prerelease, allow_leading_zeroes=False) - - if build is None: - if partial: - build = None - else: - build = () - elif build == '': - build = () - else: - build = tuple(build.split('.')) - cls._validate_identifiers(build, allow_leading_zeroes=True) - - return (major, minor, patch, prerelease, build) - - @classmethod - def _validate_identifiers(cls, identifiers, allow_leading_zeroes=False): - for item in identifiers: - if not item: - raise ValueError( - "Invalid empty identifier %r in %r" - % (item, '.'.join(identifiers)) - ) - - if item[0] == '0' and item.isdigit() and item != '0' and not allow_leading_zeroes: - raise ValueError("Invalid leading zero in identifier %r" % item) - - @classmethod - def _validate_kwargs(cls, major, minor, patch, prerelease, build, partial): - if ( - major != int(major) - or minor != cls._coerce(minor, partial) - or patch != cls._coerce(patch, partial) - or prerelease is None and not partial - or build is None and not partial - ): - raise ValueError( - "Invalid kwargs to Version(major=%r, minor=%r, patch=%r, " - "prerelease=%r, build=%r, partial=%r" % ( - major, minor, patch, prerelease, build, partial - )) - if prerelease is not None: - 
cls._validate_identifiers(prerelease, allow_leading_zeroes=False) - if build is not None: - cls._validate_identifiers(build, allow_leading_zeroes=True) - - def __iter__(self): - return iter((self.major, self.minor, self.patch, self.prerelease, self.build)) - - def __str__(self): - version = '%d' % self.major - if self.minor is not None: - version = '%s.%d' % (version, self.minor) - if self.patch is not None: - version = '%s.%d' % (version, self.patch) - - if self.prerelease or (self.partial and self.prerelease == () and self.build is None): - version = '%s-%s' % (version, '.'.join(self.prerelease)) - if self.build or (self.partial and self.build == ()): - version = '%s+%s' % (version, '.'.join(self.build)) - return version - - def __repr__(self): - return '%s(%r%s)' % ( - self.__class__.__name__, - str(self), - ', partial=True' if self.partial else '', - ) - - def __hash__(self): - # We don't include 'partial', since this is strictly equivalent to having - # at least a field being `None`. - return hash((self.major, self.minor, self.patch, self.prerelease, self.build)) - - def _build_precedence_key(self, with_build=False): - """Build a precedence key. - - The "build" component should only be used when sorting an iterable - of versions. - """ - if self.prerelease: - prerelease_key = tuple( - NumericIdentifier(part) if part.isdigit() else AlphaIdentifier(part) - for part in self.prerelease - ) - else: - prerelease_key = ( - MaxIdentifier(), - ) - - if not with_build: - return ( - self.major, - self.minor, - self.patch, - prerelease_key, - ) - - build_key = tuple( - NumericIdentifier(part) if part.isdigit() else AlphaIdentifier(part) - for part in self.build or () - ) - - return ( - self.major, - self.minor, - self.patch, - prerelease_key, - build_key, - ) - - @property - def precedence_key(self): - return self._sort_precedence_key - - def __cmp__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - if self < other: - return -1 - elif self > other: - return 1 - elif self == other: - return 0 - else: - return NotImplemented - - def __eq__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return ( - self.major == other.major - and self.minor == other.minor - and self.patch == other.patch - and (self.prerelease or ()) == (other.prerelease or ()) - and (self.build or ()) == (other.build or ()) - ) - - def __ne__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return tuple(self) != tuple(other) - - def __lt__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key < other._cmp_precedence_key - - def __le__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key <= other._cmp_precedence_key - - def __gt__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key > other._cmp_precedence_key - - def __ge__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - return self._cmp_precedence_key >= other._cmp_precedence_key - - -class SpecItem(object): - """A requirement specification.""" - - KIND_ANY = '*' - KIND_LT = '<' - KIND_LTE = '<=' - KIND_EQUAL = '==' - KIND_SHORTEQ = '=' - KIND_EMPTY = '' - KIND_GTE = '>=' - KIND_GT = '>' - KIND_NEQ = '!=' - KIND_CARET = '^' - KIND_TILDE = '~' - KIND_COMPATIBLE = '~=' - - # Map a kind alias to its full version - KIND_ALIASES = { - KIND_SHORTEQ: KIND_EQUAL, - 
KIND_EMPTY: KIND_EQUAL, - } - - re_spec = re.compile(r'^(<|<=||=|==|>=|>|!=|\^|~|~=)(\d.*)$') - - def __init__(self, requirement_string, _warn=True): - if _warn: - warnings.warn( - "The `SpecItem` class will be removed in 3.0.", - DeprecationWarning, - stacklevel=2, - ) - kind, spec = self.parse(requirement_string) - self.kind = kind - self.spec = spec - self._clause = Spec(requirement_string).clause - - @classmethod - def parse(cls, requirement_string): - if not requirement_string: - raise ValueError("Invalid empty requirement specification: %r" % requirement_string) - - # Special case: the 'any' version spec. - if requirement_string == '*': - return (cls.KIND_ANY, '') - - match = cls.re_spec.match(requirement_string) - if not match: - raise ValueError("Invalid requirement specification: %r" % requirement_string) - - kind, version = match.groups() - if kind in cls.KIND_ALIASES: - kind = cls.KIND_ALIASES[kind] - - spec = Version(version, partial=True) - if spec.build is not None and kind not in (cls.KIND_EQUAL, cls.KIND_NEQ): - raise ValueError( - "Invalid requirement specification %r: build numbers have no ordering." - % requirement_string - ) - return (kind, spec) - - @classmethod - def from_matcher(cls, matcher): - if matcher == Always(): - return cls('*', _warn=False) - elif matcher == Never(): - return cls('<0.0.0-', _warn=False) - elif isinstance(matcher, Range): - return cls('%s%s' % (matcher.operator, matcher.target), _warn=False) - - def match(self, version): - return self._clause.match(version) - - def __str__(self): - return '%s%s' % (self.kind, self.spec) - - def __repr__(self): - return '' % (self.kind, self.spec) - - def __eq__(self, other): - if not isinstance(other, SpecItem): - return NotImplemented - return self.kind == other.kind and self.spec == other.spec - - def __hash__(self): - return hash((self.kind, self.spec)) - - -def compare(v1, v2): - return Version(v1).__cmp__(Version(v2)) - - -def match(spec, version): - return Spec(spec).match(Version(version)) - - -def validate(version_string): - """Validates a version string againt the SemVer specification.""" - try: - Version.parse(version_string) - return True - except ValueError: - return False - - -DEFAULT_SYNTAX = 'simple' - - -class BaseSpec(object): - """A specification of compatible versions. - - Usage: - >>> Spec('>=1.0.0', syntax='npm') - - A version matches a specification if it matches any - of the clauses of that specification. 
- - Internally, a Spec is AnyOf( - AllOf(Matcher, Matcher, Matcher), - AllOf(...), - ) - """ - SYNTAXES = {} - - @classmethod - def register_syntax(cls, subclass): - syntax = subclass.SYNTAX - if syntax is None: - raise ValueError("A Spec needs its SYNTAX field to be set.") - elif syntax in cls.SYNTAXES: - raise ValueError( - "Duplicate syntax for %s: %r, %r" - % (syntax, cls.SYNTAXES[syntax], subclass) - ) - cls.SYNTAXES[syntax] = subclass - return subclass - - def __init__(self, expression): - super(BaseSpec, self).__init__() - self.expression = expression - self.clause = self._parse_to_clause(expression) - - @classmethod - def parse(cls, expression, syntax=DEFAULT_SYNTAX): - """Convert a syntax-specific expression into a BaseSpec instance.""" - return cls.SYNTAXES[syntax](expression) - - @classmethod - def _parse_to_clause(cls, expression): - """Converts an expression to a clause.""" - raise NotImplementedError() - - def filter(self, versions): - """Filter an iterable of versions satisfying the Spec.""" - for version in versions: - if self.match(version): - yield version - - def match(self, version): - """Check whether a Version satisfies the Spec.""" - return self.clause.match(version) - - def select(self, versions): - """Select the best compatible version among an iterable of options.""" - options = list(self.filter(versions)) - if options: - return max(options) - return None - - def __contains__(self, version): - """Whether `version in self`.""" - if isinstance(version, Version): - return self.match(version) - return False - - def __eq__(self, other): - if not isinstance(other, self.__class__): - return NotImplemented - - return self.clause == other.clause - - def __hash__(self): - return hash(self.clause) - - def __str__(self): - return self.expression - - def __repr__(self): - return '<%s: %r>' % (self.__class__.__name__, self.expression) - - -class Clause(object): - __slots__ = [] - - def match(self, version): - raise NotImplementedError() - - def __and__(self, other): - raise NotImplementedError() - - def __or__(self, other): - raise NotImplementedError() - - def __eq__(self, other): - raise NotImplementedError() - - def prettyprint(self, indent='\t'): - """Pretty-print the clause. - """ - return '\n'.join(self._pretty()).replace('\t', indent) - - def _pretty(self): - """Actual pretty-printing logic. - - Yields: - A list of string. Indentation is performed with \t. 
- """ - yield repr(self) - - def __ne__(self, other): - return not self == other - - def simplify(self): - return self - - -class AnyOf(Clause): - __slots__ = ['clauses'] - - def __init__(self, *clauses): - super(AnyOf, self).__init__() - self.clauses = frozenset(clauses) - - def match(self, version): - return any(c.match(version) for c in self.clauses) - - def simplify(self): - subclauses = set() - for clause in self.clauses: - simplified = clause.simplify() - if isinstance(simplified, AnyOf): - subclauses |= simplified.clauses - elif simplified == Never(): - continue - else: - subclauses.add(simplified) - if len(subclauses) == 1: - return subclauses.pop() - return AnyOf(*subclauses) - - def __hash__(self): - return hash((AnyOf, self.clauses)) - - def __iter__(self): - return iter(self.clauses) - - def __eq__(self, other): - return isinstance(other, self.__class__) and self.clauses == other.clauses - - def __and__(self, other): - if isinstance(other, AllOf): - return other & self - elif isinstance(other, Matcher) or isinstance(other, AnyOf): - return AllOf(self, other) - else: - return NotImplemented - - def __or__(self, other): - if isinstance(other, AnyOf): - clauses = list(self.clauses | other.clauses) - elif isinstance(other, Matcher) or isinstance(other, AllOf): - clauses = list(self.clauses | set([other])) - else: - return NotImplemented - return AnyOf(*clauses) - - def __repr__(self): - return 'AnyOf(%s)' % ', '.join(sorted(repr(c) for c in self.clauses)) - - def _pretty(self): - yield 'AnyOF(' - for clause in self.clauses: - lines = list(clause._pretty()) - for line in lines[:-1]: - yield '\t' + line - yield '\t' + lines[-1] + ',' - yield ')' - - -class AllOf(Clause): - __slots__ = ['clauses'] - - def __init__(self, *clauses): - super(AllOf, self).__init__() - self.clauses = frozenset(clauses) - - def match(self, version): - return all(clause.match(version) for clause in self.clauses) - - def simplify(self): - subclauses = set() - for clause in self.clauses: - simplified = clause.simplify() - if isinstance(simplified, AllOf): - subclauses |= simplified.clauses - elif simplified == Always(): - continue - else: - subclauses.add(simplified) - if len(subclauses) == 1: - return subclauses.pop() - return AllOf(*subclauses) - - def __hash__(self): - return hash((AllOf, self.clauses)) - - def __iter__(self): - return iter(self.clauses) - - def __eq__(self, other): - return isinstance(other, self.__class__) and self.clauses == other.clauses - - def __and__(self, other): - if isinstance(other, Matcher) or isinstance(other, AnyOf): - clauses = list(self.clauses | set([other])) - elif isinstance(other, AllOf): - clauses = list(self.clauses | other.clauses) - else: - return NotImplemented - return AllOf(*clauses) - - def __or__(self, other): - if isinstance(other, AnyOf): - return other | self - elif isinstance(other, Matcher): - return AnyOf(self, AllOf(other)) - elif isinstance(other, AllOf): - return AnyOf(self, other) - else: - return NotImplemented - - def __repr__(self): - return 'AllOf(%s)' % ', '.join(sorted(repr(c) for c in self.clauses)) - - def _pretty(self): - yield 'AllOF(' - for clause in self.clauses: - lines = list(clause._pretty()) - for line in lines[:-1]: - yield '\t' + line - yield '\t' + lines[-1] + ',' - yield ')' - - -class Matcher(Clause): - __slots__ = [] - - def __and__(self, other): - if isinstance(other, AllOf): - return other & self - elif isinstance(other, Matcher) or isinstance(other, AnyOf): - return AllOf(self, other) - else: - return NotImplemented - - def 
__or__(self, other): - if isinstance(other, AnyOf): - return other | self - elif isinstance(other, Matcher) or isinstance(other, AllOf): - return AnyOf(self, other) - else: - return NotImplemented - - -class Never(Matcher): - __slots__ = [] - - def match(self, version): - return False - - def __hash__(self): - return hash((Never,)) - - def __eq__(self, other): - return isinstance(other, self.__class__) - - def __and__(self, other): - return self - - def __or__(self, other): - return other - - def __repr__(self): - return 'Never()' - - -class Always(Matcher): - __slots__ = [] - - def match(self, version): - return True - - def __hash__(self): - return hash((Always,)) - - def __eq__(self, other): - return isinstance(other, self.__class__) - - def __and__(self, other): - return other - - def __or__(self, other): - return self - - def __repr__(self): - return 'Always()' - - -class Range(Matcher): - OP_EQ = '==' - OP_GT = '>' - OP_GTE = '>=' - OP_LT = '<' - OP_LTE = '<=' - OP_NEQ = '!=' - - # <1.2.3 matches 1.2.3-a1 - PRERELEASE_ALWAYS = 'always' - # <1.2.3 does not match 1.2.3-a1 - PRERELEASE_NATURAL = 'natural' - # 1.2.3-a1 is only considered if target == 1.2.3-xxx - PRERELEASE_SAMEPATCH = 'same-patch' - - # 1.2.3 matches 1.2.3+* - BUILD_IMPLICIT = 'implicit' - # 1.2.3 matches only 1.2.3, not 1.2.3+4 - BUILD_STRICT = 'strict' - - __slots__ = ['operator', 'target', 'prerelease_policy', 'build_policy'] - - def __init__(self, operator, target, prerelease_policy=PRERELEASE_NATURAL, build_policy=BUILD_IMPLICIT): - super(Range, self).__init__() - if target.build and operator not in (self.OP_EQ, self.OP_NEQ): - raise ValueError( - "Invalid range %s%s: build numbers have no ordering." - % (operator, target)) - self.operator = operator - self.target = target - self.prerelease_policy = prerelease_policy - self.build_policy = self.BUILD_STRICT if target.build else build_policy - - def match(self, version): - if self.build_policy != self.BUILD_STRICT: - version = version.truncate('prerelease') - - if version.prerelease: - same_patch = self.target.truncate() == version.truncate() - - if self.prerelease_policy == self.PRERELEASE_SAMEPATCH and not same_patch: - return False - - if self.operator == self.OP_EQ: - if self.build_policy == self.BUILD_STRICT: - return ( - self.target.truncate('prerelease') == version.truncate('prerelease') - and version.build == self.target.build - ) - return version == self.target - elif self.operator == self.OP_GT: - return version > self.target - elif self.operator == self.OP_GTE: - return version >= self.target - elif self.operator == self.OP_LT: - if ( - version.prerelease - and self.prerelease_policy == self.PRERELEASE_NATURAL - and version.truncate() == self.target.truncate() - and not self.target.prerelease - ): - return False - return version < self.target - elif self.operator == self.OP_LTE: - return version <= self.target - else: - assert self.operator == self.OP_NEQ - if self.build_policy == self.BUILD_STRICT: - return not ( - self.target.truncate('prerelease') == version.truncate('prerelease') - and version.build == self.target.build - ) - - if ( - version.prerelease - and self.prerelease_policy == self.PRERELEASE_NATURAL - and version.truncate() == self.target.truncate() - and not self.target.prerelease - ): - return False - return version != self.target - - def __hash__(self): - return hash((Range, self.operator, self.target, self.prerelease_policy)) - - def __eq__(self, other): - return ( - isinstance(other, self.__class__) - and self.operator == other.operator - 
and self.target == other.target - and self.prerelease_policy == other.prerelease_policy - ) - - def __str__(self): - return '%s%s' % (self.operator, self.target) - - def __repr__(self): - policy_part = ( - '' if self.prerelease_policy == self.PRERELEASE_NATURAL - else ', prerelease_policy=%r' % self.prerelease_policy - ) + ( - '' if self.build_policy == self.BUILD_IMPLICIT - else ', build_policy=%r' % self.build_policy - ) - return 'Range(%r, %r%s)' % ( - self.operator, - self.target, - policy_part, - ) - - -@BaseSpec.register_syntax -class SimpleSpec(BaseSpec): - - SYNTAX = 'simple' - - @classmethod - def _parse_to_clause(cls, expression): - return cls.Parser.parse(expression) - - class Parser: - NUMBER = r'\*|0|[1-9][0-9]*' - NAIVE_SPEC = re.compile(r"""^ - (?P<|<=||=|==|>=|>|!=|\^|~|~=) - (?P{nb})(?:\.(?P{nb})(?:\.(?P{nb}))?)? - (?:-(?P[a-z0-9A-Z.-]*))? - (?:\+(?P[a-z0-9A-Z.-]*))? - $ - """.format(nb=NUMBER), - re.VERBOSE, - ) - - @classmethod - def parse(cls, expression): - blocks = expression.split(',') - clause = Always() - for block in blocks: - if not cls.NAIVE_SPEC.match(block): - raise ValueError("Invalid simple block %r" % block) - clause &= cls.parse_block(block) - - return clause - - PREFIX_CARET = '^' - PREFIX_TILDE = '~' - PREFIX_COMPATIBLE = '~=' - PREFIX_EQ = '==' - PREFIX_NEQ = '!=' - PREFIX_GT = '>' - PREFIX_GTE = '>=' - PREFIX_LT = '<' - PREFIX_LTE = '<=' - - PREFIX_ALIASES = { - '=': PREFIX_EQ, - '': PREFIX_EQ, - } - - EMPTY_VALUES = ['*', 'x', 'X', None] - - @classmethod - def parse_block(cls, expr): - if not cls.NAIVE_SPEC.match(expr): - raise ValueError("Invalid simple spec component: %r" % expr) - prefix, major_t, minor_t, patch_t, prerel, build = cls.NAIVE_SPEC.match(expr).groups() - prefix = cls.PREFIX_ALIASES.get(prefix, prefix) - - major = None if major_t in cls.EMPTY_VALUES else int(major_t) - minor = None if minor_t in cls.EMPTY_VALUES else int(minor_t) - patch = None if patch_t in cls.EMPTY_VALUES else int(patch_t) - - if major is None: # '*' - target = Version(major=0, minor=0, patch=0) - if prefix not in (cls.PREFIX_EQ, cls.PREFIX_GTE): - raise ValueError("Invalid simple spec: %r" % expr) - elif minor is None: - target = Version(major=major, minor=0, patch=0) - elif patch is None: - target = Version(major=major, minor=minor, patch=0) - else: - target = Version( - major=major, - minor=minor, - patch=patch, - prerelease=prerel.split('.') if prerel else (), - build=build.split('.') if build else (), - ) - - if (major is None or minor is None or patch is None) and (prerel or build): - raise ValueError("Invalid simple spec: %r" % expr) - - if build is not None and prefix not in (cls.PREFIX_EQ, cls.PREFIX_NEQ): - raise ValueError("Invalid simple spec: %r" % expr) - - if prefix == cls.PREFIX_CARET: - # Accept anything with the same most-significant digit - if target.major: - high = target.next_major() - elif target.minor: - high = target.next_minor() - else: - high = target.next_patch() - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, high) - - elif prefix == cls.PREFIX_TILDE: - assert major is not None - # Accept any higher patch in the same minor - # Might go higher if the initial version was a partial - if minor is None: - high = target.next_major() - else: - high = target.next_minor() - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, high) - - elif prefix == cls.PREFIX_COMPATIBLE: - assert major is not None - # ~1 is 1.0.0..2.0.0; ~=2.2 is 2.2.0..3.0.0; ~=1.4.5 is 1.4.5..1.5.0 - if minor is None or patch is None: - # We got a partial version 
- high = target.next_major() - else: - high = target.next_minor() - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, high) - - elif prefix == cls.PREFIX_EQ: - if major is None: - return Range(Range.OP_GTE, target) - elif minor is None: - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, target.next_major()) - elif patch is None: - return Range(Range.OP_GTE, target) & Range(Range.OP_LT, target.next_minor()) - elif build == '': - return Range(Range.OP_EQ, target, build_policy=Range.BUILD_STRICT) - else: - return Range(Range.OP_EQ, target) - - elif prefix == cls.PREFIX_NEQ: - assert major is not None - if minor is None: - # !=1.x => <1.0.0 || >=2.0.0 - return Range(Range.OP_LT, target) | Range(Range.OP_GTE, target.next_major()) - elif patch is None: - # !=1.2.x => <1.2.0 || >=1.3.0 - return Range(Range.OP_LT, target) | Range(Range.OP_GTE, target.next_minor()) - elif prerel == '': - # !=1.2.3- - return Range(Range.OP_NEQ, target, prerelease_policy=Range.PRERELEASE_ALWAYS) - elif build == '': - # !=1.2.3+ or !=1.2.3-a2+ - return Range(Range.OP_NEQ, target, build_policy=Range.BUILD_STRICT) - else: - return Range(Range.OP_NEQ, target) - - elif prefix == cls.PREFIX_GT: - assert major is not None - if minor is None: - # >1.x => >=2.0 - return Range(Range.OP_GTE, target.next_major()) - elif patch is None: - return Range(Range.OP_GTE, target.next_minor()) - else: - return Range(Range.OP_GT, target) - - elif prefix == cls.PREFIX_GTE: - return Range(Range.OP_GTE, target) - - elif prefix == cls.PREFIX_LT: - assert major is not None - if prerel == '': - # <1.2.3- - return Range(Range.OP_LT, target, prerelease_policy=Range.PRERELEASE_ALWAYS) - return Range(Range.OP_LT, target) - - else: - assert prefix == cls.PREFIX_LTE - assert major is not None - if minor is None: - # <=1.x => <2.0 - return Range(Range.OP_LT, target.next_major()) - elif patch is None: - return Range(Range.OP_LT, target.next_minor()) - else: - return Range(Range.OP_LTE, target) - - -class LegacySpec(SimpleSpec): - def __init__(self, *expressions): - warnings.warn( - "The Spec() class will be removed in 3.1; use SimpleSpec() instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - if len(expressions) > 1: - warnings.warn( - "Passing 2+ arguments to SimpleSpec will be removed in 3.0; concatenate them with ',' instead.", - DeprecationWarning, - stacklevel=2, - ) - expression = ','.join(expressions) - super(LegacySpec, self).__init__(expression) - - @property - def specs(self): - return list(self) - - def __iter__(self): - warnings.warn( - "Iterating over the components of a SimpleSpec object will be removed in 3.0.", - DeprecationWarning, - stacklevel=2, - ) - try: - clauses = list(self.clause) - except TypeError: # Not an iterable - clauses = [self.clause] - for clause in clauses: - yield SpecItem.from_matcher(clause) - - -Spec = LegacySpec - - -@BaseSpec.register_syntax -class NpmSpec(BaseSpec): - SYNTAX = 'npm' - - @classmethod - def _parse_to_clause(cls, expression): - return cls.Parser.parse(expression) - - class Parser: - JOINER = '||' - HYPHEN = ' - ' - - NUMBER = r'x|X|\*|0|[1-9][0-9]*' - PART = r'[a-zA-Z0-9.-]*' - NPM_SPEC_BLOCK = re.compile(r""" - ^(?:v)? # Strip optional initial v - (?P<|<=|>=|>|=|\^|~|) # Operator, can be empty - (?P{nb})(?:\.(?P{nb})(?:\.(?P{nb}))?)? - (?:-(?P{part}))? # Optional re-release - (?:\+(?P{part}))? 
# Optional build - $""".format(nb=NUMBER, part=PART), - re.VERBOSE, - ) - - @classmethod - def range(cls, operator, target): - return Range(operator, target, prerelease_policy=Range.PRERELEASE_SAMEPATCH) - - @classmethod - def parse(cls, expression): - result = Never() - groups = expression.split(cls.JOINER) - for group in groups: - group = group.strip() - if not group: - group = '>=0.0.0' - - subclauses = [] - if cls.HYPHEN in group: - low, high = group.split(cls.HYPHEN, 2) - subclauses = cls.parse_simple('>=' + low) + cls.parse_simple('<=' + high) - - else: - blocks = group.split(' ') - for block in blocks: - if not cls.NPM_SPEC_BLOCK.match(block): - raise ValueError("Invalid NPM block in %r: %r" % (expression, block)) - - subclauses.extend(cls.parse_simple(block)) - - prerelease_clauses = [] - non_prerel_clauses = [] - for clause in subclauses: - if clause.target.prerelease: - if clause.operator in (Range.OP_GT, Range.OP_GTE): - prerelease_clauses.append(Range( - operator=Range.OP_LT, - target=Version( - major=clause.target.major, - minor=clause.target.minor, - patch=clause.target.patch + 1, - ), - prerelease_policy=Range.PRERELEASE_ALWAYS, - )) - elif clause.operator in (Range.OP_LT, Range.OP_LTE): - prerelease_clauses.append(Range( - operator=Range.OP_GTE, - target=Version( - major=clause.target.major, - minor=clause.target.minor, - patch=0, - prerelease=(), - ), - prerelease_policy=Range.PRERELEASE_ALWAYS, - )) - prerelease_clauses.append(clause) - non_prerel_clauses.append(cls.range( - operator=clause.operator, - target=clause.target.truncate(), - )) - else: - non_prerel_clauses.append(clause) - if prerelease_clauses: - result |= AllOf(*prerelease_clauses) - result |= AllOf(*non_prerel_clauses) - - return result - - PREFIX_CARET = '^' - PREFIX_TILDE = '~' - PREFIX_EQ = '=' - PREFIX_GT = '>' - PREFIX_GTE = '>=' - PREFIX_LT = '<' - PREFIX_LTE = '<=' - - PREFIX_ALIASES = { - '': PREFIX_EQ, - } - - PREFIX_TO_OPERATOR = { - PREFIX_EQ: Range.OP_EQ, - PREFIX_LT: Range.OP_LT, - PREFIX_LTE: Range.OP_LTE, - PREFIX_GTE: Range.OP_GTE, - PREFIX_GT: Range.OP_GT, - } - - EMPTY_VALUES = ['*', 'x', 'X', None] - - @classmethod - def parse_simple(cls, simple): - match = cls.NPM_SPEC_BLOCK.match(simple) - - prefix, major_t, minor_t, patch_t, prerel, build = match.groups() - - prefix = cls.PREFIX_ALIASES.get(prefix, prefix) - major = None if major_t in cls.EMPTY_VALUES else int(major_t) - minor = None if minor_t in cls.EMPTY_VALUES else int(minor_t) - patch = None if patch_t in cls.EMPTY_VALUES else int(patch_t) - - if build is not None and prefix not in [cls.PREFIX_EQ]: - # Ignore the 'build' part when not comparing to a specific part. 
- build = None - - if major is None: # '*', 'x', 'X' - target = Version(major=0, minor=0, patch=0) - if prefix not in [cls.PREFIX_EQ, cls.PREFIX_GTE]: - raise ValueError("Invalid expression %r" % simple) - prefix = cls.PREFIX_GTE - elif minor is None: - target = Version(major=major, minor=0, patch=0) - elif patch is None: - target = Version(major=major, minor=minor, patch=0) - else: - target = Version( - major=major, - minor=minor, - patch=patch, - prerelease=prerel.split('.') if prerel else (), - build=build.split('.') if build else (), - ) - - if (major is None or minor is None or patch is None) and (prerel or build): - raise ValueError("Invalid NPM spec: %r" % simple) - - if prefix == cls.PREFIX_CARET: - if target.major: # ^1.2.4 => >=1.2.4 <2.0.0 ; ^1.x => >=1.0.0 <2.0.0 - high = target.truncate().next_major() - elif target.minor: # ^0.1.2 => >=0.1.2 <0.2.0 - high = target.truncate().next_minor() - elif minor is None: # ^0.x => >=0.0.0 <1.0.0 - high = target.truncate().next_major() - elif patch is None: # ^0.2.x => >=0.2.0 <0.3.0 - high = target.truncate().next_minor() - else: # ^0.0.1 => >=0.0.1 <0.0.2 - high = target.truncate().next_patch() - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, high)] - - elif prefix == cls.PREFIX_TILDE: - assert major is not None - if minor is None: # ~1.x => >=1.0.0 <2.0.0 - high = target.next_major() - else: # ~1.2.x => >=1.2.0 <1.3.0; ~1.2.3 => >=1.2.3 <1.3.0 - high = target.next_minor() - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, high)] - - elif prefix == cls.PREFIX_EQ: - if major is None: - return [cls.range(Range.OP_GTE, target)] - elif minor is None: - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, target.next_major())] - elif patch is None: - return [cls.range(Range.OP_GTE, target), cls.range(Range.OP_LT, target.next_minor())] - else: - return [cls.range(Range.OP_EQ, target)] - - elif prefix == cls.PREFIX_GT: - assert major is not None - if minor is None: # >1.x - return [cls.range(Range.OP_GTE, target.next_major())] - elif patch is None: # >1.2.x => >=1.3.0 - return [cls.range(Range.OP_GTE, target.next_minor())] - else: - return [cls.range(Range.OP_GT, target)] - - elif prefix == cls.PREFIX_GTE: - return [cls.range(Range.OP_GTE, target)] - - elif prefix == cls.PREFIX_LT: - assert major is not None - return [cls.range(Range.OP_LT, target)] - - else: - assert prefix == cls.PREFIX_LTE - assert major is not None - if minor is None: # <=1.x => <2.0.0 - return [cls.range(Range.OP_LT, target.next_major())] - elif patch is None: # <=1.2.x => <1.3.0 - return [cls.range(Range.OP_LT, target.next_minor())] - else: - return [cls.range(Range.OP_LTE, target)] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/cli.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/cli.py deleted file mode 100644 index 1223d4977a737a249203bba6579f87558fb3e7b7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/cli.py +++ /dev/null @@ -1,311 +0,0 @@ -""" -Module version for monitoring CLI pipes (`... | python -m tqdm | ...`). 
-""" -import logging -import re -import sys -from ast import literal_eval as numeric - -from .std import TqdmKeyError, TqdmTypeError, tqdm -from .version import __version__ - -__all__ = ["main"] -log = logging.getLogger(__name__) - - -def cast(val, typ): - log.debug((val, typ)) - if " or " in typ: - for t in typ.split(" or "): - try: - return cast(val, t) - except TqdmTypeError: - pass - raise TqdmTypeError(val + ' : ' + typ) - - # sys.stderr.write('\ndebug | `val:type`: `' + val + ':' + typ + '`.\n') - if typ == 'bool': - if (val == 'True') or (val == ''): - return True - elif val == 'False': - return False - else: - raise TqdmTypeError(val + ' : ' + typ) - try: - return eval(typ + '("' + val + '")') - except Exception: - if typ == 'chr': - return chr(ord(eval('"' + val + '"'))).encode() - else: - raise TqdmTypeError(val + ' : ' + typ) - - -def posix_pipe(fin, fout, delim=b'\\n', buf_size=256, - callback=lambda float: None, callback_len=True): - """ - Params - ------ - fin : binary file with `read(buf_size : int)` method - fout : binary file with `write` (and optionally `flush`) methods. - callback : function(float), e.g.: `tqdm.update` - callback_len : If (default: True) do `callback(len(buffer))`. - Otherwise, do `callback(data) for data in buffer.split(delim)`. - """ - fp_write = fout.write - - if not delim: - while True: - tmp = fin.read(buf_size) - - # flush at EOF - if not tmp: - getattr(fout, 'flush', lambda: None)() - return - - fp_write(tmp) - callback(len(tmp)) - # return - - buf = b'' - len_delim = len(delim) - # n = 0 - while True: - tmp = fin.read(buf_size) - - # flush at EOF - if not tmp: - if buf: - fp_write(buf) - if callback_len: - # n += 1 + buf.count(delim) - callback(1 + buf.count(delim)) - else: - for i in buf.split(delim): - callback(i) - getattr(fout, 'flush', lambda: None)() - return # n - - while True: - i = tmp.find(delim) - if i < 0: - buf += tmp - break - fp_write(buf + tmp[:i + len(delim)]) - # n += 1 - callback(1 if callback_len else (buf + tmp[:i])) - buf = b'' - tmp = tmp[i + len_delim:] - - -# ((opt, type), ... ) -RE_OPTS = re.compile(r'\n {4}(\S+)\s{2,}:\s*([^,]+)') -# better split method assuming no positional args -RE_SHLEX = re.compile(r'\s*(? : \2', d) - split = RE_OPTS.split(d) - opt_types_desc = zip(split[1::3], split[2::3], split[3::3]) - d = ''.join(('\n --{0} : {2}{3}' if otd[1] == 'bool' else - '\n --{0}=<{1}> : {2}{3}').format( - otd[0].replace('_', '-'), otd[0], *otd[1:]) - for otd in opt_types_desc if otd[0] not in UNSUPPORTED_OPTS) - - help_short = "Usage:\n tqdm [--help | options]\n" - d = help_short + """ -Options: - -h, --help Print this help and exit. - -v, --version Print version and exit. 
-""" + d.strip('\n') + '\n' - - # opts = docopt(d, version=__version__) - if any(v in argv for v in ('-v', '--version')): - sys.stdout.write(__version__ + '\n') - sys.exit(0) - elif any(v in argv for v in ('-h', '--help')): - sys.stdout.write(d + '\n') - sys.exit(0) - elif argv and argv[0][:2] != '--': - sys.stderr.write(f"Error:Unknown argument:{argv[0]}\n{help_short}") - - argv = RE_SHLEX.split(' '.join(["tqdm"] + argv)) - opts = dict(zip(argv[1::3], argv[3::3])) - - log.debug(opts) - opts.pop('log', True) - - tqdm_args = {'file': fp} - try: - for (o, v) in opts.items(): - o = o.replace('-', '_') - try: - tqdm_args[o] = cast(v, opt_types[o]) - except KeyError as e: - raise TqdmKeyError(str(e)) - log.debug('args:' + str(tqdm_args)) - - delim_per_char = tqdm_args.pop('bytes', False) - update = tqdm_args.pop('update', False) - update_to = tqdm_args.pop('update_to', False) - if sum((delim_per_char, update, update_to)) > 1: - raise TqdmKeyError("Can only have one of --bytes --update --update_to") - except Exception: - fp.write("\nError:\n" + help_short) - stdin, stdout_write = sys.stdin, sys.stdout.write - for i in stdin: - stdout_write(i) - raise - else: - buf_size = tqdm_args.pop('buf_size', 256) - delim = tqdm_args.pop('delim', b'\\n') - tee = tqdm_args.pop('tee', False) - manpath = tqdm_args.pop('manpath', None) - comppath = tqdm_args.pop('comppath', None) - if tqdm_args.pop('null', False): - class stdout(object): - @staticmethod - def write(_): - pass - else: - stdout = sys.stdout - stdout = getattr(stdout, 'buffer', stdout) - stdin = getattr(sys.stdin, 'buffer', sys.stdin) - if manpath or comppath: - from importlib import resources - from os import path - from shutil import copyfile - - def cp(name, dst): - """copy resource `name` to `dst`""" - if hasattr(resources, 'files'): - copyfile(str(resources.files('tqdm') / name), dst) - else: # py<3.9 - with resources.path('tqdm', name) as src: - copyfile(str(src), dst) - log.info("written:%s", dst) - if manpath is not None: - cp('tqdm.1', path.join(manpath, 'tqdm.1')) - if comppath is not None: - cp('completion.sh', path.join(comppath, 'tqdm_completion.sh')) - sys.exit(0) - if tee: - stdout_write = stdout.write - fp_write = getattr(fp, 'buffer', fp).write - - class stdout(object): # pylint: disable=function-redefined - @staticmethod - def write(x): - with tqdm.external_write_mode(file=fp): - fp_write(x) - stdout_write(x) - if delim_per_char: - tqdm_args.setdefault('unit', 'B') - tqdm_args.setdefault('unit_scale', True) - tqdm_args.setdefault('unit_divisor', 1024) - log.debug(tqdm_args) - with tqdm(**tqdm_args) as t: - posix_pipe(stdin, stdout, '', buf_size, t.update) - elif delim == b'\\n': - log.debug(tqdm_args) - write = stdout.write - if update or update_to: - with tqdm(**tqdm_args) as t: - if update: - def callback(i): - t.update(numeric(i.decode())) - else: # update_to - def callback(i): - t.update(numeric(i.decode()) - t.n) - for i in stdin: - write(i) - callback(i) - else: - for i in tqdm(stdin, **tqdm_args): - write(i) - else: - log.debug(tqdm_args) - with tqdm(**tqdm_args) as t: - callback_len = False - if update: - def callback(i): - t.update(numeric(i.decode())) - elif update_to: - def callback(i): - t.update(numeric(i.decode()) - t.n) - else: - callback = t.update - callback_len = True - posix_pipe(stdin, stdout, delim, buf_size, callback, callback_len) diff --git a/spaces/propilot/transcribe-speech-to-text/README.md b/spaces/propilot/transcribe-speech-to-text/README.md deleted file mode 100644 index 
04ab41d61aed51382d690a924d8e436f36539f66..0000000000000000000000000000000000000000 --- a/spaces/propilot/transcribe-speech-to-text/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Text To Voice -emoji: 📚 -colorFrom: gray -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pustozerov/poc-handwriting-ocr/modules/ocr_model_en/ocr.py b/spaces/pustozerov/poc-handwriting-ocr/modules/ocr_model_en/ocr.py deleted file mode 100644 index ae03ae76bfb5b39c5df96002aac521398bfead50..0000000000000000000000000000000000000000 --- a/spaces/pustozerov/poc-handwriting-ocr/modules/ocr_model_en/ocr.py +++ /dev/null @@ -1,57 +0,0 @@ -import cv2 -import matplotlib.pyplot as plt -import numpy as np - -from modules.ocr_model_en import page, words, characters -from modules.ocr_model_en.data_helpers import idx2char -from modules.ocr_model_en.normalization import word_normalization, letter_normalization -from modules.ocr_model_en.tfhelpers import Model - -plt.rcParams['figure.figsize'] = (15.0, 10.0) -model_location = "model/char_classifier/char_classifier" -CHARACTER_MODEL = Model(model_location) - - -# Crop image and get bounding boxes -def preprocess_image(image_path): - cv2.imread(image_path) - image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB) - crop = page.detection(image) - boxes, image_with_boxes = words.detection(crop) - lines = words.sort_words(boxes) - return lines, crop, image_with_boxes - - -def recognize_char(img): - """Recognition using character model""" - # Pre-processing the word - img = word_normalization( - img, - 60, - border=False, - tilt=True, - hyst_norm=True) - - # Separate letters - img = cv2.copyMakeBorder( - img, - 0, 0, 30, 30, - cv2.BORDER_CONSTANT, - value=[0, 0, 0]) - gaps = characters.segment(img, RNN=True) - - chars = [] - for i in range(len(gaps) - 1): - char = img[:, gaps[i]:gaps[i + 1]] - char, dim = letter_normalization(char, is_thresh=True, dim=True) - # Test different values - if dim[0] > 4 and dim[1] > 4: - chars.append(char.flatten()) - - chars = np.array(chars) - word = '' - if len(chars) != 0: - pred = CHARACTER_MODEL.run(chars) - for c in pred: - word += idx2char(c) - return word diff --git a/spaces/pyodide-demo/self-hosted/scikit-image-tests.js b/spaces/pyodide-demo/self-hosted/scikit-image-tests.js deleted file mode 100644 index 27668d124d1c61fb46f36ac6ab9e6bc8b44a68d0..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/scikit-image-tests.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="scikit-image-tests.data";var REMOTE_PACKAGE_BASE="scikit-image-tests.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined 
Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new 
Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","skimage",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","_shared",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/_shared","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","color",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/color","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","data",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/data","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","draw",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/draw","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","exposure",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/exposure","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","feature",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/feature","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","filters",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/filters","rank",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/filters/rank","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/filters","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","graph",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/graph","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","io",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/io","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","measure",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/measure","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","metrics",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/metrics","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","morphology",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/morphology","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","restoration",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/restoration","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","segmentation",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/segmentation","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","transform",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/transform","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","util",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/util","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage","viewer",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/skimage/viewer","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","doc",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/doc","ext",true,true);Module["FS_create
Path"]("/lib/python3.9/site-packages/doc/ext","tests",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:503916,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1048,2346,3323,4078,5062,6018,6899,7864,9056,9977,11088,12213,13334,14293,15212,16157,17296,18163,19134,20111,21157,22136,23313,24132,25287,26277,27040,27772,28722,29668,30604,31877,32772,33780,34757,35993,37022,37885,39082,40336,41593,42405,43472,44201,45121,45775,46358,46989,47689,48219,48568,49449,49955,50423,51081,51257,51843,52595,53294,54011,54562,55129,55761,56429,57004,58016,58904,59555,60636,61348,62112,63311,64376,65267,66324,67429,68570,69633,70902,72088,73305,74226,75307,76402,77383,78387,79301,80330,81285,82528,83345,84456,85144,86067,87201,88375,89370,90107,91273,92139,93242,93871,94668,95446,96351,97471,98558,99543,100286,101410,102209,103360,104326,105228,106077,106929,107436,108577,109665,110580,111765,113050,114124,115246,116268,117123,117758,118713,119628,120391,121276,121992,122869,123836,124513,124979,125534,126354,127064,127746,128631,129836,130783,131509,132550,133852,135091,136079,136779,137585,138264,139133,140134,141242,142132,143117,144483,145568,146713,147284,148141,148972,149921,150955,151976,152572,153057,154010,154900,155772,156440,157221,158198,159049,160167,160970,161806,162445,163256,164006,164913,165587,166684,167845,169005,170341,171836,173112,174429,175768,176608,177644,178626,179775,180661,181612,182673,183717,184335,184912,185655,186541,187384,188258,189176,189978,190595,191555,192526,193549,194258,195243,196465,197800,198980,199962,200790,201566,202755,203545,204836,206037,207332,207943,208596,209670,210777,211981,213022,213951,214800,215849,217104,218339,219662,221073,222533,223246,224290,225356,226468,227484,228424,229665,230474,231431,232982,234476,235475,236637,237200,238144,239112,240206,240944,241706,242484,243624,244516,245583,246495,247626,248708,250017,251298,252370,253257,254132,255252,256209,257041,258154,259075,259978,260841,261853,263027,264015,265041,265792,266369,267240,268219,269009,269974,270876,271845,272618,273674,274778,275970,276900,277966,278757,280081,280978,281694,282834,283910,284966,285906,287001,288138,289164,290498,291590,292413,293251,293998,294795,295384,296326,297209,297884,298609,299289,300179,301140,302081,302939,303733,304519,305561,306354,307339,308611,309225,310081,310972,311838,312612,313107,313940,314946,315714,316628,317449,318420,319392,320167,320797,321486,321908,322564,323383,323790,324506,325219,326126,327004,327721,328451,328939,329959,330686,331332,332264,333254,333905,334812,335681,336661,337721,338458,339717,340813,342191,343177,344228,345179,346041,347009,347850,348828,349732,350734,351731,352452,353186,353920,354745,355743,356829,357942,359077,360038,360977,361864,362608,363596,364646,365674,366766,367994,368933,370125,371120,372345,373521,374564,375627,376726,377927,378927,380082,380964,381751,382695,383423,384376,385482,386364,387026,387608,388241,389259,389922,391110,392234,393297,394106,395016,395855,396759,397565,398210,399361,400539,401464,402131,403066,403990,404788,405828,406744,407878,408796,409540,410500,411581,412662,413589,414457,415219,416055,416931,417958,419527,420003,420696,421048,421458,421821,422201,422923,423981,424715,425590,426852,427852,428535,429198,430313,431145,432067,432817,433554,434530,435482,
436611,437675,438789,440029,440554,441575,442362,443315,444376,445063,445717,446594,447621,448612,449362,450093,451041,452390,453427,454454,455605,456757,457841,458902,459899,460824,461913,462873,463800,464463,465257,466124,467159,468240,469187,470306,470932,471955,472801,473712,474560,475438,476423,477612,478498,479401,480519,481613,482665,483233,483813,484649,485592,486455,487299,488279,489377,490270,491113,491873,492593,493395,494384,495438,496410,497720,498822,499769,500710,501734,502671,503907],sizes:[1048,1298,977,755,984,956,881,965,1192,921,1111,1125,1121,959,919,945,1139,867,971,977,1046,979,1177,819,1155,990,763,732,950,946,936,1273,895,1008,977,1236,1029,863,1197,1254,1257,812,1067,729,920,654,583,631,700,530,349,881,506,468,658,176,586,752,699,717,551,567,632,668,575,1012,888,651,1081,712,764,1199,1065,891,1057,1105,1141,1063,1269,1186,1217,921,1081,1095,981,1004,914,1029,955,1243,817,1111,688,923,1134,1174,995,737,1166,866,1103,629,797,778,905,1120,1087,985,743,1124,799,1151,966,902,849,852,507,1141,1088,915,1185,1285,1074,1122,1022,855,635,955,915,763,885,716,877,967,677,466,555,820,710,682,885,1205,947,726,1041,1302,1239,988,700,806,679,869,1001,1108,890,985,1366,1085,1145,571,857,831,949,1034,1021,596,485,953,890,872,668,781,977,851,1118,803,836,639,811,750,907,674,1097,1161,1160,1336,1495,1276,1317,1339,840,1036,982,1149,886,951,1061,1044,618,577,743,886,843,874,918,802,617,960,971,1023,709,985,1222,1335,1180,982,828,776,1189,790,1291,1201,1295,611,653,1074,1107,1204,1041,929,849,1049,1255,1235,1323,1411,1460,713,1044,1066,1112,1016,940,1241,809,957,1551,1494,999,1162,563,944,968,1094,738,762,778,1140,892,1067,912,1131,1082,1309,1281,1072,887,875,1120,957,832,1113,921,903,863,1012,1174,988,1026,751,577,871,979,790,965,902,969,773,1056,1104,1192,930,1066,791,1324,897,716,1140,1076,1056,940,1095,1137,1026,1334,1092,823,838,747,797,589,942,883,675,725,680,890,961,941,858,794,786,1042,793,985,1272,614,856,891,866,774,495,833,1006,768,914,821,971,972,775,630,689,422,656,819,407,716,713,907,878,717,730,488,1020,727,646,932,990,651,907,869,980,1060,737,1259,1096,1378,986,1051,951,862,968,841,978,904,1002,997,721,734,734,825,998,1086,1113,1135,961,939,887,744,988,1050,1028,1092,1228,939,1192,995,1225,1176,1043,1063,1099,1201,1e3,1155,882,787,944,728,953,1106,882,662,582,633,1018,663,1188,1124,1063,809,910,839,904,806,645,1151,1178,925,667,935,924,798,1040,916,1134,918,744,960,1081,1081,927,868,762,836,876,1027,1569,476,693,352,410,363,380,722,1058,734,875,1262,1e3,683,663,1115,832,922,750,737,976,952,1129,1064,1114,1240,525,1021,787,953,1061,687,654,877,1027,991,750,731,948,1349,1037,1027,1151,1152,1084,1061,997,925,1089,960,927,663,794,867,1035,1081,947,1119,626,1023,846,911,848,878,985,1189,886,903,1118,1094,1052,568,580,836,943,863,844,980,1098,893,843,760,720,802,989,1054,972,1310,1102,947,941,1024,937,1236,9],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_scikit-image-tests.data")}Module["addRunDependency"]("datafile_scikit-image-tests.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/skimage/conftest.py",start:0,end:350,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/__init__.py",start:350,end:350,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_coord.py",start:350,end:3604,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_fast_exp.py",start:3604,end:4104,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_geometry.py",start:4104,end:6244,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_interpolation.py",start:6244,end:7380,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_safe_as_int.py",start:7380,end:9052,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_testing.py",start:9052,end:12019,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_utils.py",start:12019,end:21392,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_version_requirements.py",start:21392,end:22468,audio:0},{filename:"/lib/python3.9/site-packages/skimage/_shared/tests/test_warnings.py",start:22468,end:23719,audio:0},{filename:"/lib/python3.9/site-packages/skimage/color/tests/__init__.py",start:23719,end:23719,audio:0},{filename:"/lib/python3.9/site-packages/skimage/color/tests/test_adapt_rgb.py",start:23719,end:26468,audio:0},{filename:"/lib/python3.9/site-packages/skimage/color/tests/test_colorconv.py",start:26468,end:63796,audio:0},{filename:"/lib/python3.9/site-packages/skimage/color/tests/test_colorlabel.py",start:63796,end:75071,audio:0},{filename:"/lib/python3.9/site-packages/skimage/color/tests/test_delta_e.py",start:75071,end:82219,audio:0},{filename:"/lib/python3.9/site-packages/skimage/data/tests/__init__.py",start:82219,end:82219,audio:0},{filename:"/lib/python3.9/site-packages/skimage/data/tests/test_data.py",start:82219,end:87865,audio:0},{filename:"/lib/python3.9/site-packages/skimage/draw/tests/__init__.py",start:87865,end:87865,audio:0},{filename:"/lib/python3.9/site-packages/skimage/draw/tests/test_draw.py",start:87865,end:127097,audio:0},{filename:"/lib/python3.9/site-packages/skimage/draw/tests/test_draw3d.py",start:127097,end:133799,audio:0},{filename:"/lib/python3.9/site-packages/skimage/draw/tests/test_draw_nd.py",start:133799,end:134284,audio:0},{filename:"/lib/python3.9/site-packages/skimage/draw/tests/test_polygon2mask.py",start:134284,end:134627,a
udio:0},{filename:"/lib/python3.9/site-packages/skimage/draw/tests/test_random_shapes.py",start:134627,end:140814,audio:0},{filename:"/lib/python3.9/site-packages/skimage/exposure/tests/__init__.py",start:140814,end:140814,audio:0},{filename:"/lib/python3.9/site-packages/skimage/exposure/tests/test_exposure.py",start:140814,end:169844,audio:0},{filename:"/lib/python3.9/site-packages/skimage/exposure/tests/test_histogram_matching.py",start:169844,end:174719,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/__init__.py",start:174719,end:174719,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_basic_features.py",start:174719,end:178026,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_blob.py",start:178026,end:193023,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_brief.py",start:193023,end:195872,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_canny.py",start:195872,end:201170,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_cascade.py",start:201170,end:201843,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_censure.py",start:201843,end:205620,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_corner.py",start:205620,end:229609,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_daisy.py",start:229609,end:233026,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_haar.py",start:233026,end:240650,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_hog.py",start:240650,end:252326,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_match.py",start:252326,end:259605,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_orb.py",start:259605,end:265978,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_peak.py",start:265978,end:289969,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_sift.py",start:289969,end:296265,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_template.py",start:296265,end:302437,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_texture.py",start:302437,end:315821,audio:0},{filename:"/lib/python3.9/site-packages/skimage/feature/tests/test_util.py",start:315821,end:318810,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/rank/tests/__init__.py",start:318810,end:318938,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/rank/tests/test_rank.py",start:318938,end:354139,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/__init__.py",start:354139,end:354139,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_correlate.py",start:354139,end:356130,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_edges.py",start:356130,end:376389,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_fft_based.py",start:376389,end:388778,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_gabor.py",start:388778,end:392550,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_gaussian.py",start:392550,end:399264,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_lpi_filter.py",start:399264,end:401943,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_median.py",start:40
1943,end:404111,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_ridges.py",start:404111,end:414195,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_thresholding.py",start:414195,end:441543,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_unsharp_mask.py",start:441543,end:447221,audio:0},{filename:"/lib/python3.9/site-packages/skimage/filters/tests/test_window.py",start:447221,end:448842,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/__init__.py",start:448842,end:448842,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/test_anisotropy.py",start:448842,end:450956,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/test_connect.py",start:450956,end:453384,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/test_flexible.py",start:453384,end:455081,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/test_heap.py",start:455081,end:456183,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/test_mcp.py",start:456183,end:462310,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/test_pixel_graph.py",start:462310,end:464100,audio:0},{filename:"/lib/python3.9/site-packages/skimage/graph/tests/test_spath.py",start:464100,end:464979,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/__init__.py",start:464979,end:464979,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_collection.py",start:464979,end:469603,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_colormixer.py",start:469603,end:474086,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_fits.py",start:474086,end:475003,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_histograms.py",start:475003,end:475800,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_imageio.py",start:475800,end:478088,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_imread.py",start:478088,end:479978,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_io.py",start:479978,end:483872,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_mpl_imshow.py",start:483872,end:488079,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_multi_image.py",start:488079,end:490589,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_pil.py",start:490589,end:499404,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_plugin.py",start:499404,end:501733,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_plugin_util.py",start:501733,end:503629,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_sift.py",start:503629,end:506880,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_simpleitk.py",start:506880,end:509273,audio:0},{filename:"/lib/python3.9/site-packages/skimage/io/tests/test_tifffile.py",start:509273,end:511641,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/__init__.py",start:511641,end:511641,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_block.py",start:511641,end:516024,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_blur_effect.py",start:516024,end:517809,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_ccomp.py",start:517809,end:527373,audio:0},{filename:"/lib/python3.9/site-packages/s
kimage/measure/tests/test_entropy.py",start:527373,end:527773,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_find_contours.py",start:527773,end:532997,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_fit.py",start:532997,end:549657,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_label.py",start:549657,end:551440,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_marching_cubes.py",start:551440,end:558702,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_moments.py",start:558702,end:566676,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_pnpoly.py",start:566676,end:567704,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_polygon.py",start:567704,end:569975,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_profile.py",start:569975,end:577633,audio:0},{filename:"/lib/python3.9/site-packages/skimage/measure/tests/test_regionprops.py",start:577633,end:604318,audio:0},{filename:"/lib/python3.9/site-packages/skimage/metrics/tests/__init__.py",start:604318,end:604318,audio:0},{filename:"/lib/python3.9/site-packages/skimage/metrics/tests/test_segmentation_metrics.py",start:604318,end:605974,audio:0},{filename:"/lib/python3.9/site-packages/skimage/metrics/tests/test_set_metrics.py",start:605974,end:611798,audio:0},{filename:"/lib/python3.9/site-packages/skimage/metrics/tests/test_simple_metrics.py",start:611798,end:616805,audio:0},{filename:"/lib/python3.9/site-packages/skimage/metrics/tests/test_structural_similarity.py",start:616805,end:626104,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/__init__.py",start:626104,end:626104,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_binary.py",start:626104,end:632427,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_convex_hull.py",start:632427,end:638951,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_extrema.py",start:638951,end:666187,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_flood_fill.py",start:666187,end:674399,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_footprints.py",start:674399,end:681075,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_gray.py",start:681075,end:691847,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_max_tree.py",start:691847,end:714504,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_misc.py",start:714504,end:723990,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_reconstruction.py",start:723990,end:729609,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_skeletonize.py",start:729609,end:738797,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_skeletonize_3d.py",start:738797,end:745345,audio:0},{filename:"/lib/python3.9/site-packages/skimage/morphology/tests/test_util.py",start:745345,end:749870,audio:0},{filename:"/lib/python3.9/site-packages/skimage/restoration/tests/__init__.py",start:749870,end:749870,audio:0},{filename:"/lib/python3.9/site-packages/skimage/restoration/tests/test_denoise.py",start:749870,end:798378,audio:0},{filename:"/lib/python3.9/site-packages/skimage/restoration/tests/test_inpaint.py",start:798378,end:806105,audio:0},{filename:"/lib/py
thon3.9/site-packages/skimage/restoration/tests/test_j_invariant.py",start:806105,end:809727,audio:0},{filename:"/lib/python3.9/site-packages/skimage/restoration/tests/test_restoration.py",start:809727,end:816323,audio:0},{filename:"/lib/python3.9/site-packages/skimage/restoration/tests/test_rolling_ball.py",start:816323,end:819457,audio:0},{filename:"/lib/python3.9/site-packages/skimage/restoration/tests/test_unwrap.py",start:819457,end:827922,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/__init__.py",start:827922,end:827922,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_active_contour_model.py",start:827922,end:834462,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_boundaries.py",start:834462,end:839936,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_chan_vese.py",start:839936,end:843601,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_clear_border.py",start:843601,end:850079,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_expand_labels.py",start:850079,end:856302,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_felzenszwalb.py",start:856302,end:859572,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_join.py",start:859572,end:866786,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_morphsnakes.py",start:866786,end:872722,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_quickshift.py",start:872722,end:874847,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_random_walker.py",start:874847,end:897205,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_slic.py",start:897205,end:915380,audio:0},{filename:"/lib/python3.9/site-packages/skimage/segmentation/tests/test_watershed.py",start:915380,end:939522,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/__init__.py",start:939522,end:939522,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/test_finite_radon_transform.py",start:939522,end:939838,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/test_geometric.py",start:939838,end:965667,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/test_hough_transform.py",start:965667,end:984788,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/test_integral.py",start:984788,end:987166,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/test_pyramids.py",start:987166,end:995745,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/test_radon_transform.py",start:995745,end:1014214,audio:0},{filename:"/lib/python3.9/site-packages/skimage/transform/tests/test_warps.py",start:1014214,end:1044895,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/__init__.py",start:1044895,end:1044895,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_apply_parallel.py",start:1044895,end:1050086,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_arraycrop.py",start:1050086,end:1051937,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_compare.py",start:1051937,end:1054268,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_dtype.py",start:1054268,end:1060452,audio:0},{filename:"/lib/python3.9/site-pack
ages/skimage/util/tests/test_invert.py",start:1060452,end:1062897,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_labels.py",start:1062897,end:1065107,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_map_array.py",start:1065107,end:1066943,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_montage.py",start:1066943,end:1072911,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_random_noise.py",start:1072911,end:1081089,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_regular_grid.py",start:1081089,end:1082076,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_shape.py",start:1082076,end:1087567,audio:0},{filename:"/lib/python3.9/site-packages/skimage/util/tests/test_unique_rows.py",start:1087567,end:1088684,audio:0},{filename:"/lib/python3.9/site-packages/skimage/viewer/tests/__init__.py",start:1088684,end:1088684,audio:0},{filename:"/lib/python3.9/site-packages/skimage/viewer/tests/test_plugins.py",start:1088684,end:1094414,audio:0},{filename:"/lib/python3.9/site-packages/skimage/viewer/tests/test_tools.py",start:1094414,end:1100408,audio:0},{filename:"/lib/python3.9/site-packages/skimage/viewer/tests/test_utils.py",start:1100408,end:1101565,audio:0},{filename:"/lib/python3.9/site-packages/skimage/viewer/tests/test_viewer.py",start:1101565,end:1103815,audio:0},{filename:"/lib/python3.9/site-packages/skimage/viewer/tests/test_widgets.py",start:1103815,end:1107293,audio:0},{filename:"/lib/python3.9/site-packages/doc/ext/tests/__init__.py",start:1107293,end:1107293,audio:0},{filename:"/lib/python3.9/site-packages/doc/ext/tests/test_notebook_doc.py",start:1107293,end:1107977,audio:0}],remote_package_size:508012,package_uuid:"dd208f85-6f9f-46dd-94b8-e76993b73193"})})(); \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Tellink Routel Public !LINK!.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Tellink Routel Public !LINK!.md deleted file mode 100644 index 6d173355dc95f94c23a04be97b6de25497255e43..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download Tellink Routel Public !LINK!.md +++ /dev/null @@ -1,44 +0,0 @@ -

        Download Tellink Routel Public


        Download Zip > https://geags.com/2uCqgD



              - -It can manage dial-up telephone booths, LAN to remote telephone booths, and multiple LAN to remote telephone booths. - -About - -The system was developed at the Telco Training Company (Telco) and the client was the telecom company (Telecom) of South America. - -Telco - -Founded by Roberto Nodalini and Iulo M. Montemurro, Telco is a state-owned company of Bell South (Brazil). Its mission is to provide training and consultancy services to clients (telecommunications companies and private companies). The company was created in 1977 and received its first license to do business in June 1978. - -Telco started operating with equipment acquired from several companies, some of which still exist. Its first equipment was supplied by IBM and Siemens. Later, Telco acquired equipment from AT&T and others through Unicatel. - -The company has five divisions: corporate, management, training, technical and distribution. - -The corporate division deals with the management and training of the company, and it also develops business plans and provides assistance with financial, legal and administrative matters. - -The management division provides consultancy services, and it also manages the companies and their projects. - -The technical division provides maintenance and support services to Telco's clients. - -The distribution division provides telecommunication services to Telco's clients and potential clients, and it also distributes Telco's equipment. - -Telco and Telecom are two separate companies, but they share one division: the corporate division. - -Telecom - -Telecom is the South American subsidiary of Bell South. - -The company was created in 2000. - -Telecom's main business is the sale and installation of telephone equipment (telephone network infrastructure), especially to residential and commercial clients. - -Telecom's activities are mainly in Brazilian territory; however, the company has a presence in countries such as Peru and Colombia, and is expanding its presence to countries like Chile and Argentina. - -References - -Category:Telecommunications companies of BrazilCanine lupus myelitis: case report and literature review. - -Cases of idiopathic canine lupus myelitis are very rare. A middle-aged, female boxer dog was referred with paraparesis and urinary incontinence. Magnetic resonance imaging of the brain and spinal cord demonstrated marked atrophy of the thoracolumbar and sacral regions of the spinal cord. The dog was treated with prednisolone, azathiop 4fefd39f24
      
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Easyusetool Frontend 0.5.1.4.md b/spaces/quidiaMuxgu/Expedit-SAM/Easyusetool Frontend 0.5.1.4.md deleted file mode 100644 index 6d13281bab8d30e7a3738ba43af82530e666f1e0..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Easyusetool Frontend 0.5.1.4.md +++ /dev/null @@ -1,7 +0,0 @@ -
        -

        Sticky: WINDOWS Smart utilities (ex EasyUseTools Extended) up to 1.4.2.0. Started by Silv3rSurfer, 29th December 2012 01:36 PM. https://trello.com/c/IwIkQlna/85-easyusetool-frontend-0514-rar https://trello.com/c/snF188Rk/22-hacking-windows-7-using-meterpre ter-reverse-tcp-2021

        -

Apache, like other web servers, offers a great way to expose your application to the outside world. In this modern era it is not rare to see server-side programming such as PHP or ASP.NET being used in combination with well-designed API endpoints to build a truly user-friendly web interface. If you have seen the famous Wolfram Alpha web interface, you will know that developers have been using such a combination for a long time now. But the main benefit of having a separate API endpoint is that all this exposure to the wild web is kept in a safe box for you to enjoy. If you are thinking that you are missing out on user experience by not implementing it yourself, think again! Take the example of a frontend component. Imagine the amount of time it would take to write an application like this one: it can take days, weeks, or even months to build a frontend widget that satisfies your needs. A small project like a frontend component is a poor place to learn JavaScript from scratch, so if you are lucky enough to get better exposure early in your career, think of the time saved by working with a frontend component that is better designed and more practical than anything you could come up with on your own.

        -

        easyusetool frontend 0.5.1.4


        Download === https://geags.com/2uCpYh



        -

In the case of JavaScript, it is actually quite ironic to create an API that exposes a frontend component in the first place, but this is exactly what the aforementioned library aims to do. The library claims to be an easier-to-use alternative to jQuery or Handlebars that supports the latest version of JavaScript (ES6, including arrow functions, modules, template strings, etc.). The good news is that it also offers an alternative to Handlebars' string interpolation (via the $.e.t interpolation feature), which makes it even more appealing.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Hasee Toh Phasee [NEW] Download 720p Movie).md b/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Hasee Toh Phasee [NEW] Download 720p Movie).md deleted file mode 100644 index d5a70553ad73f76a530942d9b545dbf46f0bbef0..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Hasee Toh Phasee [NEW] Download 720p Movie).md +++ /dev/null @@ -1,106 +0,0 @@ - -

        HD Online Player (Hasee Toh Phasee download 720p movie): A Guide for Bollywood Lovers

        - -

If you are a fan of Bollywood movies, you might have heard of Hasee Toh Phasee, a romantic comedy film released in 2014. The film stars Sidharth Malhotra and Parineeti Chopra as Nikhil and Meeta, two opposites who fall in love on the eve of Nikhil's engagement to Meeta's sister Karishma. The film was a critical and commercial success, earning praise for its fresh and quirky story, the chemistry between its leads, and its music by Vishal-Shekhar.

        - -

        But how can you watch Hasee Toh Phasee online in HD quality? How can you download Hasee Toh Phasee 720p movie with HD online player? And what is HD online player anyway? In this article, we will answer these questions and more, so you can enjoy this Bollywood hit at your convenience.

        -

        HD Online Player (Hasee Toh Phasee download 720p movie)


        Download File ———>>> https://geags.com/2uCrBv



        - -

        What is HD Online Player?

        - -

HD online player is software that allows you to stream or download movies and videos in high definition (HD) quality. It supports various formats, such as MP4, MKV, AVI, and MOV, and it can also provide subtitles, audio tracks, and playback options for your viewing pleasure.

        - -

        There are many HD online players available on the internet, but not all of them are reliable or safe. Some HD online players may contain malware, viruses, or ads that can harm your device or compromise your privacy. Some HD online players may also have poor quality, slow speed, or limited content.

        - -

        Therefore, you need to be careful when choosing an HD online player for watching or downloading Hasee Toh Phasee or any other movie. You need to check the reviews, ratings, features, and security of the HD online player before downloading or installing it on your device.

        - -

        How to Watch Hasee Toh Phasee Online in HD Quality?

        - -

        One of the easiest ways to watch Hasee Toh Phasee online in HD quality is to use a streaming service that offers the movie. Streaming services are platforms that allow you to watch movies and shows online without downloading them. Streaming services usually require a subscription fee or a rental fee to access their content.

        - -

        Some of the streaming services that offer Hasee Toh Phasee are Netflix, Amazon Prime Video, Apple TV, Google Play Movies, and YouTube. These streaming services have different prices, plans, and regions of availability. You need to check which streaming service is available in your country and which one has the best deal for you.

        - -

        To watch Hasee Toh Phasee online in HD quality with a streaming service, you need to have a stable internet connection, a compatible device (such as a computer, smartphone, tablet, smart TV, etc.), and an account with the streaming service. You also need to search for Hasee Toh Phasee on the streaming service's website or app and click on the play button.

        - -

        How to Download Hasee Toh Phasee 720p Movie with HD Online Player?

        - -

        If you prefer to download Hasee Toh Phasee 720p movie with HD online player instead of streaming it online, you need to follow these steps:

        -

        - -
          -
        1. Find a reliable and safe HD online player that supports downloading movies in 720p quality. You can use the search results above as a reference or do your own research.
        2. -
        3. Download and install the HD online player on your device. Make sure you have enough storage space and battery life for the download process.
        4. -
        5. Search for Hasee Toh Phasee on the HD online player's website or app. You may need to use a VPN or proxy if the movie is not available in your region.
        6. -
        7. Select the download option and choose the 720p quality. You may also need to select the subtitles and audio tracks if available.
        8. -
        9. Wait for the download to finish. You can check the progress and status of the download on the HD online player's interface.
        10. -
        11. Enjoy watching Hasee Toh Phasee offline on your device with HD online player.
        12. -
        - -


        -

        What are the Benefits of Watching Hasee Toh Phasee with HD Online Player?

        - -

        Watching Hasee Toh Phasee with HD online player has many benefits for you as a movie lover. Here are some of them:

        - -
          -
        • You can enjoy the movie in high definition (HD) quality, which means you can see every detail, color, and expression of the actors and the scenes. HD quality also enhances the sound and music of the movie, making it more immersive and enjoyable.
        • -
        • You can choose between streaming or downloading the movie, depending on your preference and internet speed. Streaming allows you to watch the movie online without downloading it, while downloading allows you to watch the movie offline without internet connection.
        • -
        • You can watch the movie on any device that supports HD online player, such as a computer, smartphone, tablet, smart TV, etc. You can also connect your device to a larger screen or a speaker system for a better viewing experience.
        • -
        • You can access subtitles and audio tracks in different languages if available, so you can understand the movie better or learn a new language. You can also adjust the playback options, such as pause, resume, rewind, fast-forward, etc.
        • -
        • You can save money and time by watching Hasee Toh Phasee with HD online player instead of going to a movie theater or buying a DVD. You can also avoid ads, interruptions, or poor quality that may occur with other sources.
        • -
        - -

        What are the Reviews of Hasee Toh Phasee?

        - -

        Hasee Toh Phasee has received positive reviews from critics and audiences alike. The film has a rating of 80% on Rotten Tomatoes based on 5 reviews and a rating of 6.8/10 on IMDb based on 15k votes. The film has also been praised by various media outlets and celebrities.

        - -

        Some of the common praises for Hasee Toh Phasee are:

        - -
          -
        • The film has a fresh and quirky story that is different from the usual Bollywood rom-coms. The film explores the relationship between two misfits who find love in each other despite their differences and challenges.
        • -
        • The film has a talented cast that delivers excellent performances. Sidharth Malhotra and Parineeti Chopra have great chemistry and charm as Nikhil and Meeta. They also portray their characters' emotions and complexities with ease and conviction.
        • -
        • The film has catchy songs and music that complement the mood and theme of the film. The songs are composed by Vishal-Shekhar and sung by various artists, such as Shreya Ghoshal, Benny Dayal, Sunidhi Chauhan, etc. The songs are also well-choreographed and picturized.
        • -
        - -

        Some of the common criticisms for Hasee Toh Phasee are:

        - -
          -
        • The film has some clichés and contrivances that are typical of the rom-com genre. The film also has some unrealistic and illogical scenes that may not appeal to everyone.
        • -
        • The film has a slow pace and a long runtime that may bore some viewers. The film also has some subplots and characters that are not well-developed or necessary.
        • -
        • The film has a weak climax and a predictable ending that may disappoint some viewers. The film also has some loose ends and unanswered questions that may leave some viewers unsatisfied.
        • -
        -

        What are the Trivia of Hasee Toh Phasee?

        - -

        Hasee Toh Phasee is a movie that has some interesting trivia behind it. Here are some of them:

        - -
          -
        • The title of the movie was originally Hasta La Vista, but it was changed to Hasee Toh Phasee for marketing reasons.
        • -
        • The movie was directed by Vinil Mathew, who made his Bollywood debut with this film. He was an ad filmmaker before he ventured into movies.
        • -
        • The writer Harshavardhan Kulkarni and the director Vinil Mathew had collaborated earlier on a Star Movies feature called The Chosen One.
        • -
        • The movie had a cameo appearance by Karan Johar, who was one of the producers of the film. He played a client who meets Nikhil in his office.
        • -
        • Parineeti Chopra received a lot of praise for her role as Meeta, a drug addict and a genius. She also won several awards and nominations for her performance.
        • -
        - -

        Why Should You Watch Hasee Toh Phasee with HD Online Player?

        - -

        Hasee Toh Phasee is a movie that you should watch with HD online player because it is a fun and entertaining film that will make you laugh and cry. Here are some reasons why you should watch Hasee Toh Phasee with HD online player:

        - -
          -
        • You will get to see the chemistry and charm of Sidharth Malhotra and Parineeti Chopra, who play Nikhil and Meeta, two misfits who fall in love on the eve of Nikhil's engagement to Meeta's sister Karishma.
        • -
        • You will get to enjoy the fresh and quirky story of Hasee Toh Phasee, which is different from the usual Bollywood rom-coms. The movie explores the relationship between two opposites who find love in each other despite their differences and challenges.
        • -
        • You will get to listen to the catchy songs and music of Hasee Toh Phasee, which are composed by Vishal-Shekhar and sung by various artists, such as Shreya Ghoshal, Benny Dayal, Sunidhi Chauhan, etc. The songs are also well-choreographed and picturized.
        • -
        • You will get to watch Hasee Toh Phasee in high definition (HD) quality, which means you can see every detail, color, and expression of the actors and the scenes. HD quality also enhances the sound and music of the movie, making it more immersive and enjoyable.
        • -
        • You will get to choose between streaming or downloading Hasee Toh Phasee with HD online player, depending on your preference and internet speed. Streaming allows you to watch the movie online without downloading it, while downloading allows you to watch the movie offline without internet connection.
        • -
        - -

        So what are you waiting for? Watch Hasee Toh Phasee with HD online player today and have a great time!

        -

        Conclusion

        - -

        Hasee Toh Phasee is a Bollywood movie that you don't want to miss if you love romantic comedies. The film has a charming story, a talented cast, and catchy songs that will make you smile and laugh. You can watch Hasee Toh Phasee online in HD quality or download Hasee Toh Phasee 720p movie with HD online player using the tips we shared in this article.

        - -

        We hope this article was helpful for you. If you have any questions or feedback about HD online player (Hasee Toh Phasee download 720p movie), feel free to leave a comment below. Thank you for reading!

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Ic3d Steel Download Included Crack Serial Keygen.md b/spaces/quidiaMuxgu/Expedit-SAM/Ic3d Steel Download Included Crack Serial Keygen.md deleted file mode 100644 index 08c252be8322848be8015bcd3b3430d4a89620f0..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Ic3d Steel Download Included Crack Serial Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Ic3d Steel Download Included Crack Serial Keygen


        Download File > https://geags.com/2uCsBD



        -
        -7 Rar is 4, release rapidshare 64bit or 64 18 from download are . ... Ic3d Steel Download Included Crack Serial 32 · Telecharger Epson Wic ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/radames/MusicGen-Continuation/audiocraft/data/audio_utils.py b/spaces/radames/MusicGen-Continuation/audiocraft/data/audio_utils.py deleted file mode 100644 index 76d4bc2a33ce722d879db2af33cd1336bd6b1fb3..0000000000000000000000000000000000000000 --- a/spaces/radames/MusicGen-Continuation/audiocraft/data/audio_utils.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import sys -import typing as tp - -import julius -import torch -import torchaudio - - -def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor: - """Convert audio to the given number of channels. - - Args: - wav (torch.Tensor): Audio wave of shape [B, C, T]. - channels (int): Expected number of channels as output. - Returns: - torch.Tensor: Downmixed or unchanged audio wave [B, C, T]. - """ - *shape, src_channels, length = wav.shape - if src_channels == channels: - pass - elif channels == 1: - # Case 1: - # The caller asked 1-channel audio, and the stream has multiple - # channels, downmix all channels. - wav = wav.mean(dim=-2, keepdim=True) - elif src_channels == 1: - # Case 2: - # The caller asked for multiple channels, but the input file has - # a single channel, replicate the audio over all channels. - wav = wav.expand(*shape, channels, length) - elif src_channels >= channels: - # Case 3: - # The caller asked for multiple channels, and the input file has - # more channels than requested. In that case return the first channels. - wav = wav[..., :channels, :] - else: - # Case 4: What is a reasonable choice here? - raise ValueError('The audio file has less channels than requested but is not mono.') - return wav - - -def convert_audio(wav: torch.Tensor, from_rate: float, - to_rate: float, to_channels: int) -> torch.Tensor: - """Convert audio to new sample rate and number of audio channels. - """ - wav = julius.resample_frac(wav, int(from_rate), int(to_rate)) - wav = convert_audio_channels(wav, to_channels) - return wav - - -def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, energy_floor: float = 2e-3): - """Normalize an input signal to a user loudness in dB LKFS. - Audio loudness is defined according to the ITU-R BS.1770-4 recommendation. - - Args: - wav (torch.Tensor): Input multichannel audio data. - sample_rate (int): Sample rate. - loudness_headroom_db (float): Target loudness of the output in dB LUFS. - loudness_compressor (bool): Uses tanh for soft clipping. - energy_floor (float): anything below that RMS level will not be rescaled. - Returns: - output (torch.Tensor): Loudness normalized output data. 
- """ - energy = wav.pow(2).mean().sqrt().item() - if energy < energy_floor: - return wav - transform = torchaudio.transforms.Loudness(sample_rate) - input_loudness_db = transform(wav).item() - # calculate the gain needed to scale to the desired loudness level - delta_loudness = -loudness_headroom_db - input_loudness_db - gain = 10.0 ** (delta_loudness / 20.0) - output = gain * wav - if loudness_compressor: - output = torch.tanh(output) - assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt()) - return output - - -def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None: - """Utility function to clip the audio with logging if specified.""" - max_scale = wav.abs().max() - if log_clipping and max_scale > 1: - clamp_prob = (wav.abs() > 1).float().mean().item() - print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):", - clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr) - wav.clamp_(-1, 1) - - -def normalize_audio(wav: torch.Tensor, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, log_clipping: bool = False, - sample_rate: tp.Optional[int] = None, - stem_name: tp.Optional[str] = None) -> torch.Tensor: - """Normalize the audio according to the prescribed strategy (see after). - - Args: - wav (torch.Tensor): Audio data. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): If True, uses tanh based soft clipping. - log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - sample_rate (int): Sample rate for the audio data (required for loudness). - stem_name (Optional[str]): Stem name for clipping logging. - Returns: - torch.Tensor: Normalized audio. - """ - scale_peak = 10 ** (-peak_clip_headroom_db / 20) - scale_rms = 10 ** (-rms_headroom_db / 20) - if strategy == 'peak': - rescaling = (scale_peak / wav.abs().max()) - if normalize or rescaling < 1: - wav = wav * rescaling - elif strategy == 'clip': - wav = wav.clamp(-scale_peak, scale_peak) - elif strategy == 'rms': - mono = wav.mean(dim=0) - rescaling = scale_rms / mono.pow(2).mean().sqrt() - if normalize or rescaling < 1: - wav = wav * rescaling - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - elif strategy == 'loudness': - assert sample_rate is not None, "Loudness normalization requires sample rate." 
- wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor) - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - else: - assert wav.abs().max() < 1 - assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'" - return wav - - -def f32_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to float 32 bits PCM format. - """ - if wav.dtype.is_floating_point: - return wav - else: - assert wav.dtype == torch.int16 - return wav.float() / 2**15 - - -def i16_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to int 16 bits PCM format. - - ..Warning:: There exist many formula for doing this convertion. None are perfect - due to the asymetry of the int16 range. One either have possible clipping, DC offset, - or inconsistancies with f32_pcm. If the given wav doesn't have enough headroom, - it is possible that `i16_pcm(f32_pcm)) != Identity`. - """ - if wav.dtype.is_floating_point: - assert wav.abs().max() <= 1 - candidate = (wav * 2 ** 15).round() - if candidate.max() >= 2 ** 15: # clipping would occur - candidate = (wav * (2 ** 15 - 1)).round() - return candidate.short() - else: - assert wav.dtype == torch.int16 - return wav diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/criteria/lpips/utils.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/criteria/lpips/utils.py deleted file mode 100644 index 3d15a0983775810ef6239c561c67939b2b9ee3b5..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/criteria/lpips/utils.py +++ /dev/null @@ -1,30 +0,0 @@ -from collections import OrderedDict - -import torch - - -def normalize_activation(x, eps=1e-10): - norm_factor = torch.sqrt(torch.sum(x ** 2, dim=1, keepdim=True)) - return x / (norm_factor + eps) - - -def get_state_dict(net_type: str = 'alex', version: str = '0.1'): - # build url - url = 'https://raw.githubusercontent.com/richzhang/PerceptualSimilarity/' \ - + f'master/lpips/weights/v{version}/{net_type}.pth' - - # download - old_state_dict = torch.hub.load_state_dict_from_url( - url, progress=True, - map_location=None if torch.cuda.is_available() else torch.device('cpu') - ) - - # rename keys - new_state_dict = OrderedDict() - for key, val in old_state_dict.items(): - new_key = key - new_key = new_key.replace('lin', '') - new_key = new_key.replace('model.', '') - new_state_dict[new_key] = val - - return new_state_dict diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Always Kabhi Kabhi Movies 1080p Torrent A Bollywood Film with a Twist.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Always Kabhi Kabhi Movies 1080p Torrent A Bollywood Film with a Twist.md deleted file mode 100644 index d5b199b24b92839f47bb48b718e31f327e4bbef4..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Always Kabhi Kabhi Movies 1080p Torrent A Bollywood Film with a Twist.md +++ /dev/null @@ -1,120 +0,0 @@ - -

        Download Always Kabhi Kabhi Movies 1080p Torrent

        -

        Are you looking for a way to download Always Kabhi Kabhi movies 1080p torrent? If yes, then you have come to the right place. In this article, we will tell you everything you need to know about this movie and how to download it from torrent sites. But first, let's find out what is Always Kabhi Kabhi and why you should watch it.

        -

        Download Always Kabhi Kabhi Movies 1080p Torrent


        Download Zip » https://tinourl.com/2uL0eU



        -

        What is Always Kabhi Kabhi?

        -

Always Kabhi Kabhi is a 2011 Hindi-language romantic comedy film directed by Roshan Abbas and produced by Shahrukh Khan under Red Chillies Entertainment. It introduced Ali Fazal, Giselli Monteiro, Harsh Nagar, Zoa Morani and Satyajeet Dubey, with Satish Shah, Lilette Dubey, Vijay Raaz, Mukesh Tiwari and Manoj Joshi playing supporting roles. The film was released on 17 June 2011. It focuses on four teenagers embarking on a dramatic journey during their incident-packed final year at school.

        -

        The plot of the movie revolves around Sameer Khanna, a jock who falls in love with Aishwarya Dhawan, a new student who wants to become a Bollywood actress. Aishwarya gets the role of Juliet in a Shakespeare play, while Sameer tries his best to learn the lines of Romeo. Meanwhile, their friends Tariq Naqvi and Nandini Oberoi have a love-hate relationship that blossoms into romance. The four face various challenges and obstacles in their personal and academic lives, while also discovering themselves and their dreams.

        -

        Why you should watch Always Kabhi Kabhi?

        -

        There are many reasons why you should watch Always Kabhi Kabhi. Here are some of them:

        -
          -
        • The genre and the theme of the movie: Always Kabhi Kabhi is a romantic comedy that deals with the issues and aspirations of the youth. It is a light-hearted and entertaining movie that will make you laugh, cry and relate to the characters. It also has a message of following your heart and living your life to the fullest.
        • -
• The production and the direction of the movie: Always Kabhi Kabhi is produced by Shahrukh Khan, one of the most popular and successful actors in Bollywood. He also makes a cameo appearance in the movie as a dancer. The movie is directed by Roshan Abbas, who is an award-winning radio jockey, TV host, writer and theatre director. He is also an alumnus of La-Martiniere College, where the movie was shot.
        • -
        • The music and the songs of the movie: Always Kabhi Kabhi has a catchy and melodious soundtrack composed by Pritam Chakraborty, Aashish Rego and Shree D. The songs are sung by various artists such as Shafqat Amanat Ali, Bhavin Dhanak, Sanah Moidutty, K.K., Sunidhi Chauhan, Shaan, Aditi Singh Sharma and others. The songs range from romantic ballads to peppy numbers that will make you groove.
        • -
        • The reviews and the ratings of the movie: Always Kabhi Kabhi has received mixed reviews from critics and audiences alike. Some have praised it for its freshness, humor and performances, while others have criticized it for its clichés, predictability and lack of depth. The movie has a rating of 4.2 out of 10 on IMDb and 3 out of 5 on Times of India.
        • -
        -

        How to download Always Kabhi Kabhi movies 1080p torrent?

        -

        If you want to download Always Kabhi Kabhi movies 1080p torrent, you need to be aware of the benefits and the risks involved in doing so.

        -

        The benefits are that you can watch the movie anytime and anywhere you want without paying any money or subscribing to any service. You can also enjoy high-quality video and audio without any buffering or interruptions.

        -

        The risks are that you may violate the copyright laws and face legal consequences if caught by authorities. You may also expose your device to malware or viruses that can harm your data or privacy. You may also encounter fake or corrupted files that can waste your time or bandwidth.

        -

        If you are willing to take these risks, then here are the steps to download Always Kabhi Kabhi movies 1080p torrent:

        -
          -
        1. Download and install a torrent client software such as BitTorrent or uTorrent on your device.
        2. -
        3. Search for Always Kabhi Kabhi movies 1080p torrent on any torrent site such as The Pirate Bay or Kickass Torrents.
        4. -
        5. Select a torrent file that has good seeds (uploaders) and peers (downloaders) ratio for faster downloading.
        6. -
        7. Open the torrent file with your torrent client software and start downloading.
        8. -
        9. Wait for the download to complete and enjoy watching Always Kabhi Kabhi movies 1080p torrent.
        10. -
        -

        The best torrent sites to download Always Kabhi Kabhi movies 1080p torrent

        -

        There are many torrent sites available on the internet that offer different types of content for downloading. However, not all of them are reliable or safe to use. Some may have low-quality files or malicious ads that can harm your device or data.

        -

        To help you find the best torrent sites to download Always Kabhi Kabhi movies 1080p torrent, we have compiled a list of some of them below:

        -

        Watch Always Kabhi Kabhi Full Movie HD Online Free
        -Always Kabhi Kabhi 2011 Hindi Movie Torrent Download
        -How to Download Always Kabhi Kabhi in 1080p Quality
        -Always Kabhi Kabhi Movie Review and Ratings
        -Best Sites to Stream Always Kabhi Kabhi Online
        -Always Kabhi Kabhi Cast and Crew Details
        -Always Kabhi Kabhi Songs and Soundtrack Download
        -Always Kabhi Kabhi Movie Subtitles in English and Hindi
        -Always Kabhi Kabhi Movie Trailer and Teaser
        -Always Kabhi Kabhi Movie Box Office Collection and Budget
        -Always Kabhi Kabhi Movie Behind the Scenes and Making
        -Always Kabhi Kabhi Movie Awards and Nominations
        -Always Kabhi Kabhi Movie Plot and Storyline
        -Always Kabhi Kabhi Movie Quotes and Dialogues
        -Always Kabhi Kabhi Movie Memes and Fan Art
        -Always Kabhi Kabhi Movie Trivia and Facts
        -Always Kabhi Kabhi Movie Analysis and Criticism
        -Always Kabhi Kabhi Movie Comparison and Contrast
        -Always Kabhi Kabhi Movie Sequel and Prequel
        -Always Kabhi Kabhi Movie Remake and Adaptation
        -Always Kabhi Kabhi Movie Genre and Theme
        -Always Kabhi Kabhi Movie Location and Setting
        -Always Kabhi Kabhi Movie Symbolism and Imagery
        -Always Kabhi Kabhi Movie Message and Moral
        -Always Kabhi Kabhi Movie References and Easter Eggs
        -Always Kabhi Kabhi Movie Controversy and Scandal
        -Always Kabhi Kabhi Movie Merchandise and Products
        -Always Kabhi Kabhi Movie Fan Club and Community
        -Always Kabhi Kabhi Movie Quiz and Test
        -Always Kabhi Kabhi Movie Wallpaper and Poster
        -Download Always kabHi kabHi Movies 1080p Torrent with VPN
        -Download AlwAys kabHi kabHi Movies 1080p Torrent for Free
        -Download AlwayS kabHi kabHi Movies 1080p Torrent Fast and Easy
        -Download AlwayS kAbHi kabHi Movies 1080p Torrent without Ads
        -Download AlwayS kaBHi kabHi Movies 1080p Torrent with Subtitles
        -Download AlwayS kabHi kAbHi Movies 1080p Torrent in Hindi Dubbed
        -Download AlwayS kabHi kaBHi Movies 1080p Torrent in High Quality
        -Download AlwayS kabHi kabHi mOvies 1080p Torrent with Magnet Link
        -Download AlwayS kabHi kabHi moVies 1080p Torrent with Direct Link
        -Download AlwayS kabHi kabHi movIes 1080p Torrent with Seeders
        -Download AlwayS kabHi kabHi moviEs 1080p Torrent with Leechers
        -Download AlwayS kabHi kabHi movieS 1080p Torrent with Peers
        -Download AlwayS kabHi kabHi movies 1o8op Torrent with File Size
        -Download AlwayS kabHi kabHi movies 10o8p Torrent with File Format
        -Download AlwayS kabHi kabHi movies 108op Torrent with File Name
        -Download AlwayS kabHi kabHi movies 108oP Torrent with File Type
        -Download AlwayS kabHi kabHi movies 1080P tOrrent with File Extension
        -Download AlwayS kabHi kabHi movies 1080P toRrent with File Source
        -Download AlwayS kabHi kabHi movies 1080P torRent with File Description
        -Download AlwayS kabHi kabHi movies 1080P torrEnt with File Hash

        - - - - - - - - - - - - - - - - - - - - - - - - - -
        Torrent SiteFeatures
        The Pirate Bay- The most popular and widely used torrent site in the world
        - Offers millions of torrents across various categories
        - Has a simple and user-friendly interface
        - Supports magnet links for easy downloading
        - Has a community forum for feedback and support
        Kickass Torrents- Another popular and trusted torrent site with a large user base
        - Offers thousands of torrents across various categories
        - Has a modern and sleek interface
        - Supports magnet links for easy downloading
        - Has a community forum for feedback and support
        RARBG- A well-known torrent site that specializes in high-quality content
        - Offers thousands of torrents across various categories
        - Has a clean and simple interface
        - Supports magnet links for easy downloading
        - Has a blog section for news and updates
        1337x- A rising torrent site that offers a variety of content
        - Offers thousands of torrents across various categories
        - Has a stylish and colorful interface
        - Supports magnet links for easy downloading
        - Has a community forum for feedback and support
        LimeTorrents- A reliable torrent site that offers verified content
        - Offers thousands of torrents across various categories
        - Has a bright and cheerful interface
        - Supports magnet links for easy downloading
        - Has an RSS feed for updates
        -

        Conclusion

        -

In conclusion, Always Kabhi Kabhi is a romantic comedy film that follows four teenagers during their final year at school. It is produced by Shahrukh Khan and directed by Roshan Abbas. It has a catchy soundtrack composed by Pritam Chakraborty and others. It has received mixed reviews from critics and audiences alike.

        If you want to watch Always Kabhi Kabhi, you can download it from torrent sites using a torrent client software. However, you need to be careful of the legal and security risks involved in doing so. You also need to choose a reliable and safe torrent site that offers high-quality files and fast downloading.

        -

        FAQs

        -

        Here are some frequently asked questions and answers related to the topic of downloading Always Kabhi Kabhi movies 1080p torrent:

        -
          -
        1. Q: Is downloading movies from torrent sites legal?
          A: Downloading movies from torrent sites may be illegal in some countries or regions, depending on the copyright laws and regulations. You may face legal consequences if you are caught by authorities or sued by the content owners. Therefore, it is advisable to check the legality of downloading movies from torrent sites in your location before doing so.
        2. -
        3. Q: How can I protect my device and data from malware or viruses when downloading movies from torrent sites?
          A: You can protect your device and data from malware or viruses when downloading movies from torrent sites by using a reputable antivirus software and a VPN service. An antivirus software can scan and remove any malicious files or programs that may infect your device or data. A VPN service can encrypt your internet traffic and hide your IP address, making it harder for hackers or trackers to access your device or data.
        4. -
        5. Q: How can I avoid fake or corrupted files when downloading movies from torrent sites?
          A: You can avoid fake or corrupted files when downloading movies from torrent sites by checking the file size, name, format, description, comments and ratings of the torrent file before downloading it. You can also use a torrent client software that has a built-in preview feature that allows you to watch a part of the movie before downloading it.
        6. -
        7. Q: How can I improve the speed and quality of downloading movies from torrent sites?
          A: You can improve the speed and quality of downloading movies from torrent sites by choosing a torrent file that has a high number of seeds (uploaders) and peers (downloaders) ratio. This means that there are more sources and options for downloading the file. You can also adjust the settings of your torrent client software to optimize the bandwidth allocation, download limit, upload limit and connection limit.
        8. -
        9. Q: How can I watch Always Kabhi Kabhi movies 1080p torrent on my TV or other devices?
          A: You can watch Always Kabhi Kabhi movies 1080p torrent on your TV or other devices by using a media player software that supports playing torrent files directly or converting them to other formats. You can also use a streaming device such as Chromecast, Roku, Fire TV Stick or Apple TV that allows you to cast or mirror your device screen to your TV.
        10. -
        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Film The Evil Cult Sub 11 The Ultimate Showdown Between the Royal Lineage and the Evil Cult.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Film The Evil Cult Sub 11 The Ultimate Showdown Between the Royal Lineage and the Evil Cult.md deleted file mode 100644 index 3e7d791dbfbd09022ff3222e1c302fbee6da50f2..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Film The Evil Cult Sub 11 The Ultimate Showdown Between the Royal Lineage and the Evil Cult.md +++ /dev/null @@ -1,70 +0,0 @@ - -

        Download Film The Evil Cult Sub 11: A Guide for Martial Arts Fans

        -

        If you are a fan of martial arts films, you may have heard of The Evil Cult, a 1993 Hong Kong fantasy film directed by Wong Jing and starring Jet Li, Sharla Cheung, Chingmy Yau and Sammo Hung. The film is based on the novel The Heaven Sword and Dragon Saber by Jin Yong, one of the most popular writers of wuxia (martial arts and chivalry) fiction. The film features a wild and rollicking story that involves prized swords, swordsmen, clans, cults, magic and flying.

        -

        In this article, we will tell you everything you need to know about The Evil Cult, why you should watch it, and how to download it with subtitles in English. Read on to find out more!

        -

        Download Film The Evil Cult Sub 11


        Downloadhttps://tinourl.com/2uL4Uz



        -

        What is The Evil Cult?

        -

The Evil Cult is a martial arts fantasy film that was released in 1993 in Hong Kong. It is also known as Lord of the Wu Tang, Kung Fu Cult Master, or Yi Tian Tu Long Ji Zhi Mo Jiao Jiao Zhu (the original Chinese title). It is the third and final film adaptation of Jin Yong's novel The Heaven Sword and Dragon Saber, which was first published in 1961 and revised in 1979.

        -

        The plot of the film

        -

        The film is set in the late Yuan dynasty (1271-1368), when China was ruled by the Mongols. The story revolves around Zhang Wuji (Jet Li), a young man who is the heir of the Ming sect, a rebel group that opposes the Mongol rule. Zhang Wuji's parents are killed by members of the six major orthodox sects, who are also after the two legendary weapons: the Heaven Sword and the Dragon Saber. Zhang Wuji is rescued by a mysterious monk named Xie Xun (Sammo Hung), who teaches him martial arts and entrusts him with the Dragon Saber.

        -

        Zhang Wuji then embarks on a series of adventures that involve various factions, such as the Shaolin Temple, the Wudang sect, the Emei sect, the Beggar Clan, the Ming sect, and the evil cults of the Golden Flower and the Heavenly Eagle. Along the way, he meets several women who fall in love with him, such as Zhao Min (Sharla Cheung), a Mongol princess; Zhou Zhiruo (Chingmy Yau), a disciple of the Emei sect; and Yin Li (Gigi Lai), a girl with a scarred face. He also faces many enemies, such as Yang Xiao (Francis Ng), a traitor of the Ming sect; Cheng Kun (Norman Chu), a Shaolin monk who betrays his own sect; and Zhang Cuishan (Lau Wing), his own uncle who covets the Dragon Saber.

        -

        The film ends with a cliffhanger, as Zhang Wuji is caught in a dilemma between choosing his love or his loyalty. He has to decide whether to marry Zhao Min or Zhou Zhiruo, and whether to help the Ming sect or the Yuan dynasty. The film was intended to have a sequel, but it was never made due to various reasons.

        -

        The cast and crew of the film

        -

        The Evil Cult was directed by Wong Jing, a prolific and controversial filmmaker who is known for his comedies, action films, and gambling films. He has worked with many famous actors, such as Chow Yun-fat, Stephen Chow, Andy Lau, Jackie Chan, and Jet Li. Wong Jing also wrote the screenplay for The Evil Cult, which deviated from the original novel in some aspects.

        -

        How to download The Evil Cult movie with subtitles
        -Watch The Evil Cult online free sub 11
        -The Evil Cult full movie download HD sub 11
        -Download The Evil Cult sub 11 torrent
        -The Evil Cult sub 11 English subtitles download
        -The Evil Cult 1993 sub 11 download link
        -The Evil Cult sub 11 Indonesian subtitles download
        -The Evil Cult sub 11 Arabic subtitles download
        -The Evil Cult sub 11 Hindi subtitles download
        -The Evil Cult sub 11 Chinese subtitles download
        -The Evil Cult sub 11 Spanish subtitles download
        -The Evil Cult sub 11 French subtitles download
        -The Evil Cult sub 11 German subtitles download
        -The Evil Cult sub 11 Japanese subtitles download
        -The Evil Cult sub 11 Korean subtitles download
        -The Evil Cult sub 11 Malay subtitles download
        -The Evil Cult sub 11 Thai subtitles download
        -The Evil Cult sub 11 Vietnamese subtitles download
        -The Evil Cult sub 11 Russian subtitles download
        -The Evil Cult sub 11 Turkish subtitles download
        -Download film Kung Fu Cult Master sub 11
        -Download film Yi Tian Tu Long Ji Zhi Mo Jiao Jiao Zhu sub 11
        -Download film Lord of the Wu Tang sub 11
        -Download film Jet Li's Kung Fu Master sub 11
        -Download film Jet Li's The Kung Fu Cult Master sub 11
        -Download film Sammo Hung's The Evil Cult sub 11
        -Download film Sharla Cheung's The Evil Cult sub 11
        -Download film Gigi Lai's The Evil Cult sub 11
        -Download film Francis Ng's The Evil Cult sub 11
        -Download film Collin Chou's The Evil Cult sub 11
        -Download film Richard Ng's The Evil Cult sub 11
        -Download film Chingmy Yau's The Evil Cult sub 11
        -Download film Cheung Man's The Evil Cult sub 11
        -Download film Zhang Tielin's The Evil Cult sub 11
        -Download film Leung Kar Yan's The Evil Cult sub 11
        -Download film Lau Shun's The Evil Cult sub 11
        -Download film Wong Jing's The Evil Cult sub 11
        -Download film Corey Yuen's The Evil Cult sub 11
        -Download film Louis Cha's The Evil Cult novel adaptation sub 11
        -Download film Jin Yong's The Heaven Sword and Dragon Saber sequel sub 11
        -Download film based on the novel by Jin Yong (Louis Cha) with subtitle number eleven
        -Where can I download the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Best sites to download the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Free and legal ways to download the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Reviews and ratings of the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Plot summary and analysis of the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Cast and crew of the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Trivia and facts about the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Awards and nominations of the movie adaptation of Jin Yong's novel with subtitle number eleven
        -Soundtrack and music of the movie adaptation of Jin Yong's novel with subtitle number eleven

        -

        The film starred Jet Li as Zhang Wuji, one of his most iconic roles. Jet Li is one of the most famous martial arts actors in the world, who has starred in many films such as Once Upon a Time in China, Fist of Legend, Hero, and Fearless. He is also known for his philanthropy and his involvement in Buddhism and Taoism.

        -

        The film also featured Sharla Cheung as Zhao Min, Chingmy Yau as Zhou Zhiruo, Gigi Lai as Yin Li, Sammo Hung as Xie Xun, Francis Ng as Yang Xiao, Norman Chu as Cheng Kun, Lau Wing as Zhang Cuishan, and many other actors who played supporting roles. Some of them were famous stars in Hong Kong cinema, while others were newcomers or veterans.

        -

        The film's action choreography was done by Sammo Hung and his team, who created many impressive fight scenes that showcased different styles of martial arts. The film also used special effects to create some fantastical elements, such as flying swords, magic spells, and wire work.

        -

        The reception and legacy of the film

        -

The Evil Cult was a commercial success in Hong Kong, where it grossed over HK$30 million at the box office. It was also well received by fans of Jin Yong's novels and martial arts films. However, it also received some criticism for its changes from the original novel.

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Steinberg.Cubase.5.1.1.PORTABLE.WiN and Get Access to Thousands of Sounds and Effects.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Steinberg.Cubase.5.1.1.PORTABLE.WiN and Get Access to Thousands of Sounds and Effects.md deleted file mode 100644 index 206501268065ed3a31b9d335561052b659424758..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Steinberg.Cubase.5.1.1.PORTABLE.WiN and Get Access to Thousands of Sounds and Effects.md +++ /dev/null @@ -1,178 +0,0 @@ - -

        Steinberg.Cubase.5.1.1.PORTABLE.WiN download pc: A Review

        -

        If you are looking for a powerful, professional, and portable digital audio workstation (DAW) for creating music on your pc, you might want to check out Steinberg.Cubase.5.1.1.PORTABLE.WiN. This is a special version of Cubase 5 that can run without installation, making it ideal for musicians who want to work on different computers or devices without hassle.

        -

        Steinberg.Cubase.5.1.1.PORTABLE.WiN download pc


        Download ··· https://tinourl.com/2uL3k1



        -

        In this article, we will review Steinberg.Cubase.5.1.1.PORTABLE.WiN and show you how to download, install, and use it for your music production needs. We will also answer some frequently asked questions about this software and give you our honest opinion on whether it is worth your time and money.

        -

        What is Steinberg Cubase 5.1.1 portable win?

        -

        A brief introduction to Cubase 5

        -

Cubase 5 is a bit dated but still an awesome and professional DAW for creating music. With its help you can write any song in any style, but I recommend using it for making the type of music the developers created it for: POLKA! Oompa-oompa-oompa! (Just kidding.)

        -

        Cubase 5 was released in 2009 by Steinberg, a German company that specializes in music software and hardware. It is one of the most popular and widely used DAWs in the world, with millions of users ranging from hobbyists to professionals.

        -

        Cubase 5 offers a comprehensive set of features and functions for recording, editing, mixing, mastering, and composing music in various formats (MIDI, WAV, AIFF, etc.). It supports VST plugins, which are additional software that can enhance the sound quality and capabilities of Cubase.

        -

        The features and benefits of Cubase 5 portable win

        -

        Steinberg.Cubase.5.1.1.PORTABLE.WiN is a special version of Cubase 5 that can run without installation on any Windows computer (7/8/10) that meets the minimum requirements (2 GHz CPU, 1024 MB RAM). This means that you can carry it on a USB flash drive or an external hard drive and use it on any compatible pc without having to install anything.

        -

This gives you several advantages over a standard installation of Cubase 5:

        -
          -
        • You can save space on your pc's hard drive by not installing the software.
        • -
        • You can avoid potential conflicts or errors with other software or drivers on your pc.
        • -
        • You can work on different pcs or devices without losing your settings or preferences.
        • -
        • You can easily backup or restore your data by copying the folder containing the software.
        • -
        • You can share or collaborate with other musicians who use Cubase by exchanging files or projects.
        • -
        -

        Of course, there are also some drawbacks to using Cubase 5 portable win:

        -
          -
        • You need to have a valid license or activation code to use the software legally.
        • -
        • You need to have a compatible audio interface or sound card to connect your instruments or microphones to your pc.
        • -
        • You need to have enough free space on your USB flash drive or external hard drive to store the software and your files.
        • -
        • You need to have a reliable power source or battery life for your pc when using the software.
        • -
        • You may experience some performance issues or glitches depending on your pc's specifications or configuration.
        • -
        -

        How to download and install Steinberg Cubase 5.1.1 portable win on your pc?

        -

        The requirements and precautions for downloading and installing Cubase 5 portable win

        -

        Before you download and install Steinberg.Cubase.5.1.1.PORTABLE.WiN on your pc, you need to make sure that you meet the following requirements:

        -
          -
        • You have a valid license or activation code for Cubase 5 that you purchased from Steinberg or an authorized dealer.
        • -
• You have a compatible Windows computer (7/8/10) that has at least a 2 GHz CPU (dual core recommended) and 1024 MB of RAM.

          The steps and tips for downloading and installing Cubase 5 portable win

          -

          There are several sources where you can download Steinberg.Cubase.5.1.1.PORTABLE.WiN, but we recommend using the one from AudioZ, which is a reliable and trusted website for music software and plugins. Here are the steps and tips for downloading and installing Cubase 5 portable win:

          -

          How to install Steinberg Cubase 5.1.1 portable on Windows
          -Steinberg Cubase 5.1.1 portable full version free download for pc
          -Best settings for Steinberg Cubase 5.1.1 portable on Windows 10
          -Steinberg Cubase 5.1.1 portable crack download for pc
          -Steinberg Cubase 5.1.1 portable vs full installation comparison
          -Where to find Steinberg Cubase 5.1.1 portable torrent download for pc
          -Steinberg Cubase 5.1.1 portable review and features
          -Steinberg Cubase 5.1.1 portable system requirements and compatibility
          -Steinberg Cubase 5.1.1 portable tips and tricks for beginners
          -Steinberg Cubase 5.1.1 portable license key generator for pc
          -How to update Steinberg Cubase 5.1.1 portable to the latest version
          -Steinberg Cubase 5.1.1 portable alternatives and competitors
          -How to fix Steinberg Cubase 5.1.1 portable errors and bugs on pc
          -Steinberg Cubase 5.1.1 portable tutorials and guides for advanced users
          -How to uninstall Steinberg Cubase 5.1.1 portable from pc
          -How to transfer Steinberg Cubase 5.1.1 portable projects and files to another pc
          -How to customize Steinberg Cubase 5.1.1 portable interface and preferences
          -How to use Steinberg Cubase 5.1.1 portable plugins and effects on pc
          -How to record and edit audio with Steinberg Cubase 5.1.1 portable on pc
          -How to create and mix music with Steinberg Cubase 5.1.1 portable on pc
          -How to export and share Steinberg Cubase 5.1.1 portable projects and files on pc
          -How to optimize Steinberg Cubase 5.1.1 portable performance and speed on pc
          -How to troubleshoot Steinberg Cubase 5.1.1 portable issues and problems on pc
          -How to backup and restore Steinberg Cubase 5.1.1 portable data and settings on pc
          -How to integrate Steinberg Cubase 5.1.1 portable with other software and hardware on pc
          -How to use Steinberg Cubase 5.1.1 portable keyboard shortcuts and commands on pc
          -How to activate Steinberg Cubase 5.1.1 portable offline mode on pc
          -How to register and login to Steinberg Cubase 5.1.1 portable online account on pc
          -How to download and install additional content for Steinberg Cubase 5.1.1 portable on pc
          -How to get help and support for Steinberg Cubase 5.1.1 portable on pc
          -How to upgrade from Steinberg Cubase 5.0 or lower versions to Steinberg Cubase 5.0 or higher versions on pc
          -How to downgrade from Steinberg Cubase 6 or higher versions to Steinberg Cubase 6 or lower versions on pc
          -How to use Steinberg Cubase 5 in different languages on pc
          -How to change the sample rate and bit depth of Steinberg Cubase 5 projects on pc
          -How to use MIDI devices and controllers with Steinberg Cubase 5 on pc
          -How to use VST instruments and samples with Steinberg Cubase 5 on pc
          -How to use automation and modulation with Steinberg Cubase 5 on pc
          -How to use quantization and groove with Steinberg Cubase 5 on pc
          -How to use time stretch and pitch shift with Steinberg Cubase 5 on pc
          -How to use audio warp and variaudio with Steinberg Cubase 5 on pc
          -How to use loopmash and beat designer with Steinberg Cubase 5 on pc
          -How to use groove agent one and halion one with Steinberg Cubase 5 on pc
          -How to use rewire and external FX with Steinberg Cubase 5 on pc
          -How to use media bay and pool with Steinberg Cubase 5 on pc
          -How to use track presets and channel strips with Steinberg Cubase 5 on pc
          -How to use surround sound and spatial panner with Steinberg Cubase 5 on pc
          -How to use score editor and notation with Steinberg Cubase 5 on pc
          -How to use video track and synchronization with Steinberg Cubase 5 on pc
          -How to use project assistant and templates with Steinberg Cubase 5 on pc

          -
            -
1. Go to the AudioZ website and search for Steinberg Cubase 5.11 WIN x86 Portable.
2. Click on the download link and enter the password (audioz.info) to access the file.
3. Extract the file using a program like WinRAR or 7-Zip.
4. Copy the folder containing the software to your USB flash drive or external hard drive.
5. Plug your USB flash drive or external hard drive into your pc and open the folder.
6. Double-click on the Cubase 5.exe file to launch the software.
7. Enter your license or activation code when prompted.
8. Enjoy using Cubase 5 portable win on your pc!

          Some tips to keep in mind when downloading and installing Cubase 5 portable win:

          -
            -
• Make sure you have enough free space on your USB flash drive or external hard drive to store the software and your files.
• Make sure you have a compatible audio interface or sound card to connect your instruments or microphones to your pc.
• Make sure you have a reliable power source or battery life for your pc when using the software.
• Do not delete or modify any files or folders in the software folder, as this may cause errors or glitches.
• Do not run any other programs or processes that may interfere with the software's performance.

          How to use Steinberg Cubase 5.1.1 portable win for creating music?

          -

          The basics and essentials of Cubase 5 interface and workflow

          -

          Cubase 5 has a user-friendly and intuitive interface that allows you to easily navigate and access its features and functions. Here are some of the basics and essentials of Cubase 5 interface and workflow:

          -
            -
• The Project Window is where you can view and edit your tracks, clips, events, parts, automation, etc. You can also access various tools, menus, and commands from here.
• The Transport Panel is where you can control the playback, recording, looping, metronome, tempo, time signature, etc. of your project. You can also access various modes, functions, and preferences from here.
• The Mixer is where you can adjust the volume, pan, mute, solo, send, insert, EQ, etc. of your tracks and channels. You can also access various effects, plugins, routing options, etc. from here.
• The MediaBay is where you can browse, preview, import, export, organize, etc. your media files (audio, MIDI, loops, presets, etc.). You can also access various libraries, databases, categories, tags, etc. from here.
• The Toolbar is where you can access various tools (pointer, scissors, glue, mute, etc.) that help you edit your project. You can also customize the toolbar by adding or removing tools according to your preference.
• The Status Line is where you can see information about your project (name, sample rate, bit depth, etc.), your cursor position (bars/beats/ticks), your selection range (start/end/length), etc.

          To create music with Cubase 5 portable win, you need to follow these general steps:

          -
            -
1. Create a new project or open an existing one.
2. Add tracks (audio or MIDI) to your project according to your needs.
3. Record or import audio or MIDI data to your tracks using your instruments or microphones or the MediaBay.
4. Edit your audio or MIDI data using the various tools and functions described above (cutting, gluing, muting, and so on).

            The advanced and creative tools and techniques of Cubase 5 for working with loops, beats, vocals, and effects

            -

            Cubase 5 portable win also provides you with a variety of advanced and creative tools and techniques for working with loops, beats, vocals, and effects. Here are some of them:

            -
              -
• LoopMash is a unique and innovative tool that allows you to create new and exciting grooves by blending and remixing multiple loops from different genres and styles. You can control the level of similarity, variation, and complexity of the resulting loop using various parameters and effects.
• VariAudio is a powerful and flexible tool that allows you to edit and manipulate the pitch, timing, and formant of monophonic vocal recordings. You can correct intonation problems, change melodies, create harmonies, or even transform vocals into synth sounds.
• Beat Designer is a handy and intuitive tool that allows you to create and edit drum patterns using a step sequencer interface. You can use the included drum kits or load your own samples, and apply various effects and groove quantization options.
• VST Expression is a revolutionary feature that allows you to control the articulation and expression of VST instruments using note expression data. You can assign different parameters (such as volume, pitch bend, vibrato, etc.) to individual notes or groups of notes, and edit them graphically in the Key Editor.
• REVerence is the first VST3 convolution reverb that allows you to create realistic and natural-sounding reverberation effects using impulse responses of real spaces or devices. You can choose from a large library of presets or import your own impulse responses.

            Conclusion

            -

            A summary of the main points and advantages of Cubase 5 portable win

            -

            In conclusion, Steinberg.Cubase.5.1.1.PORTABLE.WiN is a great option for musicians who want to create music on their pc without installing anything. It offers a comprehensive set of features and functions for recording, editing, mixing, mastering, and composing music in various formats. It also provides several advantages over the full version of Cubase 5, such as saving space, avoiding conflicts, working on different pcs or devices, backing up or restoring data easily, and sharing or collaborating with other musicians.

            -

            However, it also has some drawbacks, such as requiring a valid license or activation code, needing a compatible audio interface or sound card, having enough free space on your USB flash drive or external hard drive, having a reliable power source or battery life for your pc, and experiencing some performance issues or glitches depending on your pc's specifications or configuration.

            -

            A call to action and a recommendation for Cubase 5 portable win

            -

If you are interested in trying out Cubase 5 portable win for yourself, you can download it from AudioZ, which is a reliable and trusted website for music software and plugins. You will need a valid license or activation code for Cubase 5 that you purchased from Steinberg or an authorized dealer. You will also need a compatible Windows computer (7/8/10) with at least a 2 GHz CPU (dual core recommended), 1024 MB RAM, and a USB port or an external hard drive slot.

It is also recommended that you download the software only from a reliable and trusted source, such as AudioZ, scan it for viruses or malware before extracting it, and back up your data and settings regularly in case of any loss or damage.

        FAQs

        -

        What are the differences between Cubase 5 portable win and Cubase 5 full version?

        -

        The main difference between Cubase 5 portable win and Cubase 5 full version is that the former can run without installation on any compatible Windows computer, while the latter requires installation and activation on a specific computer. This means that Cubase 5 portable win is more flexible and convenient for working on different pcs or devices, but it may also have some limitations or drawbacks compared to the full version, such as performance issues, compatibility problems, or legal risks.

        -

        What are the advantages of using Cubase 5 portable win over other DAWs?

        -

        Cubase 5 portable win has several advantages over other DAWs, such as:

        -
          -
• It is one of the most comprehensive and professional DAWs in the market, with a wide range of features and functions for recording, editing, mixing, mastering, and composing music in various formats.
• It supports VST plugins, which are additional software that can enhance the sound quality and capabilities of Cubase.
• It has a user-friendly and intuitive interface that allows you to easily navigate and access its features and functions.
• It has a variety of advanced and creative tools and techniques for working with loops, beats, vocals, and effects, such as LoopMash, VariAudio, Beat Designer, VST Expression, and REVerence.
• It can run without installation on any compatible Windows computer, which makes it ideal for musicians who want to work on different computers or devices without hassle.

        How can I update Cubase 5 portable win to the latest version?

        -

        To update Cubase 5 portable win to the latest version, you need to follow these steps:

        -
          -
1. Go to the Steinberg website and download the latest update for Cubase 5 (5.5.3) from here: https://www.steinberg.net/en/support/downloads/cubase_5.html
2. Extract the update file using a program like WinRAR or 7-Zip.
3. Copy the folder containing the update to your USB flash drive or external hard drive where you have Cubase 5 portable win.
4. Replace the old files with the new ones in the Cubase 5 portable win folder (a small script sketch for this step follows the list).
5. Launch Cubase 5 portable win and enjoy the new features and improvements.
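If you would rather script step 4 than copy the files by hand, here is a minimal Python sketch. The folder paths are hypothetical placeholders (adjust them to wherever you extracted the update and where your portable folder actually lives); it simply overwrites old files with the updated ones while keeping the folder structure.

```python
import shutil
from pathlib import Path

# Hypothetical paths - change them to match your own drive layout.
update_dir = Path("E:/Cubase5_update")      # extracted update files
portable_dir = Path("E:/Cubase5_portable")  # existing portable folder

for src in update_dir.rglob("*"):
    if src.is_file():
        dest = portable_dir / src.relative_to(update_dir)
        dest.parent.mkdir(parents=True, exist_ok=True)  # keep the subfolder structure
        shutil.copy2(src, dest)                         # replace the old file with the new one
        print(f"updated {dest}")
```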

        How can I get more plugins and sounds for Cubase 5 portable win?

        -

        To get more plugins and sounds for Cubase 5 portable win, you can do one of the following:

        -
          -
• Browse and download free or paid VST plugins from various websites, such as VST4Free, Plugin Boutique, KVR Audio, etc. Make sure you download plugins that are compatible with Cubase 5 (32-bit) and scan them for viruses or malware before extracting them.
• Copy the plugin files (usually .dll files) to your USB flash drive or external hard drive where you have Cubase 5 portable win.
• Create a folder named "VSTPlugins" in the Cubase 5 portable win folder and paste the plugin files there (a script sketch for these two steps follows the list).
• Launch Cubase 5 portable win and go to Devices > Plugin Information > VST Plugins. Click on Update to scan for new plugins. You should see your new plugins listed there.
• Add your new plugins to your tracks or channels by clicking on an empty insert slot and selecting them from the menu.
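The plugin-copying steps can be scripted in the same way. This is only a sketch under the assumption that your downloaded 32-bit plugin .dll files sit together in one folder; the paths are hypothetical and should be adjusted to your own setup.

```python
import shutil
from pathlib import Path

# Hypothetical paths - adjust to your own folders.
downloads = Path("E:/vst_downloads")                  # extracted plugin .dll files
vst_folder = Path("E:/Cubase5_portable/VSTPlugins")   # plugin folder inside the portable copy

vst_folder.mkdir(parents=True, exist_ok=True)         # create the VSTPlugins folder if missing
for dll in downloads.glob("*.dll"):
    shutil.copy2(dll, vst_folder / dll.name)          # install the plugin into the portable folder
    print(f"installed {dll.name}")
```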

        How can I contact Steinberg for support and feedback?

        -

        If you have any questions, issues, or suggestions regarding Cubase 5 portable win or any other Steinberg product, you can contact Steinberg for support and feedback by doing one of the following:

        -
          -
• Visit their official website: https://www.steinberg.net/en/home.html
• Visit their help center: https://helpcenter.steinberg.de/hc/en-us
• Visit their forums: https://forums.steinberg.net/
• Email them: info@steinberg.de
• Call them: +49 (0)40 21035-0

References: https://audioz.download/software/win/57385-download_steinberg-cubase-511-win-x86-portable.html · http://www.vst4free.com/ · https://www.pluginboutique.com/ · https://www.kvraudio.com/

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Downloads Todos Os 640 Hinos Da Harpa Crista Encontre os Hinos Mais Populares e Inspiradores da Harpa.md b/spaces/raedeXanto/academic-chatgpt-beta/Downloads Todos Os 640 Hinos Da Harpa Crista Encontre os Hinos Mais Populares e Inspiradores da Harpa.md deleted file mode 100644 index 98e35610d3e2962325e4612aa6756be51e9f6537..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Downloads Todos Os 640 Hinos Da Harpa Crista Encontre os Hinos Mais Populares e Inspiradores da Harpa.md +++ /dev/null @@ -1,68 +0,0 @@ -
        -

Download All 640 Hymns of the Harpa Cristã

        -

Do you enjoy singing and listening to the hymns of the Harpa Cristã? Would you like access to all 640 hymns of this collection on your computer, phone, or tablet? Do you want to know more about the origin, history, importance, and contents of this musical work? Then this article is for you.

        -

In this article, you will learn what the Harpa Cristã is, how it came about, what it means, and how it is organized. You will also discover how to download all 640 hymns of the Harpa Cristã online or offline, using websites, apps, or books. In addition, you will get tips on how to make the most of the Harpa Cristã hymns for singing, playing, meditating, studying, sharing, and evangelizing.

        -

        Downloads Todos Os 640 Hinos Da Harpa Crista


        DOWNLOAD ►►►►► https://tinourl.com/2uKZ4v



        -

If you are a lover of Christian music and want to hold in your hands a treasure of praise and worship to God, keep reading this article and learn everything about the Harpa Cristã.

        -

What is the Harpa Cristã?

        -

The Harpa Cristã is the official hymnal of the Assemblies of God in Brazil. It contains 640 hymns that express the faith, doctrine, hope, and testimony of Pentecostal believers. It is used in public and home services, in Bible schools, and in moments of personal devotion.

        -

The origin and history of the Harpa Cristã

        -

The Harpa Cristã has its origins in the early twentieth century, when the Swedish missionaries Gunnar Vingren and Daniel Berg arrived in Brazil and started the Pentecostal movement among the Baptist believers of Pará. They brought with them some Swedish and American hymnals containing hymns inspired by the revival of the Holy Spirit.

        -

In 1921, the first edition of the Harpa Cristã was published under the title "Salmos e Hinos", containing 44 hymns translated from Swedish and English. In 1922, the second edition was released, with 100 hymns. In 1932, the third edition was published, with 400 hymns. In 1941, the fourth edition was released, with 524 hymns. In 1951, the fifth edition was published, with 640 hymns.

        -

Over the years, the Harpa Cristã has gone through some revisions and updates, but it has kept its number of hymns and its musical style. It has also been translated into other languages, such as Spanish, English, French, and Japanese.

        -

The importance and influence of the Harpa Cristã

        -

The Harpa Cristã is one of the most important symbols of the identity of the Assemblies of God in Brazil. It represents the doctrinal, liturgical, and spiritual unity of that denomination. It also reflects the cultural, regional, and generational diversity of its members.

        -

The Harpa Cristã is a source of inspiration for many singers, composers, musicians, and worship ministries devoted to Christian music. It is also a teaching tool for many pastors, teachers, leaders, and disciplers engaged in Christian education. It is, moreover, an evangelism resource for many missionaries, evangelists, workers, and lay people involved in the Christian mission.

        -

The structure and contents of the Harpa Cristã

        -

The Harpa Cristã is divided into ten thematic sections: Praise (hymns 1-115), Worship (hymns 116-225), Gratitude (hymns 226-285), Consecration (hymns 286-345), Petition (hymns 346-405), Experience (hymns 406-465), Christian life (hymns 466-525), Evangelism (hymns 526-585), Eschatology (hymns 586-645), and Choruses.

        -

        Harpa Cristã com áudio de todos os hinos
        -Harpa Cristã cifrada para violão
        -Harpa Cristã mp3 grátis para baixar
        -Harpa Cristã online e offline
        -Harpa Cristã hinário da Igreja Assembleia de Deus
        -Harpa Cristã cantada e instrumental
        -Harpa Cristã letras e cifras
        -Harpa Cristã em pdf para download
        -Harpa Cristã completa 640 hinos
        -Harpa Cristã devocional e versículo do dia
        -Harpa Cristã louvores evangélicos
        -Harpa Cristã hinos antigos e tradicionais
        -Harpa Cristã app para celular e tablet
        -Harpa Cristã com quiz bíblico
        -Harpa Cristã com orações e pedidos de oração
        -Harpa Cristã com Bíblia Sagrada
        -Harpa Cristã com comentários e estudos bíblicos
        -Harpa Cristã com playback e karaoke
        -Harpa Cristã com partituras e tablaturas
        -Harpa Cristã com vídeos e imagens
        -Harpa Cristã com histórias e curiosidades
        -Harpa Cristã com dicas e tutoriais
        -Harpa Cristã com rádio e podcast
        -Harpa Cristã com agenda e eventos
        -Harpa Cristã com notícias e novidades
        -Como aprender a tocar harpa cristã no violão
        -Como cantar os hinos da harpa cristã corretamente
        -Como baixar os hinos da harpa cristã em mp3
        -Como imprimir os hinos da harpa cristã em pdf
        -Como usar o app da harpa cristã no celular
        -O que significa harpa cristã e qual sua origem
        -Quem compôs os hinos da harpa cristã e quando
        -Quais são os hinos mais conhecidos da harpa cristã e por quê
        -Quais são os benefícios de ouvir e cantar a harpa cristã
        -Quais são as melhores versões e traduções da harpa cristã
        -Onde encontrar a harpa cristã completa para download grátis
        -Onde comprar a harpa cristã impressa ou digital
        -Onde assistir vídeos e lives da harpa cristã na internet
        -Onde ouvir rádios e podcasts da harpa cristã online
        -Onde participar de grupos e comunidades da harpa cristã nas redes sociais
        -Qual a diferença entre harpa cristã e hinário evangélico
        -Qual a relação entre harpa cristã e Bíblia Sagrada
        -Qual a importância da harpa cristã para a igreja evangélica brasileira
        -Qual o melhor app da harpa cristã para Android e iOS
        -Qual o melhor site da harpa cristã para acessar online

        -

Each hymn in the Harpa Cristã has a number, a title, lyrics (with verses and refrains), music (with melody and harmony), and an indication of the author or translator of the lyrics and of the composer or arranger of the music. Some hymns also include a biblical reference or a historical note.

        -

The hymns of the Harpa Cristã address various themes related to the Pentecostal Christian faith: the love of God; the work of Christ; the person of the Holy Spirit; salvation by grace; baptism in the Holy Spirit; spiritual gifts; sanctification through the Word; service to others; praise with rejoicing; prayer with fervor; testimony with power; the hope of Jesus' return; among others.

        -

How to download all 640 hymns of the Harpa Cristã?

        -

If you want to download all 640 hymns of the Harpa Cristã to your electronic device

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Ebook Materia Medika Indonesia Jilid I Rar A Unique and Valuable Collection of Indonesian Medicinal Knowledge.md b/spaces/raedeXanto/academic-chatgpt-beta/Ebook Materia Medika Indonesia Jilid I Rar A Unique and Valuable Collection of Indonesian Medicinal Knowledge.md deleted file mode 100644 index 739f633bc954819dfdceef1d192cd2d2ef1b1d39..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Ebook Materia Medika Indonesia Jilid I Rar A Unique and Valuable Collection of Indonesian Medicinal Knowledge.md +++ /dev/null @@ -1,97 +0,0 @@ - -
- Main body: describe the contents of the ebook, such as the history, sources, properties, uses, and preparations of Indonesian medicinal plants.
- Conclusion: summarize the main points and benefits of the ebook.
- H2: The history of ebook materia medika indonesia jilid i rar
  - How the ebook was developed by the Department of Health of Indonesia in 1977.
  - How the ebook was based on the traditional knowledge and research of Indonesian herbalists and pharmacists.
  - How the ebook was updated and revised over the years to include new findings and discoveries.
- H3: The sources of ebook materia medika indonesia jilid i rar
  - How the ebook covers more than 600 species of plants from various regions and habitats of Indonesia.
  - How the ebook provides scientific names, local names, synonyms, botanical descriptions, illustrations, and distribution maps of each plant.
  - How the ebook also includes information on the chemical constituents, pharmacological effects, toxicity, and dosage of each plant.
- H4: The properties of ebook materia medika indonesia jilid i rar
  - How the ebook classifies the plants according to their therapeutic actions, such as antipyretic, analgesic, anti-inflammatory, antiseptic, etc.
  - How the ebook explains the mechanisms and modes of action of each plant based on their chemical components and pharmacological studies.
  - How the ebook also indicates the contraindications, side effects, interactions, and precautions of each plant.
- H3: The uses of ebook materia medika indonesia jilid i rar
  - How the ebook provides practical guidance on how to use the plants for various diseases and conditions, such as fever, cough, diarrhea, wounds, diabetes, hypertension, etc.
  - How the ebook gives examples of formulations and prescriptions for different types of preparations, such as decoctions, infusions, powders, capsules, syrups, ointments, etc.
  - How the ebook also suggests alternative or complementary therapies for some cases, such as acupuncture, massage, yoga, etc.
- H4: The preparations of ebook materia medika indonesia jilid i rar
  - How the ebook teaches how to collect, identify, store, process, and standardize the plants for medicinal purposes.
  - How the ebook describes the methods and equipment for preparing different forms of medicines from the plants, such as extraction, filtration, evaporation, drying, grinding, etc.
  - How the ebook also specifies the quality control and safety measures for ensuring the efficacy and purity of the medicines.
- H2: The benefits of ebook materia medika indonesia jilid i rar
  - How the ebook is a valuable resource for students, teachers, researchers, practitioners, and consumers of Indonesian medicine.
  - How the ebook promotes the preservation and development of Indonesian medicinal plants and their traditional knowledge.
  - How the ebook contributes to the improvement of public health and well-being in Indonesia and beyond.

**Article with HTML formatting**

        What is ebook materia medika indonesia jilid i rar?

        -

        If you are interested in learning more about Indonesian medicine and its rich heritage of medicinal plants, you might want to check out this amazing ebook called materia medika indonesia jilid i rar. This ebook is a comprehensive and authoritative reference on Indonesian medicinal plants that covers their history, sources, properties, uses, and preparations.

        -

        In this article, we will give you an overview of what this ebook is all about, why it is important for Indonesian medicine, and how you can benefit from it.

        -

        ebook materia medika indonesia jilid i rar


        Download Ziphttps://tinourl.com/2uL1yD



        -

        The history of ebook materia medika indonesia jilid i rar

        -

        The ebook materia medika indonesia jilid i rar was first published in 1977 by the Department of Health of Indonesia as part of its efforts to promote and standardize Indonesian medicine. The ebook was based on the traditional knowledge and research of Indonesian herbalists and pharmacists who had been collecting, studying, and documenting the medicinal plants of Indonesia for centuries.

        -

        The ebook was originally written in Bahasa Indonesia, the official language of Indonesia, and consisted of five volumes that covered more than 600 species of plants. The ebook was later translated into English and made available online as a pdf file that can be downloaded for free. The ebook has also been updated and revised over the years to include new findings and discoveries on Indonesian medicinal plants.

        -

        The sources of ebook materia medika indonesia jilid i rar

        -

        The ebook materia medika indonesia jilid i rar covers a wide range of plants from various regions and habitats of Indonesia, a country that is known for its biodiversity and richness of flora. The ebook provides scientific names, local names, synonyms, botanical descriptions, illustrations, and distribution maps of each plant. The ebook also includes information on the chemical constituents, pharmacological effects, toxicity, and dosage of each plant.

        -

        The plants covered by the ebook are classified into different groups according to their families, genera, or species. Some examples of these groups are:

          -
• Acanthaceae: a family of flowering plants that includes herbs, shrubs, and trees with showy flowers. Some examples are Andrographis paniculata (sambiloto), Justicia gendarussa (gendarussa), and Strobilanthes crispus (pecah beling).
• Aristolochiaceae: a family of flowering plants that includes vines, shrubs, and herbs with distinctive pipe-shaped flowers. Some examples are Aristolochia indica (akar angin), Aristolochia tagala (akar cacing), and Thottea siliquosa (kayu rapet).
• Euphorbiaceae: a family of flowering plants that includes herbs, shrubs, trees, and succulents with milky sap. Some examples are Euphorbia hirta (patikan kebo), Jatropha curcas (jarak pagar), and Ricinus communis (jarak).
• Zingiberaceae: a family of flowering plants that includes herbs with rhizomes or tubers that are often aromatic or spicy. Some examples are Zingiber officinale (jahe), Curcuma longa (kunyit), and Kaempferia galanga (kencur).

        -

The properties of ebook materia medika indonesia jilid i rar

        -

        The ebook materia medika indonesia jilid i rar classifies the plants according to their therapeutic actions, such as antipyretic, analgesic, anti-inflammatory, antiseptic, etc. The ebook explains the mechanisms and modes of action of each plant based on their chemical components and pharmacological studies. The ebook also indicates the contraindications, side effects, interactions, and precautions of each plant.

        -

        For example, Andrographis paniculata (sambiloto) is a plant that has antipyretic, anti-inflammatory, antibacterial, antiviral, and immunomodulatory properties. It contains andrographolide, a bitter compound that inhibits the synthesis of prostaglandins and leukotrienes, which are involved in inflammation and fever. It also stimulates the production of interferon and antibodies, which enhance the immune system. However, it should not be used by pregnant women, people with bleeding disorders, or people who are allergic to it. It may also cause nausea, vomiting, diarrhea, headache, or rash in some cases.

        -

        download ebook materia medika indonesia jilid i gratis
        -materia medika indonesia jilid i pdf
        -buku materia medika indonesia jilid i departemen kesehatan ri
        -materia medika indonesia jilid i google books
        -materia medika indonesia jilid i scribd
        -materia medika indonesia jilid i sunarto prawirosujanto
        -materia medika indonesia jilid i 1977
        -materia medika indonesia jilid i direktorat jenderal pengawasan obat dan makanan
        -materia medika indonesia jilid i herbal
        -materia medika indonesia jilid i review
        -materia medika indonesia jilid i online
        -materia medika indonesia jilid i ebook free download
        -materia medika indonesia jilid i edisi terbaru
        -materia medika indonesia jilid i referensi
        -materia medika indonesia jilid i sinopsis
        -materia medika indonesia jilid i bahasa inggris
        -materia medika indonesia jilid i daftar isi
        -materia medika indonesia jilid i pengarang
        -materia medika indonesia jilid i penerbit
        -materia medika indonesia jilid i isbn
        -materia medika indonesia jilid i klasifikasi
        -materia medika indonesia jilid i abstrak
        -materia medika indonesia jilid i keyword
        -materia medika indonesia jilid i halaman
        -materia medika indonesia jilid i format rar
        -cara membuka ebook materia medika indonesia jilid i rar
        -cara mengonversi ebook materia medika indonesia jilid i rar ke pdf
        -cara mendapatkan ebook materia medika indonesia jilid i rar secara legal
        -cara membaca ebook materia medika indonesia jilid i rar di android
        -cara membaca ebook materia medika indonesia jilid i rar di laptop
        -cara mencetak ebook materia medika indonesia jilid i rar
        -cara mengutip ebook materia medika indonesia jilid i rar
        -cara menulis resensi ebook materia medika indonesia jilid i rar
        -cara menulis ringkasan ebook materia medika indonesia jilid i rar
        -cara menulis sinopsis ebook materia medika indonesia jilid i rar
        -apa itu materia medika indonesia?
        -apa itu materia medika?
        -apa itu materia?
        -apa itu ebook?
        -apa itu rar?
        -apa bedanya materia dan materi?
        -apa bedanya materia dan material?
        -apa bedanya materia dan medicina?
        -apa bedanya ebook dan buku?
        -apa bedanya ebook dan pdf?
        -apa bedanya rar dan zip?

        -

        The uses of ebook materia medika indonesia jilid i rar

        -

        The ebook materia medika indonesia jilid i rar provides practical guidance on how to use the plants for various diseases and conditions, such as fever, cough, diarrhea, wounds, diabetes, hypertension, etc. The ebook gives examples of formulations and prescriptions for different types of preparations, such as decoctions, infusions, powders, capsules, syrups, ointments, etc. The ebook also suggests alternative or complementary therapies for some cases, such as acupuncture, massage, yoga, etc.

        -

        For example, Euphorbia hirta (patikan kebo) is a plant that can be used for coughs, asthma, bronchitis, and other respiratory problems. It has expectorant, antispasmodic, and anti-asthmatic properties. It contains flavonoids, tannins, and alkaloids that relax the smooth muscles of the airways and help expel the mucus. It can be prepared as a decoction by boiling 15 grams of the dried leaves in 200 ml of water for 15 minutes and straining the liquid. The decoction can be taken three times a day, one cup each time. Alternatively, it can be combined with other herbs, such as ginger, turmeric, and honey, to enhance its effects.

        -

        The preparations of ebook materia medika indonesia jilid i rar

        -

        The ebook materia medika indonesia jilid i rar teaches how to collect, identify, store, process, and standardize the plants for medicinal purposes. The ebook describes the methods and equipment for preparing different forms of medicines from the plants, such as extraction, filtration, evaporation, drying, grinding, etc. The ebook also specifies the quality control and safety measures for ensuring the efficacy and purity of the medicines.

        -

        For example, Zingiber officinale (jahe) is a plant that can be used for various purposes, such as stimulating digestion, relieving nausea, reducing inflammation, and warming the body. It has carminative, antiemetic, anti-inflammatory, and thermogenic properties. It contains gingerols, shogaols, and zingerone that activate the receptors for heat and pain in the body and modulate the production of prostaglandins and cytokines. It can be prepared as a powder by drying and grinding the rhizomes of the plant. The powder can be stored in an airtight container for up to six months. The powder can be used to make tea, capsules, or syrups.

        -

        The benefits of ebook materia medika indonesia jilid i rar

        -

        The ebook materia medika indonesia jilid i rar is a valuable resource for students, teachers, researchers, practitioners, and consumers of Indonesian medicine. The ebook promotes the preservation and development of Indonesian medicinal plants and their traditional knowledge. The ebook contributes to the improvement of public health and well-being in Indonesia and beyond.

        -

        Some of the benefits of the ebook are:

          -
• It provides reliable and comprehensive information on Indonesian medicinal plants that can be used for various purposes.
• It helps to preserve and promote the cultural heritage and biodiversity of Indonesia.
• It supports scientific research and innovation on Indonesian medicinal plants and their potential applications.
• It enhances the quality and safety of Indonesian medicine by providing standards and guidelines for its production and use.
• It empowers people to use natural and affordable remedies for their health problems.

        -

        Conclusion

        -

        In conclusion, the ebook materia medika indonesia jilid i rar is an amazing ebook that covers everything you need to know about Indonesian medicinal plants. It is a comprehensive and authoritative reference that provides information on the history, sources, properties, uses, and preparations of more than 600 species of plants. It is a valuable resource for anyone who is interested in learning more about Indonesian medicine and its rich heritage of medicinal plants. It is also a useful tool for improving public health and well-being in Indonesia and beyond.

        -

        FAQs

        -

        Here are some frequently asked questions about the ebook materia medika indonesia jilid i rar:

        -
          -
1. Where can I download the ebook?
   You can download the ebook for free from this link: https://epdfx.com/download/materia-medika-indonesia_58ac3e0f6454a73a78b1fa53_pdf
2. What format is the ebook in?
   The ebook is in pdf format that can be opened with any pdf reader software or app.
3. How many pages does the ebook have?
   The ebook has 192 pages in total.
4. Is the ebook available in other languages?
   The ebook is available in Bahasa Indonesia and English. You can choose your preferred language from the menu bar at the top of the pdf file.
5. How can I cite the ebook?
   You can cite the ebook using this format: Prawirosujanto S., et al. (1977). Materia medika Indonesia. Departemen Kesehatan R.I.

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Ex4 decompiler v2 2015 cracked The best solution for recovering lost or corrupted ex4 files.md b/spaces/raedeXanto/academic-chatgpt-beta/Ex4 decompiler v2 2015 cracked The best solution for recovering lost or corrupted ex4 files.md deleted file mode 100644 index a440b33ae1683297f7be225f48f7e0228918db1f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Ex4 decompiler v2 2015 cracked The best solution for recovering lost or corrupted ex4 files.md +++ /dev/null @@ -1,107 +0,0 @@ - -

        Ex4 Decompiler V2 2015 Cracked: What Is It and How to Use It?

        -

        If you are a trader or a programmer who uses MetaTrader 4 (MT4) platform, you may have encountered ex4 files. These are executable files that contain compiled MQL4 code, which is the programming language used to create trading robots, indicators and scripts for MT4. Ex4 files are usually encrypted and protected from being modified or reverse engineered by others.

        -

        ex4 decompiler v2 2015 cracked


        DOWNLOADhttps://tinourl.com/2uL4tc



        -

        However, sometimes you may need to access the source code of an ex4 file, either because you want to modify it, learn from it or fix it. This is where ex4 decompiler comes in handy. An ex4 decompiler is a software that can convert an ex4 file back into an mq4 file, which is the source code file that can be edited in MetaEditor or other text editors.

        -

        In this article, we will introduce you to one of the most popular and powerful ex4 decompilers available on the market: ex4 decompiler v2 2015 cracked. We will explain what it is, how it works, how to download and install it, how to use it and what are its pros and cons. By the end of this article, you will have a clear idea of whether this software is suitable for your needs and how to get started with it.

        -

        Features of Ex4 Decompiler V2 2015 Cracked

        -

        Ex4 decompiler v2 2015 cracked is a software that can decompile any ex4 file into an mq4 file with high accuracy and speed. It has several features that make it stand out from other ex4 decompilers on the market. Here are some of them:

        -
          -
• It can decompile both protected and broken ex4 files. Some ex4 files are protected by encryption or obfuscation techniques that make them harder to decompile. Some ex4 files are broken due to errors or corruption that make them impossible to run on MT4. Ex4 decompiler v2 2015 cracked can handle both types of files and recover their source code without any problem.
• It can support MT4 225 build and higher. MT4 is constantly updated by its developer, MetaQuotes Software Corp., which means that new features and changes are introduced in each new build. Some ex4 decompilers may not be compatible with the latest builds of MT4 and may fail to decompile some ex4 files. Ex4 decompiler v2 2015 cracked can support any build of MT4 from 225 onwards, which covers most of the ex4 files available today.
• It can protect MQL compiled files from being decompiled by others. If you are a programmer who creates your own ex4 files, you may want to protect them from being stolen or copied by others. Ex4 decompiler v2 2015 cracked can help you with that by adding a layer of protection to your MQL compiled files that prevents them from being decompiled by other ex4 decompilers. This way, you can ensure that your intellectual property is safe and secure.

        How to Download and Install Ex4 Decompiler V2 2015 Cracked

        -

        If you are interested in trying out ex4 decompiler v2 2015 cracked, you will need to download and install it on your computer. Here are the steps you need to follow:

        -
          -
1. Go to this website where you can find the download link for ex4 decompiler v2 2015 cracked. You will need to complete a short survey or offer before you can access the link.
2. Once you have completed the survey or offer, you will be redirected to a page where you can download a zip file containing the software and the crack file.
3. Extract the zip file using WinRAR or any other extraction tool. You will see two folders: one named "Exe" and one named "Crack".
4. Open the "Exe" folder and run the setup file named "ex42mq42.exe". Follow the instructions on the screen to install the software on your computer.
5. Open the "Crack" folder and copy the file named "ex42mq42.dll". Paste it into the installation folder of ex42mq42.exe, which is usually located at C:\Program Files (x86)\ex42mq42\. Replace the original dll file with the cracked one.
6. Now you have successfully installed ex42mq42.exe with crack on your computer. You can launch it from your desktop or start menu.
7. To verify that the software is working properly, you can open it and check if it shows "Registered Version" at the top right corner of its window. If it does, then you have successfully activated the software with crack.

How to Use Ex4 Decompiler V2 2015 Cracked

        -

Now that you have installed ex42mq42.exe on your computer, you can start using it to decompile any ex4 file into an mq4 file. Here are the steps you need to follow:

        -
          -
1. Launch ex42mq42.exe from your desktop or start menu.
2. Select File > Open File... from its menu bar or click on the open icon on its toolbar.
3. Browse your computer folders and locate the ex4 file that you want to decompile. Select it and click Open.
4. The software will start analyzing and decompiling your ex4 file. This may take some time depending on the size and complexity of your file.
5. Once the process is completed, you will see a message saying "Decompilation finished successfully!" at the bottom left corner of its window.
6. Select File > Save As... from its menu bar or click on the save icon on its toolbar.
7. Browse your computer folders and choose a destination folder where you want to save your mq4 file. Enter a name for your file and click Save.
8. The software will save your mq4 file in your chosen folder. You can now open it with MetaEditor or any other text editor.

Pros and Cons of Ex4 Decompiler V2 2015 Cracked

        -

Ex4 decompiler v2 2015 cracked is a powerful tool that can help you access the source code of any ex4 file with ease. However, it also has some drawbacks that you should be aware of before using it. Here are some of them:

        -
          -
• It may violate the legal rights of the original developers of the ex4 files. Some ex4 files are protected by intellectual property laws that prohibit unauthorized copying, modification or distribution of their code. Decompiling such files may infringe on these rights and expose you to legal consequences. You should always respect the EULA (End User License Agreement) of the binary you wish to work with and obtain permission from the developers if possible.
• It may raise ethical concerns about the fairness and honesty of your actions. Some ex4 files are created by hard-working programmers who spend a lot of time and effort to develop their products. Decompiling their files without their consent may be considered as stealing or cheating by some people. You should always acknowledge the original source of the code and give credit where it is due.
• It may pose potential risks to your computer and your trading account. Some ex4 files may contain malicious code that can harm your computer or compromise your trading account. Decompiling such files may activate or expose this code and cause unwanted damage or loss. You should always scan the files with a reliable antivirus software and test them on a demo account before using them on a live account.

        Conclusion

        -

        In conclusion, ex4 decompiler v2 2015 cracked is a powerful software that can decompile any ex4 file into an mq4 file with high accuracy and speed. It has several features that make it stand out from other ex4 decompilers on the market, such as decompiling both protected and broken ex4 files, supporting MT4 225 build and higher, and protecting MQL compiled files from being decompiled by others. However, it also has some drawbacks that you should be aware of before using it, such as violating the legal rights of the original developers of the ex4 files, raising ethical concerns about the fairness and honesty of your actions, and posing potential risks to your computer and your trading account. Therefore, you should use this software with caution and responsibility, and always respect the EULA (End User License Agreement) of the binary you wish to work with.

        -

        FAQs

        -

        Here are some frequently asked questions and their answers about ex4 decompiler v2 2015 cracked:

        -
          -
1. Q: Where can I download ex4 decompiler v2 2015 cracked?
   A: You can download it from this website, but you will need to complete a short survey or offer before you can access the link.
2. Q: How can I install ex4 decompiler v2 2015 cracked?
   A: You will need to extract the zip file containing the software and the crack file, run the setup file named "ex42mq42.exe", copy the file named "ex42mq42.dll" from the "Crack" folder into the installation folder of ex42mq42.exe, and verify that the software shows "Registered Version" at the top right corner of its window.
3. Q: How can I use ex4 decompiler v2 2015 cracked?
   A: You will need to launch ex42mq42.exe from your desktop or start menu, select File > Open File... from its menu bar or click on the open icon on its toolbar, browse your computer folders and locate the ex4 file that you want to decompile, select it and click Open, wait for the process to complete, select File > Save As... from its menu bar or click on the save icon on its toolbar, choose a destination folder where you want to save your mq4 file, enter a name for your file and click Save, and open it with MetaEditor or any other text editor.
4. Q: Is ex4 decompiler v2 2015 cracked legal?
   A: It depends on the EULA (End User License Agreement) of the binary you wish to work with. Some EULAs may allow you to edit/decompile the binary, while others may prohibit it. Decompiling an ex4 file without permission from its developer may infringe on their intellectual property rights and expose you to legal consequences. You should always respect the EULA of the binary you wish to work with and obtain permission from its developer if possible.
5. Q: Is ex4 decompiler v2 2015 cracked ethical?
   A: It depends on your perspective and intention. Some people may consider it as a useful tool for learning, modifying or fixing an ex4 file, while others may consider it as a form of stealing or cheating. Decompiling an ex4 file without consent from its developer may be seen as disrespectful or dishonest by some people. You should always acknowledge the original source of the code and give credit where it is due.

        -

        ex4 to mq4 decompiler v2 2015 cracked version
        -how to decompile ex4 files v2 2015 crack
        -ex4 decompiler v2 2015 free download with crack
        -ex4 decompiler v2 2015 full version cracked
        -ex4 decompiler v2 2015 crack serial key
        -ex4 decompiler v2 2015 crack license code
        -ex4 decompiler v2 2015 crack activation key
        -ex4 decompiler v2 2015 crack patch
        -ex4 decompiler v2 2015 crack keygen
        -ex4 decompiler v2 2015 crack torrent
        -ex4 decompiler v2 2015 crack online
        -ex4 decompiler v2 2015 crack for windows
        -ex4 decompiler v2 2015 crack for mac
        -ex4 decompiler v2 2015 crack for linux
        -ex4 decompiler v2 2015 crack for android
        -ex4 decompiler v2 2015 crack for ios
        -ex4 decompiler v2 2015 crack for mt4
        -ex4 decompiler v2 2015 crack for mt5
        -ex4 decompiler v2 2015 crack for metatrader
        -ex4 decompiler v2 2015 crack for forex
        -ex4 decompiler v2 2015 crack for ea
        -ex4 decompiler v2 2015 crack for indicator
        -ex4 decompiler v2 2015 crack for script
        -ex4 decompiler v2 2015 crack for expert advisor
        -ex4 decompiler v2 2015 crack for trading system
        -ex4 decompiler v2 2015 cracked by bionic forex
        -ex4 decompiler v2 2015 cracked by purebeam team
        -ex4 decompiler v2 2015 cracked by forex tester software
        -ex4 decompiler v2 2015 cracked by forex zombi team
        -ex4 decompiler v2 2015 cracked by forex holy grail system
        -best ex4 decompiler v2 2015 cracked software
        -latest ex4 decompiler v2 2015 cracked software
        -updated ex4 decompiler v2 2015 cracked software
        -working ex4 decompiler v2 2015 cracked software
        -reliable ex4 decompiler v2 2015 cracked software
        -safe ex4 decompiler v2 2015 cracked software
        -secure ex4 decompiler v2 2015 cracked software
        -legit ex4 decompiler v2 2015 cracked software
        -trusted ex4 decompiler v2 2015 cracked software
        -verified ex4 decompiler v2 2015 cracked software
        -review of ex4 decompiler v2 2015 cracked software
        -comparison of ex4 decompiler v2 2015 cracked software
        -benefits of ex4 decompiler v2 2015 cracked software
        -features of ex4 decompiler v2 2015 cracked software
        -advantages of ex4 decompiler v2 2015 cracked software
        -disadvantages of ex4 decompiler v2 2015 cracked software
        -pros and cons of ex4 decompiler v2 2015 cracked software
        -testimonials of ex4 decompiler v2 2015 cracked software
        -feedback of ex4 decompiler v2 2015 cracked software
        -ratings of ex4 decompiler v2 2015 cracked software

        -
        -
        \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/hifi_gan/env.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/hifi_gan/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/src/hifi_gan/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/raphael-gl/ai-days-subtitles-demo/README.md b/spaces/raphael-gl/ai-days-subtitles-demo/README.md deleted file mode 100644 index 7e8c0dfaf13dbe49dfa5e9b028c564c33ea52c85..0000000000000000000000000000000000000000 --- a/spaces/raphael-gl/ai-days-subtitles-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ai Days Subtitles Demo -emoji: 👁 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/http2.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/http2.d.ts deleted file mode 100644 index 0e3682609f32c1783ba84ea2331f7197526a1cc9..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/http2.d.ts +++ /dev/null @@ -1,2134 +0,0 @@ -/** - * The `http2` module provides an implementation of the [HTTP/2](https://tools.ietf.org/html/rfc7540) protocol. 
It - * can be accessed using: - * - * ```js - * const http2 = require('http2'); - * ``` - * @since v8.4.0 - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/http2.js) - */ -declare module 'http2' { - import EventEmitter = require('node:events'); - import * as fs from 'node:fs'; - import * as net from 'node:net'; - import * as stream from 'node:stream'; - import * as tls from 'node:tls'; - import * as url from 'node:url'; - import { IncomingHttpHeaders as Http1IncomingHttpHeaders, OutgoingHttpHeaders, IncomingMessage, ServerResponse } from 'node:http'; - export { OutgoingHttpHeaders } from 'node:http'; - export interface IncomingHttpStatusHeader { - ':status'?: number | undefined; - } - export interface IncomingHttpHeaders extends Http1IncomingHttpHeaders { - ':path'?: string | undefined; - ':method'?: string | undefined; - ':authority'?: string | undefined; - ':scheme'?: string | undefined; - } - // Http2Stream - export interface StreamPriorityOptions { - exclusive?: boolean | undefined; - parent?: number | undefined; - weight?: number | undefined; - silent?: boolean | undefined; - } - export interface StreamState { - localWindowSize?: number | undefined; - state?: number | undefined; - localClose?: number | undefined; - remoteClose?: number | undefined; - sumDependencyWeight?: number | undefined; - weight?: number | undefined; - } - export interface ServerStreamResponseOptions { - endStream?: boolean | undefined; - waitForTrailers?: boolean | undefined; - } - export interface StatOptions { - offset: number; - length: number; - } - export interface ServerStreamFileResponseOptions { - statCheck?(stats: fs.Stats, headers: OutgoingHttpHeaders, statOptions: StatOptions): void | boolean; - waitForTrailers?: boolean | undefined; - offset?: number | undefined; - length?: number | undefined; - } - export interface ServerStreamFileResponseOptionsWithError extends ServerStreamFileResponseOptions { - onError?(err: NodeJS.ErrnoException): void; - } - export interface Http2Stream extends stream.Duplex { - /** - * Set to `true` if the `Http2Stream` instance was aborted abnormally. When set, - * the `'aborted'` event will have been emitted. - * @since v8.4.0 - */ - readonly aborted: boolean; - /** - * This property shows the number of characters currently buffered to be written. - * See `net.Socket.bufferSize` for details. - * @since v11.2.0, v10.16.0 - */ - readonly bufferSize: number; - /** - * Set to `true` if the `Http2Stream` instance has been closed. - * @since v9.4.0 - */ - readonly closed: boolean; - /** - * Set to `true` if the `Http2Stream` instance has been destroyed and is no longer - * usable. - * @since v8.4.0 - */ - readonly destroyed: boolean; - /** - * Set to `true` if the `END_STREAM` flag was set in the request or response - * HEADERS frame received, indicating that no additional data should be received - * and the readable side of the `Http2Stream` will be closed. - * @since v10.11.0 - */ - readonly endAfterHeaders: boolean; - /** - * The numeric stream identifier of this `Http2Stream` instance. Set to `undefined`if the stream identifier has not yet been assigned. - * @since v8.4.0 - */ - readonly id?: number | undefined; - /** - * Set to `true` if the `Http2Stream` instance has not yet been assigned a - * numeric stream identifier. 
- * @since v9.4.0 - */ - readonly pending: boolean; - /** - * Set to the `RST_STREAM` `error code` reported when the `Http2Stream` is - * destroyed after either receiving an `RST_STREAM` frame from the connected peer, - * calling `http2stream.close()`, or `http2stream.destroy()`. Will be`undefined` if the `Http2Stream` has not been closed. - * @since v8.4.0 - */ - readonly rstCode: number; - /** - * An object containing the outbound headers sent for this `Http2Stream`. - * @since v9.5.0 - */ - readonly sentHeaders: OutgoingHttpHeaders; - /** - * An array of objects containing the outbound informational (additional) headers - * sent for this `Http2Stream`. - * @since v9.5.0 - */ - readonly sentInfoHeaders?: OutgoingHttpHeaders[] | undefined; - /** - * An object containing the outbound trailers sent for this `HttpStream`. - * @since v9.5.0 - */ - readonly sentTrailers?: OutgoingHttpHeaders | undefined; - /** - * A reference to the `Http2Session` instance that owns this `Http2Stream`. The - * value will be `undefined` after the `Http2Stream` instance is destroyed. - * @since v8.4.0 - */ - readonly session: Http2Session; - /** - * Provides miscellaneous information about the current state of the`Http2Stream`. - * - * A current state of this `Http2Stream`. - * @since v8.4.0 - */ - readonly state: StreamState; - /** - * Closes the `Http2Stream` instance by sending an `RST_STREAM` frame to the - * connected HTTP/2 peer. - * @since v8.4.0 - * @param [code=http2.constants.NGHTTP2_NO_ERROR] Unsigned 32-bit integer identifying the error code. - * @param callback An optional function registered to listen for the `'close'` event. - */ - close(code?: number, callback?: () => void): void; - /** - * Updates the priority for this `Http2Stream` instance. - * @since v8.4.0 - */ - priority(options: StreamPriorityOptions): void; - /** - * ```js - * const http2 = require('http2'); - * const client = http2.connect('http://example.org:8000'); - * const { NGHTTP2_CANCEL } = http2.constants; - * const req = client.request({ ':path': '/' }); - * - * // Cancel the stream if there's no activity after 5 seconds - * req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL)); - * ``` - * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - /** - * Sends a trailing `HEADERS` frame to the connected HTTP/2 peer. This method - * will cause the `Http2Stream` to be immediately closed and must only be - * called after the `'wantTrailers'` event has been emitted. When sending a - * request or sending a response, the `options.waitForTrailers` option must be set - * in order to keep the `Http2Stream` open after the final `DATA` frame so that - * trailers can be sent. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond(undefined, { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ xyz: 'abc' }); - * }); - * stream.end('Hello World'); - * }); - * ``` - * - * The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header - * fields (e.g. `':method'`, `':path'`, etc). 
- * @since v10.0.0 - */ - sendTrailers(headers: OutgoingHttpHeaders): void; - addListener(event: 'aborted', listener: () => void): this; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - addListener(event: 'drain', listener: () => void): this; - addListener(event: 'end', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: 'finish', listener: () => void): this; - addListener(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - addListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - addListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - addListener(event: 'streamClosed', listener: (code: number) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'wantTrailers', listener: () => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'aborted'): boolean; - emit(event: 'close'): boolean; - emit(event: 'data', chunk: Buffer | string): boolean; - emit(event: 'drain'): boolean; - emit(event: 'end'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: 'finish'): boolean; - emit(event: 'frameError', frameType: number, errorCode: number): boolean; - emit(event: 'pipe', src: stream.Readable): boolean; - emit(event: 'unpipe', src: stream.Readable): boolean; - emit(event: 'streamClosed', code: number): boolean; - emit(event: 'timeout'): boolean; - emit(event: 'trailers', trailers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'wantTrailers'): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'aborted', listener: () => void): this; - on(event: 'close', listener: () => void): this; - on(event: 'data', listener: (chunk: Buffer | string) => void): this; - on(event: 'drain', listener: () => void): this; - on(event: 'end', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: 'finish', listener: () => void): this; - on(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - on(event: 'pipe', listener: (src: stream.Readable) => void): this; - on(event: 'unpipe', listener: (src: stream.Readable) => void): this; - on(event: 'streamClosed', listener: (code: number) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'wantTrailers', listener: () => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'aborted', listener: () => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'data', listener: (chunk: Buffer | string) => void): this; - once(event: 'drain', listener: () => void): this; - once(event: 'end', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: 'finish', listener: () => void): this; - once(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - once(event: 'pipe', listener: (src: stream.Readable) => void): this; - once(event: 'unpipe', listener: (src: stream.Readable) => void): this; - once(event: 'streamClosed', listener: (code: number) => void): 
this; - once(event: 'timeout', listener: () => void): this; - once(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'wantTrailers', listener: () => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'aborted', listener: () => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependListener(event: 'drain', listener: () => void): this; - prependListener(event: 'end', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: 'finish', listener: () => void): this; - prependListener(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - prependListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependListener(event: 'streamClosed', listener: (code: number) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'wantTrailers', listener: () => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'aborted', listener: () => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependOnceListener(event: 'drain', listener: () => void): this; - prependOnceListener(event: 'end', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: 'finish', listener: () => void): this; - prependOnceListener(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - prependOnceListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: 'streamClosed', listener: (code: number) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'wantTrailers', listener: () => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface ClientHttp2Stream extends Http2Stream { - addListener(event: 'continue', listener: () => {}): this; - addListener(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - addListener(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'continue'): boolean; - emit(event: 'headers', headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number): boolean; - emit(event: 'push', headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'response', headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number): 
boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'continue', listener: () => {}): this; - on(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - on(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'continue', listener: () => {}): this; - once(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - once(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'continue', listener: () => {}): this; - prependListener(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependListener(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'continue', listener: () => {}): this; - prependOnceListener(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependOnceListener(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface ServerHttp2Stream extends Http2Stream { - /** - * True if headers were sent, false otherwise (read-only). - * @since v8.4.0 - */ - readonly headersSent: boolean; - /** - * Read-only property mapped to the `SETTINGS_ENABLE_PUSH` flag of the remote - * client's most recent `SETTINGS` frame. Will be `true` if the remote peer - * accepts push streams, `false` otherwise. Settings are the same for every`Http2Stream` in the same `Http2Session`. - * @since v8.4.0 - */ - readonly pushAllowed: boolean; - /** - * Sends an additional informational `HEADERS` frame to the connected HTTP/2 peer. - * @since v8.4.0 - */ - additionalHeaders(headers: OutgoingHttpHeaders): void; - /** - * Initiates a push stream. The callback is invoked with the new `Http2Stream`instance created for the push stream passed as the second argument, or an`Error` passed as the first argument. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond({ ':status': 200 }); - * stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => { - * if (err) throw err; - * pushStream.respond({ ':status': 200 }); - * pushStream.end('some pushed data'); - * }); - * stream.end('some data'); - * }); - * ``` - * - * Setting the weight of a push stream is not allowed in the `HEADERS` frame. 
Pass - * a `weight` value to `http2stream.priority` with the `silent` option set to`true` to enable server-side bandwidth balancing between concurrent streams. - * - * Calling `http2stream.pushStream()` from within a pushed stream is not permitted - * and will throw an error. - * @since v8.4.0 - * @param callback Callback that is called once the push stream has been initiated. - */ - pushStream(headers: OutgoingHttpHeaders, callback?: (err: Error | null, pushStream: ServerHttp2Stream, headers: OutgoingHttpHeaders) => void): void; - pushStream(headers: OutgoingHttpHeaders, options?: StreamPriorityOptions, callback?: (err: Error | null, pushStream: ServerHttp2Stream, headers: OutgoingHttpHeaders) => void): void; - /** - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond({ ':status': 200 }); - * stream.end('some data'); - * }); - * ``` - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * will be emitted immediately after queuing the last chunk of payload data to be - * sent. The `http2stream.sendTrailers()` method can then be used to sent trailing - * header fields to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code must call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond({ ':status': 200 }, { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ ABC: 'some value to send' }); - * }); - * stream.end('some data'); - * }); - * ``` - * @since v8.4.0 - */ - respond(headers?: OutgoingHttpHeaders, options?: ServerStreamResponseOptions): void; - /** - * Initiates a response whose data is read from the given file descriptor. No - * validation is performed on the given file descriptor. If an error occurs while - * attempting to read data using the file descriptor, the `Http2Stream` will be - * closed using an `RST_STREAM` frame using the standard `INTERNAL_ERROR` code. - * - * When used, the `Http2Stream` object's `Duplex` interface will be closed - * automatically. - * - * ```js - * const http2 = require('http2'); - * const fs = require('fs'); - * - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * const fd = fs.openSync('/some/file', 'r'); - * - * const stat = fs.fstatSync(fd); - * const headers = { - * 'content-length': stat.size, - * 'last-modified': stat.mtime.toUTCString(), - * 'content-type': 'text/plain; charset=utf-8' - * }; - * stream.respondWithFD(fd, headers); - * stream.on('close', () => fs.closeSync(fd)); - * }); - * ``` - * - * The optional `options.statCheck` function may be specified to give user code - * an opportunity to set additional content headers based on the `fs.Stat` details - * of the given fd. If the `statCheck` function is provided, the`http2stream.respondWithFD()` method will perform an `fs.fstat()` call to - * collect details on the provided file descriptor. - * - * The `offset` and `length` options may be used to limit the response to a - * specific range subset. This can be used, for instance, to support HTTP Range - * requests. - * - * The file descriptor or `FileHandle` is not closed when the stream is closed, - * so it will need to be closed manually once it is no longer needed. 
- * Using the same file descriptor concurrently for multiple streams - * is not supported and may result in data loss. Re-using a file descriptor - * after a stream has finished is supported. - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * will be emitted immediately after queuing the last chunk of payload data to be - * sent. The `http2stream.sendTrailers()` method can then be used to sent trailing - * header fields to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code _must_ call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * ```js - * const http2 = require('http2'); - * const fs = require('fs'); - * - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * const fd = fs.openSync('/some/file', 'r'); - * - * const stat = fs.fstatSync(fd); - * const headers = { - * 'content-length': stat.size, - * 'last-modified': stat.mtime.toUTCString(), - * 'content-type': 'text/plain; charset=utf-8' - * }; - * stream.respondWithFD(fd, headers, { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ ABC: 'some value to send' }); - * }); - * - * stream.on('close', () => fs.closeSync(fd)); - * }); - * ``` - * @since v8.4.0 - * @param fd A readable file descriptor. - */ - respondWithFD(fd: number | fs.promises.FileHandle, headers?: OutgoingHttpHeaders, options?: ServerStreamFileResponseOptions): void; - /** - * Sends a regular file as the response. The `path` must specify a regular file - * or an `'error'` event will be emitted on the `Http2Stream` object. - * - * When used, the `Http2Stream` object's `Duplex` interface will be closed - * automatically. - * - * The optional `options.statCheck` function may be specified to give user code - * an opportunity to set additional content headers based on the `fs.Stat` details - * of the given file: - * - * If an error occurs while attempting to read the file data, the `Http2Stream`will be closed using an `RST_STREAM` frame using the standard `INTERNAL_ERROR`code. If the `onError` callback is - * defined, then it will be called. Otherwise - * the stream will be destroyed. - * - * Example using a file path: - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * function statCheck(stat, headers) { - * headers['last-modified'] = stat.mtime.toUTCString(); - * } - * - * function onError(err) { - * // stream.respond() can throw if the stream has been destroyed by - * // the other side. - * try { - * if (err.code === 'ENOENT') { - * stream.respond({ ':status': 404 }); - * } else { - * stream.respond({ ':status': 500 }); - * } - * } catch (err) { - * // Perform actual error handling. - * console.log(err); - * } - * stream.end(); - * } - * - * stream.respondWithFile('/some/file', - * { 'content-type': 'text/plain; charset=utf-8' }, - * { statCheck, onError }); - * }); - * ``` - * - * The `options.statCheck` function may also be used to cancel the send operation - * by returning `false`. For instance, a conditional request may check the stat - * results to determine if the file has been modified to return an appropriate`304` response: - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * function statCheck(stat, headers) { - * // Check the stat here... 
- * stream.respond({ ':status': 304 }); - * return false; // Cancel the send operation - * } - * stream.respondWithFile('/some/file', - * { 'content-type': 'text/plain; charset=utf-8' }, - * { statCheck }); - * }); - * ``` - * - * The `content-length` header field will be automatically set. - * - * The `offset` and `length` options may be used to limit the response to a - * specific range subset. This can be used, for instance, to support HTTP Range - * requests. - * - * The `options.onError` function may also be used to handle all the errors - * that could happen before the delivery of the file is initiated. The - * default behavior is to destroy the stream. - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * will be emitted immediately after queuing the last chunk of payload data to be - * sent. The `http2stream.sendTrailers()` method can then be used to sent trailing - * header fields to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code must call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respondWithFile('/some/file', - * { 'content-type': 'text/plain; charset=utf-8' }, - * { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ ABC: 'some value to send' }); - * }); - * }); - * ``` - * @since v8.4.0 - */ - respondWithFile(path: string, headers?: OutgoingHttpHeaders, options?: ServerStreamFileResponseOptionsWithError): void; - } - // Http2Session - export interface Settings { - headerTableSize?: number | undefined; - enablePush?: boolean | undefined; - initialWindowSize?: number | undefined; - maxFrameSize?: number | undefined; - maxConcurrentStreams?: number | undefined; - maxHeaderListSize?: number | undefined; - enableConnectProtocol?: boolean | undefined; - } - export interface ClientSessionRequestOptions { - endStream?: boolean | undefined; - exclusive?: boolean | undefined; - parent?: number | undefined; - weight?: number | undefined; - waitForTrailers?: boolean | undefined; - signal?: AbortSignal | undefined; - } - export interface SessionState { - effectiveLocalWindowSize?: number | undefined; - effectiveRecvDataLength?: number | undefined; - nextStreamID?: number | undefined; - localWindowSize?: number | undefined; - lastProcStreamID?: number | undefined; - remoteWindowSize?: number | undefined; - outboundQueueSize?: number | undefined; - deflateDynamicTableSize?: number | undefined; - inflateDynamicTableSize?: number | undefined; - } - export interface Http2Session extends EventEmitter { - /** - * Value will be `undefined` if the `Http2Session` is not yet connected to a - * socket, `h2c` if the `Http2Session` is not connected to a `TLSSocket`, or - * will return the value of the connected `TLSSocket`'s own `alpnProtocol`property. - * @since v9.4.0 - */ - readonly alpnProtocol?: string | undefined; - /** - * Will be `true` if this `Http2Session` instance has been closed, otherwise`false`. - * @since v9.4.0 - */ - readonly closed: boolean; - /** - * Will be `true` if this `Http2Session` instance is still connecting, will be set - * to `false` before emitting `connect` event and/or calling the `http2.connect`callback. 
- * @since v10.0.0 - */ - readonly connecting: boolean; - /** - * Will be `true` if this `Http2Session` instance has been destroyed and must no - * longer be used, otherwise `false`. - * @since v8.4.0 - */ - readonly destroyed: boolean; - /** - * Value is `undefined` if the `Http2Session` session socket has not yet been - * connected, `true` if the `Http2Session` is connected with a `TLSSocket`, - * and `false` if the `Http2Session` is connected to any other kind of socket - * or stream. - * @since v9.4.0 - */ - readonly encrypted?: boolean | undefined; - /** - * A prototype-less object describing the current local settings of this`Http2Session`. The local settings are local to _this_`Http2Session` instance. - * @since v8.4.0 - */ - readonly localSettings: Settings; - /** - * If the `Http2Session` is connected to a `TLSSocket`, the `originSet` property - * will return an `Array` of origins for which the `Http2Session` may be - * considered authoritative. - * - * The `originSet` property is only available when using a secure TLS connection. - * @since v9.4.0 - */ - readonly originSet?: string[] | undefined; - /** - * Indicates whether the `Http2Session` is currently waiting for acknowledgment of - * a sent `SETTINGS` frame. Will be `true` after calling the`http2session.settings()` method. Will be `false` once all sent `SETTINGS`frames have been acknowledged. - * @since v8.4.0 - */ - readonly pendingSettingsAck: boolean; - /** - * A prototype-less object describing the current remote settings of this`Http2Session`. The remote settings are set by the _connected_ HTTP/2 peer. - * @since v8.4.0 - */ - readonly remoteSettings: Settings; - /** - * Returns a `Proxy` object that acts as a `net.Socket` (or `tls.TLSSocket`) but - * limits available methods to ones safe to use with HTTP/2. - * - * `destroy`, `emit`, `end`, `pause`, `read`, `resume`, and `write` will throw - * an error with code `ERR_HTTP2_NO_SOCKET_MANIPULATION`. See `Http2Session and Sockets` for more information. - * - * `setTimeout` method will be called on this `Http2Session`. - * - * All other interactions will be routed directly to the socket. - * @since v8.4.0 - */ - readonly socket: net.Socket | tls.TLSSocket; - /** - * Provides miscellaneous information about the current state of the`Http2Session`. - * - * An object describing the current status of this `Http2Session`. - * @since v8.4.0 - */ - readonly state: SessionState; - /** - * The `http2session.type` will be equal to`http2.constants.NGHTTP2_SESSION_SERVER` if this `Http2Session` instance is a - * server, and `http2.constants.NGHTTP2_SESSION_CLIENT` if the instance is a - * client. - * @since v8.4.0 - */ - readonly type: number; - /** - * Gracefully closes the `Http2Session`, allowing any existing streams to - * complete on their own and preventing new `Http2Stream` instances from being - * created. Once closed, `http2session.destroy()`_might_ be called if there - * are no open `Http2Stream` instances. - * - * If specified, the `callback` function is registered as a handler for the`'close'` event. - * @since v9.4.0 - */ - close(callback?: () => void): void; - /** - * Immediately terminates the `Http2Session` and the associated `net.Socket` or`tls.TLSSocket`. - * - * Once destroyed, the `Http2Session` will emit the `'close'` event. If `error`is not undefined, an `'error'` event will be emitted immediately before the`'close'` event. - * - * If there are any remaining open `Http2Streams` associated with the`Http2Session`, those will also be destroyed. 
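- *
- * A minimal usage sketch of the teardown described above (the `session` variable,
- * the error message, and the choice of `NGHTTP2_INTERNAL_ERROR` are illustrative
- * assumptions, not taken from the original documentation):
- *
- * ```js
- * session.on('error', (err) => console.error(err));
- * session.on('close', () => console.log('session closed'));
- * // Destroy the session, its socket, and any remaining open streams.
- * session.destroy(new Error('shutting down'), http2.constants.NGHTTP2_INTERNAL_ERROR);
- * ```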
- * @since v8.4.0 - * @param error An `Error` object if the `Http2Session` is being destroyed due to an error. - * @param code The HTTP/2 error code to send in the final `GOAWAY` frame. If unspecified, and `error` is not undefined, the default is `INTERNAL_ERROR`, otherwise defaults to `NO_ERROR`. - */ - destroy(error?: Error, code?: number): void; - /** - * Transmits a `GOAWAY` frame to the connected peer _without_ shutting down the`Http2Session`. - * @since v9.4.0 - * @param code An HTTP/2 error code - * @param lastStreamID The numeric ID of the last processed `Http2Stream` - * @param opaqueData A `TypedArray` or `DataView` instance containing additional data to be carried within the `GOAWAY` frame. - */ - goaway(code?: number, lastStreamID?: number, opaqueData?: NodeJS.ArrayBufferView): void; - /** - * Sends a `PING` frame to the connected HTTP/2 peer. A `callback` function must - * be provided. The method will return `true` if the `PING` was sent, `false`otherwise. - * - * The maximum number of outstanding (unacknowledged) pings is determined by the`maxOutstandingPings` configuration option. The default maximum is 10. - * - * If provided, the `payload` must be a `Buffer`, `TypedArray`, or `DataView`containing 8 bytes of data that will be transmitted with the `PING` and - * returned with the ping acknowledgment. - * - * The callback will be invoked with three arguments: an error argument that will - * be `null` if the `PING` was successfully acknowledged, a `duration` argument - * that reports the number of milliseconds elapsed since the ping was sent and the - * acknowledgment was received, and a `Buffer` containing the 8-byte `PING`payload. - * - * ```js - * session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { - * if (!err) { - * console.log(`Ping acknowledged in ${duration} milliseconds`); - * console.log(`With payload '${payload.toString()}'`); - * } - * }); - * ``` - * - * If the `payload` argument is not specified, the default payload will be the - * 64-bit timestamp (little endian) marking the start of the `PING` duration. - * @since v8.9.3 - * @param payload Optional ping payload. - */ - ping(callback: (err: Error | null, duration: number, payload: Buffer) => void): boolean; - ping(payload: NodeJS.ArrayBufferView, callback: (err: Error | null, duration: number, payload: Buffer) => void): boolean; - /** - * Calls `ref()` on this `Http2Session`instance's underlying `net.Socket`. - * @since v9.4.0 - */ - ref(): void; - /** - * Sets the local endpoint's window size. - * The `windowSize` is the total window size to set, not - * the delta. - * - * ```js - * const http2 = require('http2'); - * - * const server = http2.createServer(); - * const expectedWindowSize = 2 ** 20; - * server.on('connect', (session) => { - * - * // Set local window size to be 2 ** 20 - * session.setLocalWindowSize(expectedWindowSize); - * }); - * ``` - * @since v15.3.0, v14.18.0 - */ - setLocalWindowSize(windowSize: number): void; - /** - * Used to set a callback function that is called when there is no activity on - * the `Http2Session` after `msecs` milliseconds. The given `callback` is - * registered as a listener on the `'timeout'` event. - * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - /** - * Updates the current local settings for this `Http2Session` and sends a new`SETTINGS` frame to the connected HTTP/2 peer. 
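- *
- * A brief sketch of a call, assuming an already connected `Http2Session` named
- * `session` (the particular settings values are illustrative only):
- *
- * ```js
- * session.settings({ enablePush: false, maxConcurrentStreams: 100 }, (err, settings, duration) => {
- *   if (err) throw err;
- *   // The callback receives the updated settings object and a duration in milliseconds.
- *   console.log(settings, duration);
- * });
- * ```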
- * - * Once called, the `http2session.pendingSettingsAck` property will be `true`while the session is waiting for the remote peer to acknowledge the new - * settings. - * - * The new settings will not become effective until the `SETTINGS` acknowledgment - * is received and the `'localSettings'` event is emitted. It is possible to send - * multiple `SETTINGS` frames while acknowledgment is still pending. - * @since v8.4.0 - * @param callback Callback that is called once the session is connected or right away if the session is already connected. - */ - settings(settings: Settings, callback?: (err: Error | null, settings: Settings, duration: number) => void): void; - /** - * Calls `unref()` on this `Http2Session`instance's underlying `net.Socket`. - * @since v9.4.0 - */ - unref(): void; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - addListener(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - addListener(event: 'localSettings', listener: (settings: Settings) => void): this; - addListener(event: 'ping', listener: () => void): this; - addListener(event: 'remoteSettings', listener: (settings: Settings) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'close'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: 'frameError', frameType: number, errorCode: number, streamID: number): boolean; - emit(event: 'goaway', errorCode: number, lastStreamID: number, opaqueData: Buffer): boolean; - emit(event: 'localSettings', settings: Settings): boolean; - emit(event: 'ping'): boolean; - emit(event: 'remoteSettings', settings: Settings): boolean; - emit(event: 'timeout'): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'close', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - on(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - on(event: 'localSettings', listener: (settings: Settings) => void): this; - on(event: 'ping', listener: () => void): this; - on(event: 'remoteSettings', listener: (settings: Settings) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - once(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - once(event: 'localSettings', listener: (settings: Settings) => void): this; - once(event: 'ping', listener: () => void): this; - once(event: 'remoteSettings', listener: (settings: Settings) => void): this; - once(event: 'timeout', listener: () => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: 'frameError', 
listener: (frameType: number, errorCode: number, streamID: number) => void): this; - prependListener(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - prependListener(event: 'localSettings', listener: (settings: Settings) => void): this; - prependListener(event: 'ping', listener: () => void): this; - prependListener(event: 'remoteSettings', listener: (settings: Settings) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - prependOnceListener(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - prependOnceListener(event: 'localSettings', listener: (settings: Settings) => void): this; - prependOnceListener(event: 'ping', listener: () => void): this; - prependOnceListener(event: 'remoteSettings', listener: (settings: Settings) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface ClientHttp2Session extends Http2Session { - /** - * For HTTP/2 Client `Http2Session` instances only, the `http2session.request()`creates and returns an `Http2Stream` instance that can be used to send an - * HTTP/2 request to the connected server. - * - * When a `ClientHttp2Session` is first created, the socket may not yet be - * connected. if `clienthttp2session.request()` is called during this time, the - * actual request will be deferred until the socket is ready to go. - * If the `session` is closed before the actual request be executed, an`ERR_HTTP2_GOAWAY_SESSION` is thrown. - * - * This method is only available if `http2session.type` is equal to`http2.constants.NGHTTP2_SESSION_CLIENT`. - * - * ```js - * const http2 = require('http2'); - * const clientSession = http2.connect('https://localhost:1234'); - * const { - * HTTP2_HEADER_PATH, - * HTTP2_HEADER_STATUS - * } = http2.constants; - * - * const req = clientSession.request({ [HTTP2_HEADER_PATH]: '/' }); - * req.on('response', (headers) => { - * console.log(headers[HTTP2_HEADER_STATUS]); - * req.on('data', (chunk) => { // .. }); - * req.on('end', () => { // .. }); - * }); - * ``` - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * is emitted immediately after queuing the last chunk of payload data to be sent. - * The `http2stream.sendTrailers()` method can then be called to send trailing - * headers to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code must call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * When `options.signal` is set with an `AbortSignal` and then `abort` on the - * corresponding `AbortController` is called, the request will emit an `'error'`event with an `AbortError` error. 
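- *
- * A short sketch of that cancellation flow, reusing the `clientSession` from the
- * example above (the logging in the handler is an illustrative assumption):
- *
- * ```js
- * const controller = new AbortController();
- * const req = clientSession.request(
- *   { ':path': '/' },
- *   { signal: controller.signal },
- * );
- * req.on('error', (err) => console.error(err.name)); // 'AbortError'
- * controller.abort();
- * ```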
- * - * The `:method` and `:path` pseudo-headers are not specified within `headers`, - * they respectively default to: - * - * * `:method` \= `'GET'` - * * `:path` \= `/` - * @since v8.4.0 - */ - request(headers?: OutgoingHttpHeaders, options?: ClientSessionRequestOptions): ClientHttp2Stream; - addListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - addListener(event: 'origin', listener: (origins: string[]) => void): this; - addListener(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - addListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'altsvc', alt: string, origin: string, stream: number): boolean; - emit(event: 'origin', origins: ReadonlyArray<string>): boolean; - emit(event: 'connect', session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket): boolean; - emit(event: 'stream', stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - on(event: 'origin', listener: (origins: string[]) => void): this; - on(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - on(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - once(event: 'origin', listener: (origins: string[]) => void): this; - once(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - once(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - prependListener(event: 'origin', listener: (origins: string[]) => void): this; - prependListener(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - prependOnceListener(event: 'origin', listener: (origins: string[]) => void): this; - prependOnceListener(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface AlternativeServiceOptions { - origin: number | string | url.URL; - } - export interface ServerHttp2Session extends 
Http2Session { - readonly server: Http2Server | Http2SecureServer; - /** - * Submits an `ALTSVC` frame (as defined by [RFC 7838](https://tools.ietf.org/html/rfc7838)) to the connected client. - * - * ```js - * const http2 = require('http2'); - * - * const server = http2.createServer(); - * server.on('session', (session) => { - * // Set altsvc for origin https://example.org:80 - * session.altsvc('h2=":8000"', 'https://example.org:80'); - * }); - * - * server.on('stream', (stream) => { - * // Set altsvc for a specific stream - * stream.session.altsvc('h2=":8000"', stream.id); - * }); - * ``` - * - * Sending an `ALTSVC` frame with a specific stream ID indicates that the alternate - * service is associated with the origin of the given `Http2Stream`. - * - * The `alt` and origin string _must_ contain only ASCII bytes and are - * strictly interpreted as a sequence of ASCII bytes. The special value `'clear'`may be passed to clear any previously set alternative service for a given - * domain. - * - * When a string is passed for the `originOrStream` argument, it will be parsed as - * a URL and the origin will be derived. For instance, the origin for the - * HTTP URL `'https://example.org/foo/bar'` is the ASCII string`'https://example.org'`. An error will be thrown if either the given string - * cannot be parsed as a URL or if a valid origin cannot be derived. - * - * A `URL` object, or any object with an `origin` property, may be passed as`originOrStream`, in which case the value of the `origin` property will be - * used. The value of the `origin` property _must_ be a properly serialized - * ASCII origin. - * @since v9.4.0 - * @param alt A description of the alternative service configuration as defined by `RFC 7838`. - * @param originOrStream Either a URL string specifying the origin (or an `Object` with an `origin` property) or the numeric identifier of an active `Http2Stream` as given by the - * `http2stream.id` property. - */ - altsvc(alt: string, originOrStream: number | string | url.URL | AlternativeServiceOptions): void; - /** - * Submits an `ORIGIN` frame (as defined by [RFC 8336](https://tools.ietf.org/html/rfc8336)) to the connected client - * to advertise the set of origins for which the server is capable of providing - * authoritative responses. - * - * ```js - * const http2 = require('http2'); - * const options = getSecureOptionsSomehow(); - * const server = http2.createSecureServer(options); - * server.on('stream', (stream) => { - * stream.respond(); - * stream.end('ok'); - * }); - * server.on('session', (session) => { - * session.origin('https://example.com', 'https://example.org'); - * }); - * ``` - * - * When a string is passed as an `origin`, it will be parsed as a URL and the - * origin will be derived. For instance, the origin for the HTTP URL`'https://example.org/foo/bar'` is the ASCII string`'https://example.org'`. An error will be thrown if either the given - * string - * cannot be parsed as a URL or if a valid origin cannot be derived. - * - * A `URL` object, or any object with an `origin` property, may be passed as - * an `origin`, in which case the value of the `origin` property will be - * used. The value of the `origin` property _must_ be a properly serialized - * ASCII origin. 
- * - * Alternatively, the `origins` option may be used when creating a new HTTP/2 - * server using the `http2.createSecureServer()` method: - * - * ```js - * const http2 = require('http2'); - * const options = getSecureOptionsSomehow(); - * options.origins = ['https://example.com', 'https://example.org']; - * const server = http2.createSecureServer(options); - * server.on('stream', (stream) => { - * stream.respond(); - * stream.end('ok'); - * }); - * ``` - * @since v10.12.0 - * @param origins One or more URL Strings passed as separate arguments. - */ - origin( - ...origins: Array< - | string - | url.URL - | { - origin: string; - } - > - ): void; - addListener(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - addListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'connect', session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket): boolean; - emit(event: 'stream', stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - on(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - once(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - // Http2Server - export interface SessionOptions { - maxDeflateDynamicTableSize?: number | undefined; - maxSessionMemory?: number | undefined; - maxHeaderListPairs?: number | undefined; - maxOutstandingPings?: number | undefined; - maxSendHeaderBlockLength?: number | undefined; - paddingStrategy?: number | undefined; - peerMaxConcurrentStreams?: number | undefined; - settings?: Settings | undefined; - /** - * Specifies a timeout in milliseconds that - * a server should wait when an [`'unknownProtocol'`][] is emitted. If the - * socket has not been destroyed by that time the server will destroy it. 
- * @default 100000 - */ - unknownProtocolTimeout?: number | undefined; - selectPadding?(frameLen: number, maxFrameLen: number): number; - createConnection?(authority: url.URL, option: SessionOptions): stream.Duplex; - } - export interface ClientSessionOptions extends SessionOptions { - maxReservedRemoteStreams?: number | undefined; - createConnection?: ((authority: url.URL, option: SessionOptions) => stream.Duplex) | undefined; - protocol?: 'http:' | 'https:' | undefined; - } - export interface ServerSessionOptions extends SessionOptions { - Http1IncomingMessage?: typeof IncomingMessage | undefined; - Http1ServerResponse?: typeof ServerResponse | undefined; - Http2ServerRequest?: typeof Http2ServerRequest | undefined; - Http2ServerResponse?: typeof Http2ServerResponse | undefined; - } - export interface SecureClientSessionOptions extends ClientSessionOptions, tls.ConnectionOptions {} - export interface SecureServerSessionOptions extends ServerSessionOptions, tls.TlsOptions {} - export interface ServerOptions extends ServerSessionOptions {} - export interface SecureServerOptions extends SecureServerSessionOptions { - allowHTTP1?: boolean | undefined; - origins?: string[] | undefined; - } - interface HTTP2ServerCommon { - setTimeout(msec?: number, callback?: () => void): this; - /** - * Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values. - * Throws ERR_INVALID_ARG_TYPE for invalid settings argument. - */ - updateSettings(settings: Settings): void; - } - export interface Http2Server extends net.Server, HTTP2ServerCommon { - addListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - addListener(event: 'sessionError', listener: (err: Error) => void): this; - addListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'checkContinue', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'request', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'session', session: ServerHttp2Session): boolean; - emit(event: 'sessionError', err: Error): boolean; - emit(event: 'stream', stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'timeout'): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - on(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - on(event: 'session', listener: (session: ServerHttp2Session) => void): this; - on(event: 'sessionError', listener: (err: Error) => void): this; - on(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - once(event: 'request', listener: (request: Http2ServerRequest, 
response: Http2ServerResponse) => void): this; - once(event: 'session', listener: (session: ServerHttp2Session) => void): this; - once(event: 'sessionError', listener: (err: Error) => void): this; - once(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'timeout', listener: () => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependListener(event: 'sessionError', listener: (err: Error) => void): this; - prependListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependOnceListener(event: 'sessionError', listener: (err: Error) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface Http2SecureServer extends tls.Server, HTTP2ServerCommon { - addListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - addListener(event: 'sessionError', listener: (err: Error) => void): this; - addListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'checkContinue', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'request', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'session', session: ServerHttp2Session): boolean; - emit(event: 'sessionError', err: Error): boolean; - emit(event: 'stream', stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'timeout'): boolean; - emit(event: 'unknownProtocol', socket: tls.TLSSocket): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - on(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - 
on(event: 'session', listener: (session: ServerHttp2Session) => void): this; - on(event: 'sessionError', listener: (err: Error) => void): this; - on(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - once(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - once(event: 'session', listener: (session: ServerHttp2Session) => void): this; - once(event: 'sessionError', listener: (err: Error) => void): this; - once(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'timeout', listener: () => void): this; - once(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependListener(event: 'sessionError', listener: (err: Error) => void): this; - prependListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependOnceListener(event: 'sessionError', listener: (err: Error) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - /** - * A `Http2ServerRequest` object is created by {@link Server} or {@link SecureServer} and passed as the first argument to the `'request'` event. It may be used to access a request status, - * headers, and - * data. - * @since v8.4.0 - */ - export class Http2ServerRequest extends stream.Readable { - constructor(stream: ServerHttp2Stream, headers: IncomingHttpHeaders, options: stream.ReadableOptions, rawHeaders: ReadonlyArray); - /** - * The `request.aborted` property will be `true` if the request has - * been aborted. - * @since v10.1.0 - */ - readonly aborted: boolean; - /** - * The request authority pseudo header field. 
Because HTTP/2 allows requests - * to set either `:authority` or `host`, this value is derived from`req.headers[':authority']` if present. Otherwise, it is derived from`req.headers['host']`. - * @since v8.4.0 - */ - readonly authority: string; - /** - * See `request.socket`. - * @since v8.4.0 - * @deprecated Since v13.0.0 - Use `socket`. - */ - readonly connection: net.Socket | tls.TLSSocket; - /** - * The `request.complete` property will be `true` if the request has - * been completed, aborted, or destroyed. - * @since v12.10.0 - */ - readonly complete: boolean; - /** - * The request/response headers object. - * - * Key-value pairs of header names and values. Header names are lower-cased. - * - * ```js - * // Prints something like: - * // - * // { 'user-agent': 'curl/7.22.0', - * // host: '127.0.0.1:8000', - * // accept: '*' } - * console.log(request.headers); - * ``` - * - * See `HTTP/2 Headers Object`. - * - * In HTTP/2, the request path, host name, protocol, and method are represented as - * special headers prefixed with the `:` character (e.g. `':path'`). These special - * headers will be included in the `request.headers` object. Care must be taken not - * to inadvertently modify these special headers or errors may occur. For instance, - * removing all headers from the request will cause errors to occur: - * - * ```js - * removeAllHeaders(request.headers); - * assert(request.url); // Fails because the :path header has been removed - * ``` - * @since v8.4.0 - */ - readonly headers: IncomingHttpHeaders; - /** - * In case of server request, the HTTP version sent by the client. In the case of - * client response, the HTTP version of the connected-to server. Returns`'2.0'`. - * - * Also `message.httpVersionMajor` is the first integer and`message.httpVersionMinor` is the second. - * @since v8.4.0 - */ - readonly httpVersion: string; - readonly httpVersionMinor: number; - readonly httpVersionMajor: number; - /** - * The request method as a string. Read-only. Examples: `'GET'`, `'DELETE'`. - * @since v8.4.0 - */ - readonly method: string; - /** - * The raw request/response headers list exactly as they were received. - * - * The keys and values are in the same list. It is _not_ a - * list of tuples. So, the even-numbered offsets are key values, and the - * odd-numbered offsets are the associated values. - * - * Header names are not lowercased, and duplicates are not merged. - * - * ```js - * // Prints something like: - * // - * // [ 'user-agent', - * // 'this is invalid because there can be only one', - * // 'User-Agent', - * // 'curl/7.22.0', - * // 'Host', - * // '127.0.0.1:8000', - * // 'ACCEPT', - * // '*' ] - * console.log(request.rawHeaders); - * ``` - * @since v8.4.0 - */ - readonly rawHeaders: string[]; - /** - * The raw request/response trailer keys and values exactly as they were - * received. Only populated at the `'end'` event. - * @since v8.4.0 - */ - readonly rawTrailers: string[]; - /** - * The request scheme pseudo header field indicating the scheme - * portion of the target URL. - * @since v8.4.0 - */ - readonly scheme: string; - /** - * Returns a `Proxy` object that acts as a `net.Socket` (or `tls.TLSSocket`) but - * applies getters, setters, and methods based on HTTP/2 logic. - * - * `destroyed`, `readable`, and `writable` properties will be retrieved from and - * set on `request.stream`. - * - * `destroy`, `emit`, `end`, `on` and `once` methods will be called on`request.stream`. - * - * `setTimeout` method will be called on `request.stream.session`. 
- * - * `pause`, `read`, `resume`, and `write` will throw an error with code`ERR_HTTP2_NO_SOCKET_MANIPULATION`. See `Http2Session and Sockets` for - * more information. - * - * All other interactions will be routed directly to the socket. With TLS support, - * use `request.socket.getPeerCertificate()` to obtain the client's - * authentication details. - * @since v8.4.0 - */ - readonly socket: net.Socket | tls.TLSSocket; - /** - * The `Http2Stream` object backing the request. - * @since v8.4.0 - */ - readonly stream: ServerHttp2Stream; - /** - * The request/response trailers object. Only populated at the `'end'` event. - * @since v8.4.0 - */ - readonly trailers: IncomingHttpHeaders; - /** - * Request URL string. This contains only the URL that is present in the actual - * HTTP request. If the request is: - * - * ```http - * GET /status?name=ryan HTTP/1.1 - * Accept: text/plain - * ``` - * - * Then `request.url` will be: - * - * ```js - * '/status?name=ryan' - * ``` - * - * To parse the url into its parts, `new URL()` can be used: - * - * ```console - * $ node - * > new URL('/status?name=ryan', 'http://example.com') - * URL { - * href: 'http://example.com/status?name=ryan', - * origin: 'http://example.com', - * protocol: 'http:', - * username: '', - * password: '', - * host: 'example.com', - * hostname: 'example.com', - * port: '', - * pathname: '/status', - * search: '?name=ryan', - * searchParams: URLSearchParams { 'name' => 'ryan' }, - * hash: '' - * } - * ``` - * @since v8.4.0 - */ - url: string; - /** - * Sets the `Http2Stream`'s timeout value to `msecs`. If a callback is - * provided, then it is added as a listener on the `'timeout'` event on - * the response object. - * - * If no `'timeout'` listener is added to the request, the response, or - * the server, then `Http2Stream` s are destroyed when they time out. If a - * handler is assigned to the request, the response, or the server's `'timeout'`events, timed out sockets must be handled explicitly. 
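- *
- * A minimal sketch, not part of the upstream docs (the 5000 ms value and the
- * cancel code are illustrative), showing the explicit handling that becomes
- * necessary once a `'timeout'` listener is attached:
- *
- * ```js
- * const http2 = require('http2');
- *
- * request.setTimeout(5000, () => {
- *   // With a listener attached, the stream is no longer destroyed
- *   // automatically, so tear it down explicitly.
- *   request.stream.close(http2.constants.NGHTTP2_CANCEL);
- * });
- * ```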
- * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - read(size?: number): Buffer | string | null; - addListener(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - addListener(event: 'end', listener: () => void): this; - addListener(event: 'readable', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'aborted', hadError: boolean, code: number): boolean; - emit(event: 'close'): boolean; - emit(event: 'data', chunk: Buffer | string): boolean; - emit(event: 'end'): boolean; - emit(event: 'readable'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - on(event: 'close', listener: () => void): this; - on(event: 'data', listener: (chunk: Buffer | string) => void): this; - on(event: 'end', listener: () => void): this; - on(event: 'readable', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'data', listener: (chunk: Buffer | string) => void): this; - once(event: 'end', listener: () => void): this; - once(event: 'readable', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependListener(event: 'end', listener: () => void): this; - prependListener(event: 'readable', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependOnceListener(event: 'end', listener: () => void): this; - prependOnceListener(event: 'readable', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - /** - * This object is created internally by an HTTP server, not by the user. It is - * passed as the second parameter to the `'request'` event. - * @since v8.4.0 - */ - export class Http2ServerResponse extends stream.Writable { - constructor(stream: ServerHttp2Stream); - /** - * See `response.socket`. - * @since v8.4.0 - * @deprecated Since v13.0.0 - Use `socket`. - */ - readonly connection: net.Socket | tls.TLSSocket; - /** - * Boolean value that indicates whether the response has completed. Starts - * as `false`. After `response.end()` executes, the value will be `true`. 
- * @since v8.4.0 - * @deprecated Since v13.4.0,v12.16.0 - Use `writableEnded`. - */ - readonly finished: boolean; - /** - * True if headers were sent, false otherwise (read-only). - * @since v8.4.0 - */ - readonly headersSent: boolean; - /** - * A reference to the original HTTP2 request object. - * @since v15.7.0 - */ - readonly req: Http2ServerRequest; - /** - * Returns a `Proxy` object that acts as a `net.Socket` (or `tls.TLSSocket`) but - * applies getters, setters, and methods based on HTTP/2 logic. - * - * `destroyed`, `readable`, and `writable` properties will be retrieved from and - * set on `response.stream`. - * - * `destroy`, `emit`, `end`, `on` and `once` methods will be called on`response.stream`. - * - * `setTimeout` method will be called on `response.stream.session`. - * - * `pause`, `read`, `resume`, and `write` will throw an error with code`ERR_HTTP2_NO_SOCKET_MANIPULATION`. See `Http2Session and Sockets` for - * more information. - * - * All other interactions will be routed directly to the socket. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer((req, res) => { - * const ip = req.socket.remoteAddress; - * const port = req.socket.remotePort; - * res.end(`Your IP address is ${ip} and your source port is ${port}.`); - * }).listen(3000); - * ``` - * @since v8.4.0 - */ - readonly socket: net.Socket | tls.TLSSocket; - /** - * The `Http2Stream` object backing the response. - * @since v8.4.0 - */ - readonly stream: ServerHttp2Stream; - /** - * When true, the Date header will be automatically generated and sent in - * the response if it is not already present in the headers. Defaults to true. - * - * This should only be disabled for testing; HTTP requires the Date header - * in responses. - * @since v8.4.0 - */ - sendDate: boolean; - /** - * When using implicit headers (not calling `response.writeHead()` explicitly), - * this property controls the status code that will be sent to the client when - * the headers get flushed. - * - * ```js - * response.statusCode = 404; - * ``` - * - * After response header was sent to the client, this property indicates the - * status code which was sent out. - * @since v8.4.0 - */ - statusCode: number; - /** - * Status message is not supported by HTTP/2 (RFC 7540 8.1.2.4). It returns - * an empty string. - * @since v8.4.0 - */ - statusMessage: ''; - /** - * This method adds HTTP trailing headers (a header but at the end of the - * message) to the response. - * - * Attempting to set a header field name or value that contains invalid characters - * will result in a `TypeError` being thrown. - * @since v8.4.0 - */ - addTrailers(trailers: OutgoingHttpHeaders): void; - /** - * This method signals to the server that all of the response headers and body - * have been sent; that server should consider this message complete. - * The method, `response.end()`, MUST be called on each response. - * - * If `data` is specified, it is equivalent to calling `response.write(data, encoding)` followed by `response.end(callback)`. - * - * If `callback` is specified, it will be called when the response stream - * is finished. - * @since v8.4.0 - */ - end(callback?: () => void): this; - end(data: string | Uint8Array, callback?: () => void): this; - end(data: string | Uint8Array, encoding: BufferEncoding, callback?: () => void): this; - /** - * Reads out a header that has already been queued but not sent to the client. - * The name is case-insensitive. 
- * - * ```js - * const contentType = response.getHeader('content-type'); - * ``` - * @since v8.4.0 - */ - getHeader(name: string): string; - /** - * Returns an array containing the unique names of the current outgoing headers. - * All header names are lowercase. - * - * ```js - * response.setHeader('Foo', 'bar'); - * response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); - * - * const headerNames = response.getHeaderNames(); - * // headerNames === ['foo', 'set-cookie'] - * ``` - * @since v8.4.0 - */ - getHeaderNames(): string[]; - /** - * Returns a shallow copy of the current outgoing headers. Since a shallow copy - * is used, array values may be mutated without additional calls to various - * header-related http module methods. The keys of the returned object are the - * header names and the values are the respective header values. All header names - * are lowercase. - * - * The object returned by the `response.getHeaders()` method _does not_prototypically inherit from the JavaScript `Object`. This means that typical`Object` methods such as `obj.toString()`, - * `obj.hasOwnProperty()`, and others - * are not defined and _will not work_. - * - * ```js - * response.setHeader('Foo', 'bar'); - * response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); - * - * const headers = response.getHeaders(); - * // headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] } - * ``` - * @since v8.4.0 - */ - getHeaders(): OutgoingHttpHeaders; - /** - * Returns `true` if the header identified by `name` is currently set in the - * outgoing headers. The header name matching is case-insensitive. - * - * ```js - * const hasContentType = response.hasHeader('content-type'); - * ``` - * @since v8.4.0 - */ - hasHeader(name: string): boolean; - /** - * Removes a header that has been queued for implicit sending. - * - * ```js - * response.removeHeader('Content-Encoding'); - * ``` - * @since v8.4.0 - */ - removeHeader(name: string): void; - /** - * Sets a single header value for implicit headers. If this header already exists - * in the to-be-sent headers, its value will be replaced. Use an array of strings - * here to send multiple headers with the same name. - * - * ```js - * response.setHeader('Content-Type', 'text/html; charset=utf-8'); - * ``` - * - * or - * - * ```js - * response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']); - * ``` - * - * Attempting to set a header field name or value that contains invalid characters - * will result in a `TypeError` being thrown. - * - * When headers have been set with `response.setHeader()`, they will be merged - * with any headers passed to `response.writeHead()`, with the headers passed - * to `response.writeHead()` given precedence. - * - * ```js - * // Returns content-type = text/plain - * const server = http2.createServer((req, res) => { - * res.setHeader('Content-Type', 'text/html; charset=utf-8'); - * res.setHeader('X-Foo', 'bar'); - * res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); - * res.end('ok'); - * }); - * ``` - * @since v8.4.0 - */ - setHeader(name: string, value: number | string | ReadonlyArray): void; - /** - * Sets the `Http2Stream`'s timeout value to `msecs`. If a callback is - * provided, then it is added as a listener on the `'timeout'` event on - * the response object. - * - * If no `'timeout'` listener is added to the request, the response, or - * the server, then `Http2Stream` s are destroyed when they time out. 
If a - * handler is assigned to the request, the response, or the server's `'timeout'`events, timed out sockets must be handled explicitly. - * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - /** - * If this method is called and `response.writeHead()` has not been called, - * it will switch to implicit header mode and flush the implicit headers. - * - * This sends a chunk of the response body. This method may - * be called multiple times to provide successive parts of the body. - * - * In the `http` module, the response body is omitted when the - * request is a HEAD request. Similarly, the `204` and `304` responses _must not_ include a message body. - * - * `chunk` can be a string or a buffer. If `chunk` is a string, - * the second parameter specifies how to encode it into a byte stream. - * By default the `encoding` is `'utf8'`. `callback` will be called when this chunk - * of data is flushed. - * - * This is the raw HTTP body and has nothing to do with higher-level multi-part - * body encodings that may be used. - * - * The first time `response.write()` is called, it will send the buffered - * header information and the first chunk of the body to the client. The second - * time `response.write()` is called, Node.js assumes data will be streamed, - * and sends the new data separately. That is, the response is buffered up to the - * first chunk of the body. - * - * Returns `true` if the entire data was flushed successfully to the kernel - * buffer. Returns `false` if all or part of the data was queued in user memory.`'drain'` will be emitted when the buffer is free again. - * @since v8.4.0 - */ - write(chunk: string | Uint8Array, callback?: (err: Error) => void): boolean; - write(chunk: string | Uint8Array, encoding: BufferEncoding, callback?: (err: Error) => void): boolean; - /** - * Sends a status `100 Continue` to the client, indicating that the request body - * should be sent. See the `'checkContinue'` event on `Http2Server` and`Http2SecureServer`. - * @since v8.4.0 - */ - writeContinue(): void; - /** - * Sends a status `103 Early Hints` to the client with a Link header, - * indicating that the user agent can preload/preconnect the linked resources. - * The `hints` is an object containing the values of headers to be sent with - * early hints message. - * - * Example: - * - * ```js - * const earlyHintsLink = '; rel=preload; as=style'; - * response.writeEarlyHints({ - * 'link': earlyHintsLink, - * }); - * - * const earlyHintsLinks = [ - * '; rel=preload; as=style', - * '; rel=preload; as=script', - * ]; - * response.writeEarlyHints({ - * 'link': earlyHintsLinks, - * 'x-trace-id': 'id for diagnostics' - * }); - * ``` - * - * @since v18.11.0 - * @param hints An object containing the values of headers - */ - writeEarlyHints(hints: Record): void; - /** - * Sends a response header to the request. The status code is a 3-digit HTTP - * status code, like `404`. The last argument, `headers`, are the response headers. - * - * Returns a reference to the `Http2ServerResponse`, so that calls can be chained. - * - * For compatibility with `HTTP/1`, a human-readable `statusMessage` may be - * passed as the second argument. However, because the `statusMessage` has no - * meaning within HTTP/2, the argument will have no effect and a process warning - * will be emitted. 
- * - * ```js - * const body = 'hello world'; - * response.writeHead(200, { - * 'Content-Length': Buffer.byteLength(body), - * 'Content-Type': 'text/plain; charset=utf-8', - * }); - * ``` - * - * `Content-Length` is given in bytes not characters. The`Buffer.byteLength()` API may be used to determine the number of bytes in a - * given encoding. On outbound messages, Node.js does not check if Content-Length - * and the length of the body being transmitted are equal or not. However, when - * receiving messages, Node.js will automatically reject messages when the`Content-Length` does not match the actual payload size. - * - * This method may be called at most one time on a message before `response.end()` is called. - * - * If `response.write()` or `response.end()` are called before calling - * this, the implicit/mutable headers will be calculated and call this function. - * - * When headers have been set with `response.setHeader()`, they will be merged - * with any headers passed to `response.writeHead()`, with the headers passed - * to `response.writeHead()` given precedence. - * - * ```js - * // Returns content-type = text/plain - * const server = http2.createServer((req, res) => { - * res.setHeader('Content-Type', 'text/html; charset=utf-8'); - * res.setHeader('X-Foo', 'bar'); - * res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); - * res.end('ok'); - * }); - * ``` - * - * Attempting to set a header field name or value that contains invalid characters - * will result in a `TypeError` being thrown. - * @since v8.4.0 - */ - writeHead(statusCode: number, headers?: OutgoingHttpHeaders): this; - writeHead(statusCode: number, statusMessage: string, headers?: OutgoingHttpHeaders): this; - /** - * Call `http2stream.pushStream()` with the given headers, and wrap the - * given `Http2Stream` on a newly created `Http2ServerResponse` as the callback - * parameter if successful. When `Http2ServerRequest` is closed, the callback is - * called with an error `ERR_HTTP2_INVALID_STREAM`. 
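- *
- * A minimal usage sketch, not from the upstream docs (the '/style.css' path
- * and the CSS payload are purely illustrative):
- *
- * ```js
- * response.createPushResponse({ ':path': '/style.css' }, (err, pushResponse) => {
- *   if (err) {
- *     return;
- *   }
- *   pushResponse.writeHead(200, { 'content-type': 'text/css' });
- *   pushResponse.end('h1 { color: teal; }');
- * });
- * ```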
- * @since v8.4.0 - * @param headers An object describing the headers - * @param callback Called once `http2stream.pushStream()` is finished, or either when the attempt to create the pushed `Http2Stream` has failed or has been rejected, or the state of - * `Http2ServerRequest` is closed prior to calling the `http2stream.pushStream()` method - */ - createPushResponse(headers: OutgoingHttpHeaders, callback: (err: Error | null, res: Http2ServerResponse) => void): void; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'drain', listener: () => void): this; - addListener(event: 'error', listener: (error: Error) => void): this; - addListener(event: 'finish', listener: () => void): this; - addListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - addListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'close'): boolean; - emit(event: 'drain'): boolean; - emit(event: 'error', error: Error): boolean; - emit(event: 'finish'): boolean; - emit(event: 'pipe', src: stream.Readable): boolean; - emit(event: 'unpipe', src: stream.Readable): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'close', listener: () => void): this; - on(event: 'drain', listener: () => void): this; - on(event: 'error', listener: (error: Error) => void): this; - on(event: 'finish', listener: () => void): this; - on(event: 'pipe', listener: (src: stream.Readable) => void): this; - on(event: 'unpipe', listener: (src: stream.Readable) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'drain', listener: () => void): this; - once(event: 'error', listener: (error: Error) => void): this; - once(event: 'finish', listener: () => void): this; - once(event: 'pipe', listener: (src: stream.Readable) => void): this; - once(event: 'unpipe', listener: (src: stream.Readable) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'drain', listener: () => void): this; - prependListener(event: 'error', listener: (error: Error) => void): this; - prependListener(event: 'finish', listener: () => void): this; - prependListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'drain', listener: () => void): this; - prependOnceListener(event: 'error', listener: (error: Error) => void): this; - prependOnceListener(event: 'finish', listener: () => void): this; - prependOnceListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export namespace constants { - const NGHTTP2_SESSION_SERVER: number; - const NGHTTP2_SESSION_CLIENT: number; - const NGHTTP2_STREAM_STATE_IDLE: number; - const NGHTTP2_STREAM_STATE_OPEN: number; - const NGHTTP2_STREAM_STATE_RESERVED_LOCAL: number; - const NGHTTP2_STREAM_STATE_RESERVED_REMOTE: number; - const NGHTTP2_STREAM_STATE_HALF_CLOSED_LOCAL: 
number; - const NGHTTP2_STREAM_STATE_HALF_CLOSED_REMOTE: number; - const NGHTTP2_STREAM_STATE_CLOSED: number; - const NGHTTP2_NO_ERROR: number; - const NGHTTP2_PROTOCOL_ERROR: number; - const NGHTTP2_INTERNAL_ERROR: number; - const NGHTTP2_FLOW_CONTROL_ERROR: number; - const NGHTTP2_SETTINGS_TIMEOUT: number; - const NGHTTP2_STREAM_CLOSED: number; - const NGHTTP2_FRAME_SIZE_ERROR: number; - const NGHTTP2_REFUSED_STREAM: number; - const NGHTTP2_CANCEL: number; - const NGHTTP2_COMPRESSION_ERROR: number; - const NGHTTP2_CONNECT_ERROR: number; - const NGHTTP2_ENHANCE_YOUR_CALM: number; - const NGHTTP2_INADEQUATE_SECURITY: number; - const NGHTTP2_HTTP_1_1_REQUIRED: number; - const NGHTTP2_ERR_FRAME_SIZE_ERROR: number; - const NGHTTP2_FLAG_NONE: number; - const NGHTTP2_FLAG_END_STREAM: number; - const NGHTTP2_FLAG_END_HEADERS: number; - const NGHTTP2_FLAG_ACK: number; - const NGHTTP2_FLAG_PADDED: number; - const NGHTTP2_FLAG_PRIORITY: number; - const DEFAULT_SETTINGS_HEADER_TABLE_SIZE: number; - const DEFAULT_SETTINGS_ENABLE_PUSH: number; - const DEFAULT_SETTINGS_INITIAL_WINDOW_SIZE: number; - const DEFAULT_SETTINGS_MAX_FRAME_SIZE: number; - const MAX_MAX_FRAME_SIZE: number; - const MIN_MAX_FRAME_SIZE: number; - const MAX_INITIAL_WINDOW_SIZE: number; - const NGHTTP2_DEFAULT_WEIGHT: number; - const NGHTTP2_SETTINGS_HEADER_TABLE_SIZE: number; - const NGHTTP2_SETTINGS_ENABLE_PUSH: number; - const NGHTTP2_SETTINGS_MAX_CONCURRENT_STREAMS: number; - const NGHTTP2_SETTINGS_INITIAL_WINDOW_SIZE: number; - const NGHTTP2_SETTINGS_MAX_FRAME_SIZE: number; - const NGHTTP2_SETTINGS_MAX_HEADER_LIST_SIZE: number; - const PADDING_STRATEGY_NONE: number; - const PADDING_STRATEGY_MAX: number; - const PADDING_STRATEGY_CALLBACK: number; - const HTTP2_HEADER_STATUS: string; - const HTTP2_HEADER_METHOD: string; - const HTTP2_HEADER_AUTHORITY: string; - const HTTP2_HEADER_SCHEME: string; - const HTTP2_HEADER_PATH: string; - const HTTP2_HEADER_ACCEPT_CHARSET: string; - const HTTP2_HEADER_ACCEPT_ENCODING: string; - const HTTP2_HEADER_ACCEPT_LANGUAGE: string; - const HTTP2_HEADER_ACCEPT_RANGES: string; - const HTTP2_HEADER_ACCEPT: string; - const HTTP2_HEADER_ACCESS_CONTROL_ALLOW_ORIGIN: string; - const HTTP2_HEADER_AGE: string; - const HTTP2_HEADER_ALLOW: string; - const HTTP2_HEADER_AUTHORIZATION: string; - const HTTP2_HEADER_CACHE_CONTROL: string; - const HTTP2_HEADER_CONNECTION: string; - const HTTP2_HEADER_CONTENT_DISPOSITION: string; - const HTTP2_HEADER_CONTENT_ENCODING: string; - const HTTP2_HEADER_CONTENT_LANGUAGE: string; - const HTTP2_HEADER_CONTENT_LENGTH: string; - const HTTP2_HEADER_CONTENT_LOCATION: string; - const HTTP2_HEADER_CONTENT_MD5: string; - const HTTP2_HEADER_CONTENT_RANGE: string; - const HTTP2_HEADER_CONTENT_TYPE: string; - const HTTP2_HEADER_COOKIE: string; - const HTTP2_HEADER_DATE: string; - const HTTP2_HEADER_ETAG: string; - const HTTP2_HEADER_EXPECT: string; - const HTTP2_HEADER_EXPIRES: string; - const HTTP2_HEADER_FROM: string; - const HTTP2_HEADER_HOST: string; - const HTTP2_HEADER_IF_MATCH: string; - const HTTP2_HEADER_IF_MODIFIED_SINCE: string; - const HTTP2_HEADER_IF_NONE_MATCH: string; - const HTTP2_HEADER_IF_RANGE: string; - const HTTP2_HEADER_IF_UNMODIFIED_SINCE: string; - const HTTP2_HEADER_LAST_MODIFIED: string; - const HTTP2_HEADER_LINK: string; - const HTTP2_HEADER_LOCATION: string; - const HTTP2_HEADER_MAX_FORWARDS: string; - const HTTP2_HEADER_PREFER: string; - const HTTP2_HEADER_PROXY_AUTHENTICATE: string; - const HTTP2_HEADER_PROXY_AUTHORIZATION: string; - const 
HTTP2_HEADER_RANGE: string; - const HTTP2_HEADER_REFERER: string; - const HTTP2_HEADER_REFRESH: string; - const HTTP2_HEADER_RETRY_AFTER: string; - const HTTP2_HEADER_SERVER: string; - const HTTP2_HEADER_SET_COOKIE: string; - const HTTP2_HEADER_STRICT_TRANSPORT_SECURITY: string; - const HTTP2_HEADER_TRANSFER_ENCODING: string; - const HTTP2_HEADER_TE: string; - const HTTP2_HEADER_UPGRADE: string; - const HTTP2_HEADER_USER_AGENT: string; - const HTTP2_HEADER_VARY: string; - const HTTP2_HEADER_VIA: string; - const HTTP2_HEADER_WWW_AUTHENTICATE: string; - const HTTP2_HEADER_HTTP2_SETTINGS: string; - const HTTP2_HEADER_KEEP_ALIVE: string; - const HTTP2_HEADER_PROXY_CONNECTION: string; - const HTTP2_METHOD_ACL: string; - const HTTP2_METHOD_BASELINE_CONTROL: string; - const HTTP2_METHOD_BIND: string; - const HTTP2_METHOD_CHECKIN: string; - const HTTP2_METHOD_CHECKOUT: string; - const HTTP2_METHOD_CONNECT: string; - const HTTP2_METHOD_COPY: string; - const HTTP2_METHOD_DELETE: string; - const HTTP2_METHOD_GET: string; - const HTTP2_METHOD_HEAD: string; - const HTTP2_METHOD_LABEL: string; - const HTTP2_METHOD_LINK: string; - const HTTP2_METHOD_LOCK: string; - const HTTP2_METHOD_MERGE: string; - const HTTP2_METHOD_MKACTIVITY: string; - const HTTP2_METHOD_MKCALENDAR: string; - const HTTP2_METHOD_MKCOL: string; - const HTTP2_METHOD_MKREDIRECTREF: string; - const HTTP2_METHOD_MKWORKSPACE: string; - const HTTP2_METHOD_MOVE: string; - const HTTP2_METHOD_OPTIONS: string; - const HTTP2_METHOD_ORDERPATCH: string; - const HTTP2_METHOD_PATCH: string; - const HTTP2_METHOD_POST: string; - const HTTP2_METHOD_PRI: string; - const HTTP2_METHOD_PROPFIND: string; - const HTTP2_METHOD_PROPPATCH: string; - const HTTP2_METHOD_PUT: string; - const HTTP2_METHOD_REBIND: string; - const HTTP2_METHOD_REPORT: string; - const HTTP2_METHOD_SEARCH: string; - const HTTP2_METHOD_TRACE: string; - const HTTP2_METHOD_UNBIND: string; - const HTTP2_METHOD_UNCHECKOUT: string; - const HTTP2_METHOD_UNLINK: string; - const HTTP2_METHOD_UNLOCK: string; - const HTTP2_METHOD_UPDATE: string; - const HTTP2_METHOD_UPDATEREDIRECTREF: string; - const HTTP2_METHOD_VERSION_CONTROL: string; - const HTTP_STATUS_CONTINUE: number; - const HTTP_STATUS_SWITCHING_PROTOCOLS: number; - const HTTP_STATUS_PROCESSING: number; - const HTTP_STATUS_OK: number; - const HTTP_STATUS_CREATED: number; - const HTTP_STATUS_ACCEPTED: number; - const HTTP_STATUS_NON_AUTHORITATIVE_INFORMATION: number; - const HTTP_STATUS_NO_CONTENT: number; - const HTTP_STATUS_RESET_CONTENT: number; - const HTTP_STATUS_PARTIAL_CONTENT: number; - const HTTP_STATUS_MULTI_STATUS: number; - const HTTP_STATUS_ALREADY_REPORTED: number; - const HTTP_STATUS_IM_USED: number; - const HTTP_STATUS_MULTIPLE_CHOICES: number; - const HTTP_STATUS_MOVED_PERMANENTLY: number; - const HTTP_STATUS_FOUND: number; - const HTTP_STATUS_SEE_OTHER: number; - const HTTP_STATUS_NOT_MODIFIED: number; - const HTTP_STATUS_USE_PROXY: number; - const HTTP_STATUS_TEMPORARY_REDIRECT: number; - const HTTP_STATUS_PERMANENT_REDIRECT: number; - const HTTP_STATUS_BAD_REQUEST: number; - const HTTP_STATUS_UNAUTHORIZED: number; - const HTTP_STATUS_PAYMENT_REQUIRED: number; - const HTTP_STATUS_FORBIDDEN: number; - const HTTP_STATUS_NOT_FOUND: number; - const HTTP_STATUS_METHOD_NOT_ALLOWED: number; - const HTTP_STATUS_NOT_ACCEPTABLE: number; - const HTTP_STATUS_PROXY_AUTHENTICATION_REQUIRED: number; - const HTTP_STATUS_REQUEST_TIMEOUT: number; - const HTTP_STATUS_CONFLICT: number; - const HTTP_STATUS_GONE: number; - const 
HTTP_STATUS_LENGTH_REQUIRED: number; - const HTTP_STATUS_PRECONDITION_FAILED: number; - const HTTP_STATUS_PAYLOAD_TOO_LARGE: number; - const HTTP_STATUS_URI_TOO_LONG: number; - const HTTP_STATUS_UNSUPPORTED_MEDIA_TYPE: number; - const HTTP_STATUS_RANGE_NOT_SATISFIABLE: number; - const HTTP_STATUS_EXPECTATION_FAILED: number; - const HTTP_STATUS_TEAPOT: number; - const HTTP_STATUS_MISDIRECTED_REQUEST: number; - const HTTP_STATUS_UNPROCESSABLE_ENTITY: number; - const HTTP_STATUS_LOCKED: number; - const HTTP_STATUS_FAILED_DEPENDENCY: number; - const HTTP_STATUS_UNORDERED_COLLECTION: number; - const HTTP_STATUS_UPGRADE_REQUIRED: number; - const HTTP_STATUS_PRECONDITION_REQUIRED: number; - const HTTP_STATUS_TOO_MANY_REQUESTS: number; - const HTTP_STATUS_REQUEST_HEADER_FIELDS_TOO_LARGE: number; - const HTTP_STATUS_UNAVAILABLE_FOR_LEGAL_REASONS: number; - const HTTP_STATUS_INTERNAL_SERVER_ERROR: number; - const HTTP_STATUS_NOT_IMPLEMENTED: number; - const HTTP_STATUS_BAD_GATEWAY: number; - const HTTP_STATUS_SERVICE_UNAVAILABLE: number; - const HTTP_STATUS_GATEWAY_TIMEOUT: number; - const HTTP_STATUS_HTTP_VERSION_NOT_SUPPORTED: number; - const HTTP_STATUS_VARIANT_ALSO_NEGOTIATES: number; - const HTTP_STATUS_INSUFFICIENT_STORAGE: number; - const HTTP_STATUS_LOOP_DETECTED: number; - const HTTP_STATUS_BANDWIDTH_LIMIT_EXCEEDED: number; - const HTTP_STATUS_NOT_EXTENDED: number; - const HTTP_STATUS_NETWORK_AUTHENTICATION_REQUIRED: number; - } - /** - * This symbol can be set as a property on the HTTP/2 headers object with - * an array value in order to provide a list of headers considered sensitive. - */ - export const sensitiveHeaders: symbol; - /** - * Returns an object containing the default settings for an `Http2Session`instance. This method returns a new object instance every time it is called - * so instances returned may be safely modified for use. - * @since v8.4.0 - */ - export function getDefaultSettings(): Settings; - /** - * Returns a `Buffer` instance containing serialized representation of the given - * HTTP/2 settings as specified in the [HTTP/2](https://tools.ietf.org/html/rfc7540) specification. This is intended - * for use with the `HTTP2-Settings` header field. - * - * ```js - * const http2 = require('http2'); - * - * const packed = http2.getPackedSettings({ enablePush: false }); - * - * console.log(packed.toString('base64')); - * // Prints: AAIAAAAA - * ``` - * @since v8.4.0 - */ - export function getPackedSettings(settings: Settings): Buffer; - /** - * Returns a `HTTP/2 Settings Object` containing the deserialized settings from - * the given `Buffer` as generated by `http2.getPackedSettings()`. - * @since v8.4.0 - * @param buf The packed settings. - */ - export function getUnpackedSettings(buf: Uint8Array): Settings; - /** - * Returns a `net.Server` instance that creates and manages `Http2Session`instances. - * - * Since there are no browsers known that support [unencrypted HTTP/2](https://http2.github.io/faq/#does-http2-require-encryption), the use of {@link createSecureServer} is necessary when - * communicating - * with browser clients. - * - * ```js - * const http2 = require('http2'); - * - * // Create an unencrypted HTTP/2 server. - * // Since there are no browsers known that support - * // unencrypted HTTP/2, the use of `http2.createSecureServer()` - * // is necessary when communicating with browser clients. 
- * const server = http2.createServer(); - * - * server.on('stream', (stream, headers) => { - * stream.respond({ - * 'content-type': 'text/html; charset=utf-8', - * ':status': 200 - * }); - * stream.end('

<h1>Hello World</h1>

        '); - * }); - * - * server.listen(80); - * ``` - * @since v8.4.0 - * @param onRequestHandler See `Compatibility API` - */ - export function createServer(onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2Server; - export function createServer(options: ServerOptions, onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2Server; - /** - * Returns a `tls.Server` instance that creates and manages `Http2Session`instances. - * - * ```js - * const http2 = require('http2'); - * const fs = require('fs'); - * - * const options = { - * key: fs.readFileSync('server-key.pem'), - * cert: fs.readFileSync('server-cert.pem') - * }; - * - * // Create a secure HTTP/2 server - * const server = http2.createSecureServer(options); - * - * server.on('stream', (stream, headers) => { - * stream.respond({ - * 'content-type': 'text/html; charset=utf-8', - * ':status': 200 - * }); - * stream.end('

<h1>Hello World</h1>

        '); - * }); - * - * server.listen(80); - * ``` - * @since v8.4.0 - * @param onRequestHandler See `Compatibility API` - */ - export function createSecureServer(onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2SecureServer; - export function createSecureServer(options: SecureServerOptions, onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2SecureServer; - /** - * Returns a `ClientHttp2Session` instance. - * - * ```js - * const http2 = require('http2'); - * const client = http2.connect('https://localhost:1234'); - * - * // Use the client - * - * client.close(); - * ``` - * @since v8.4.0 - * @param authority The remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the `http://` or `https://` prefix, host name, and IP port (if a non-default port - * is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored. - * @param listener Will be registered as a one-time listener of the {@link 'connect'} event. - */ - export function connect(authority: string | url.URL, listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): ClientHttp2Session; - export function connect( - authority: string | url.URL, - options?: ClientSessionOptions | SecureClientSessionOptions, - listener?: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void - ): ClientHttp2Session; -} -declare module 'node:http2' { - export * from 'http2'; -} diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/util.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/util.d.ts deleted file mode 100644 index 6d350195b0cac64d298183b458cc65c9b4b6549f..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/util.d.ts +++ /dev/null @@ -1,1926 +0,0 @@ -/** - * The `util` module supports the needs of Node.js internal APIs. Many of the - * utilities are useful for application and module developers as well. To access - * it: - * - * ```js - * const util = require('util'); - * ``` - * @see [source](https://github.com/nodejs/node/blob/v18.x/lib/util.js) - */ -declare module 'util' { - import * as types from 'node:util/types'; - export interface InspectOptions { - /** - * If `true`, object's non-enumerable symbols and properties are included in the formatted result. - * `WeakMap` and `WeakSet` entries are also included as well as user defined prototype properties (excluding method properties). - * @default false - */ - showHidden?: boolean | undefined; - /** - * Specifies the number of times to recurse while formatting object. - * This is useful for inspecting large objects. - * To recurse up to the maximum call stack size pass `Infinity` or `null`. - * @default 2 - */ - depth?: number | null | undefined; - /** - * If `true`, the output is styled with ANSI color codes. Colors are customizable. - */ - colors?: boolean | undefined; - /** - * If `false`, `[util.inspect.custom](depth, opts, inspect)` functions are not invoked. - * @default true - */ - customInspect?: boolean | undefined; - /** - * If `true`, `Proxy` inspection includes the target and handler objects. - * @default false - */ - showProxy?: boolean | undefined; - /** - * Specifies the maximum number of `Array`, `TypedArray`, `WeakMap`, and `WeakSet` elements - * to include when formatting. Set to `null` or `Infinity` to show all elements. 
- * Set to `0` or negative to show no elements. - * @default 100 - */ - maxArrayLength?: number | null | undefined; - /** - * Specifies the maximum number of characters to - * include when formatting. Set to `null` or `Infinity` to show all elements. - * Set to `0` or negative to show no characters. - * @default 10000 - */ - maxStringLength?: number | null | undefined; - /** - * The length at which input values are split across multiple lines. - * Set to `Infinity` to format the input as a single line - * (in combination with `compact` set to `true` or any number >= `1`). - * @default 80 - */ - breakLength?: number | undefined; - /** - * Setting this to `false` causes each object key - * to be displayed on a new line. It will also add new lines to text that is - * longer than `breakLength`. If set to a number, the most `n` inner elements - * are united on a single line as long as all properties fit into - * `breakLength`. Short array elements are also grouped together. Note that no - * text will be reduced below 16 characters, no matter the `breakLength` size. - * For more information, see the example below. - * @default true - */ - compact?: boolean | number | undefined; - /** - * If set to `true` or a function, all properties of an object, and `Set` and `Map` - * entries are sorted in the resulting string. - * If set to `true` the default sort is used. - * If set to a function, it is used as a compare function. - */ - sorted?: boolean | ((a: string, b: string) => number) | undefined; - /** - * If set to `true`, getters are going to be - * inspected as well. If set to `'get'` only getters without setter are going - * to be inspected. If set to `'set'` only getters having a corresponding - * setter are going to be inspected. This might cause side effects depending on - * the getter function. - * @default false - */ - getters?: 'get' | 'set' | boolean | undefined; - /** - * If set to `true`, an underscore is used to separate every three digits in all bigints and numbers. - * @default false - */ - numericSeparator?: boolean | undefined; - } - export type Style = 'special' | 'number' | 'bigint' | 'boolean' | 'undefined' | 'null' | 'string' | 'symbol' | 'date' | 'regexp' | 'module'; - export type CustomInspectFunction = (depth: number, options: InspectOptionsStylized) => any; // TODO: , inspect: inspect - export interface InspectOptionsStylized extends InspectOptions { - stylize(text: string, styleType: Style): string; - } - /** - * The `util.format()` method returns a formatted string using the first argument - * as a `printf`\-like format string which can contain zero or more format - * specifiers. Each specifier is replaced with the converted value from the - * corresponding argument. Supported specifiers are: - * - * If a specifier does not have a corresponding argument, it is not replaced: - * - * ```js - * util.format('%s:%s', 'foo'); - * // Returns: 'foo:%s' - * ``` - * - * Values that are not part of the format string are formatted using`util.inspect()` if their type is not `string`. 
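- *
- * For instance (an added illustration; the object literal is arbitrary), a
- * non-string argument beyond the specifiers is rendered with `util.inspect()`:
- *
- * ```js
- * util.format('%s:%s', 'foo', 'bar', { baz: 42 });
- * // Returns: 'foo:bar { baz: 42 }'
- * ```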
- * - * If there are more arguments passed to the `util.format()` method than the - * number of specifiers, the extra arguments are concatenated to the returned - * string, separated by spaces: - * - * ```js - * util.format('%s:%s', 'foo', 'bar', 'baz'); - * // Returns: 'foo:bar baz' - * ``` - * - * If the first argument does not contain a valid format specifier, `util.format()`returns a string that is the concatenation of all arguments separated by spaces: - * - * ```js - * util.format(1, 2, 3); - * // Returns: '1 2 3' - * ``` - * - * If only one argument is passed to `util.format()`, it is returned as it is - * without any formatting: - * - * ```js - * util.format('%% %s'); - * // Returns: '%% %s' - * ``` - * - * `util.format()` is a synchronous method that is intended as a debugging tool. - * Some input values can have a significant performance overhead that can block the - * event loop. Use this function with care and never in a hot code path. - * @since v0.5.3 - * @param format A `printf`-like format string. - */ - export function format(format?: any, ...param: any[]): string; - /** - * This function is identical to {@link format}, except in that it takes - * an `inspectOptions` argument which specifies options that are passed along to {@link inspect}. - * - * ```js - * util.formatWithOptions({ colors: true }, 'See object %O', { foo: 42 }); - * // Returns 'See object { foo: 42 }', where `42` is colored as a number - * // when printed to a terminal. - * ``` - * @since v10.0.0 - */ - export function formatWithOptions(inspectOptions: InspectOptions, format?: any, ...param: any[]): string; - /** - * Returns the string name for a numeric error code that comes from a Node.js API. - * The mapping between error codes and error names is platform-dependent. - * See `Common System Errors` for the names of common errors. - * - * ```js - * fs.access('file/that/does/not/exist', (err) => { - * const name = util.getSystemErrorName(err.errno); - * console.error(name); // ENOENT - * }); - * ``` - * @since v9.7.0 - */ - export function getSystemErrorName(err: number): string; - /** - * Returns a Map of all system error codes available from the Node.js API. - * The mapping between error codes and error names is platform-dependent. - * See `Common System Errors` for the names of common errors. - * - * ```js - * fs.access('file/that/does/not/exist', (err) => { - * const errorMap = util.getSystemErrorMap(); - * const name = errorMap.get(err.errno); - * console.error(name); // ENOENT - * }); - * ``` - * @since v16.0.0, v14.17.0 - */ - export function getSystemErrorMap(): Map; - /** - * The `util.log()` method prints the given `string` to `stdout` with an included - * timestamp. - * - * ```js - * const util = require('util'); - * - * util.log('Timestamped message.'); - * ``` - * @since v0.3.0 - * @deprecated Since v6.0.0 - Use a third party module instead. - */ - export function log(string: string): void; - /** - * Returns the `string` after replacing any surrogate code points - * (or equivalently, any unpaired surrogate code units) with the - * Unicode "replacement character" U+FFFD. - * @since v16.8.0, v14.18.0 - */ - export function toUSVString(string: string): string; - /** - * Creates and returns an `AbortController` instance whose `AbortSignal` is marked - * as transferable and can be used with `structuredClone()` or `postMessage()`. 
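- *
- * A minimal sketch added for illustration (it mirrors the
- * `transferableAbortSignal()` example below; `MessageChannel` is the global
- * available in modern Node.js):
- *
- * ```js
- * const { transferableAbortController } = require('util');
- *
- * const controller = transferableAbortController();
- * const channel = new MessageChannel();
- * channel.port2.postMessage(controller.signal, [controller.signal]);
- * ```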
- * @since v18.11.0 - * @returns A transferable AbortController - */ - export function transferableAbortController(): AbortController; - /** - * Marks the given {AbortSignal} as transferable so that it can be used with - * `structuredClone()` and `postMessage()`. - * - * ```js - * const signal = transferableAbortSignal(AbortSignal.timeout(100)); - * const channel = new MessageChannel(); - * channel.port2.postMessage(signal, [signal]); - * ``` - * @since v18.11.0 - * @param signal The AbortSignal - * @returns The same AbortSignal - */ - export function transferableAbortSignal(signal: AbortSignal): AbortSignal; - /** - * The `util.inspect()` method returns a string representation of `object` that is - * intended for debugging. The output of `util.inspect` may change at any time - * and should not be depended upon programmatically. Additional `options` may be - * passed that alter the result.`util.inspect()` will use the constructor's name and/or `@@toStringTag` to make - * an identifiable tag for an inspected value. - * - * ```js - * class Foo { - * get [Symbol.toStringTag]() { - * return 'bar'; - * } - * } - * - * class Bar {} - * - * const baz = Object.create(null, { [Symbol.toStringTag]: { value: 'foo' } }); - * - * util.inspect(new Foo()); // 'Foo [bar] {}' - * util.inspect(new Bar()); // 'Bar {}' - * util.inspect(baz); // '[foo] {}' - * ``` - * - * Circular references point to their anchor by using a reference index: - * - * ```js - * const { inspect } = require('util'); - * - * const obj = {}; - * obj.a = [obj]; - * obj.b = {}; - * obj.b.inner = obj.b; - * obj.b.obj = obj; - * - * console.log(inspect(obj)); - * // { - * // a: [ [Circular *1] ], - * // b: { inner: [Circular *2], obj: [Circular *1] } - * // } - * ``` - * - * The following example inspects all properties of the `util` object: - * - * ```js - * const util = require('util'); - * - * console.log(util.inspect(util, { showHidden: true, depth: null })); - * ``` - * - * The following example highlights the effect of the `compact` option: - * - * ```js - * const util = require('util'); - * - * const o = { - * a: [1, 2, [[ - * 'Lorem ipsum dolor sit amet,\nconsectetur adipiscing elit, sed do ' + - * 'eiusmod \ntempor incididunt ut labore et dolore magna aliqua.', - * 'test', - * 'foo']], 4], - * b: new Map([['za', 1], ['zb', 'test']]) - * }; - * console.log(util.inspect(o, { compact: true, depth: 5, breakLength: 80 })); - * - * // { a: - * // [ 1, - * // 2, - * // [ [ 'Lorem ipsum dolor sit amet,\nconsectetur [...]', // A long line - * // 'test', - * // 'foo' ] ], - * // 4 ], - * // b: Map(2) { 'za' => 1, 'zb' => 'test' } } - * - * // Setting `compact` to false or an integer creates more reader friendly output. - * console.log(util.inspect(o, { compact: false, depth: 5, breakLength: 80 })); - * - * // { - * // a: [ - * // 1, - * // 2, - * // [ - * // [ - * // 'Lorem ipsum dolor sit amet,\n' + - * // 'consectetur adipiscing elit, sed do eiusmod \n' + - * // 'tempor incididunt ut labore et dolore magna aliqua.', - * // 'test', - * // 'foo' - * // ] - * // ], - * // 4 - * // ], - * // b: Map(2) { - * // 'za' => 1, - * // 'zb' => 'test' - * // } - * // } - * - * // Setting `breakLength` to e.g. 150 will print the "Lorem ipsum" text in a - * // single line. 
- * ``` - * - * The `showHidden` option allows [`WeakMap`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakMap) and - * [`WeakSet`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakSet) entries to be - * inspected. If there are more entries than `maxArrayLength`, there is no - * guarantee which entries are displayed. That means retrieving the same [`WeakSet`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakSet) entries twice may - * result in different output. Furthermore, entries - * with no remaining strong references may be garbage collected at any time. - * - * ```js - * const { inspect } = require('util'); - * - * const obj = { a: 1 }; - * const obj2 = { b: 2 }; - * const weakSet = new WeakSet([obj, obj2]); - * - * console.log(inspect(weakSet, { showHidden: true })); - * // WeakSet { { a: 1 }, { b: 2 } } - * ``` - * - * The `sorted` option ensures that an object's property insertion order does not - * impact the result of `util.inspect()`. - * - * ```js - * const { inspect } = require('util'); - * const assert = require('assert'); - * - * const o1 = { - * b: [2, 3, 1], - * a: '`a` comes before `b`', - * c: new Set([2, 3, 1]) - * }; - * console.log(inspect(o1, { sorted: true })); - * // { a: '`a` comes before `b`', b: [ 2, 3, 1 ], c: Set(3) { 1, 2, 3 } } - * console.log(inspect(o1, { sorted: (a, b) => b.localeCompare(a) })); - * // { c: Set(3) { 3, 2, 1 }, b: [ 2, 3, 1 ], a: '`a` comes before `b`' } - * - * const o2 = { - * c: new Set([2, 1, 3]), - * a: '`a` comes before `b`', - * b: [2, 3, 1] - * }; - * assert.strict.equal( - * inspect(o1, { sorted: true }), - * inspect(o2, { sorted: true }) - * ); - * ``` - * - * The `numericSeparator` option adds an underscore every three digits to all - * numbers. - * - * ```js - * const { inspect } = require('util'); - * - * const thousand = 1_000; - * const million = 1_000_000; - * const bigNumber = 123_456_789n; - * const bigDecimal = 1_234.123_45; - * - * console.log(thousand, million, bigNumber, bigDecimal); - * // 1_000 1_000_000 123_456_789n 1_234.123_45 - * ``` - * - * `util.inspect()` is a synchronous method intended for debugging. Its maximum - * output length is approximately 128 MB. Inputs that result in longer output will - * be truncated. - * @since v0.3.0 - * @param object Any JavaScript primitive or `Object`. - * @return The representation of `object`. - */ - export function inspect(object: any, showHidden?: boolean, depth?: number | null, color?: boolean): string; - export function inspect(object: any, options?: InspectOptions): string; - export namespace inspect { - let colors: NodeJS.Dict<[number, number]>; - let styles: { - [K in Style]: string; - }; - let defaultOptions: InspectOptions; - /** - * Allows changing inspect settings from the repl. - */ - let replDefaults: InspectOptions; - /** - * That can be used to declare custom inspect functions. - */ - const custom: unique symbol; - } - /** - * Alias for [`Array.isArray()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/isArray). - * - * Returns `true` if the given `object` is an `Array`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isArray([]); - * // Returns: true - * util.isArray(new Array()); - * // Returns: true - * util.isArray({}); - * // Returns: false - * ``` - * @since v0.6.0 - * @deprecated Since v4.0.0 - Use `isArray` instead. 
- */ - export function isArray(object: unknown): object is unknown[]; - /** - * Returns `true` if the given `object` is a `RegExp`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isRegExp(/some regexp/); - * // Returns: true - * util.isRegExp(new RegExp('another regexp')); - * // Returns: true - * util.isRegExp({}); - * // Returns: false - * ``` - * @since v0.6.0 - * @deprecated Since v4.0.0 - Deprecated - */ - export function isRegExp(object: unknown): object is RegExp; - /** - * Returns `true` if the given `object` is a `Date`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isDate(new Date()); - * // Returns: true - * util.isDate(Date()); - * // false (without 'new' returns a String) - * util.isDate({}); - * // Returns: false - * ``` - * @since v0.6.0 - * @deprecated Since v4.0.0 - Use {@link types.isDate} instead. - */ - export function isDate(object: unknown): object is Date; - /** - * Returns `true` if the given `object` is an `Error`. Otherwise, returns`false`. - * - * ```js - * const util = require('util'); - * - * util.isError(new Error()); - * // Returns: true - * util.isError(new TypeError()); - * // Returns: true - * util.isError({ name: 'Error', message: 'an error occurred' }); - * // Returns: false - * ``` - * - * This method relies on `Object.prototype.toString()` behavior. It is - * possible to obtain an incorrect result when the `object` argument manipulates`@@toStringTag`. - * - * ```js - * const util = require('util'); - * const obj = { name: 'Error', message: 'an error occurred' }; - * - * util.isError(obj); - * // Returns: false - * obj[Symbol.toStringTag] = 'Error'; - * util.isError(obj); - * // Returns: true - * ``` - * @since v0.6.0 - * @deprecated Since v4.0.0 - Use {@link types.isNativeError} instead. - */ - export function isError(object: unknown): object is Error; - /** - * Usage of `util.inherits()` is discouraged. Please use the ES6 `class` and`extends` keywords to get language level inheritance support. Also note - * that the two styles are [semantically incompatible](https://github.com/nodejs/node/issues/4179). - * - * Inherit the prototype methods from one [constructor](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/constructor) into another. The - * prototype of `constructor` will be set to a new object created from`superConstructor`. - * - * This mainly adds some input validation on top of`Object.setPrototypeOf(constructor.prototype, superConstructor.prototype)`. - * As an additional convenience, `superConstructor` will be accessible - * through the `constructor.super_` property. - * - * ```js - * const util = require('util'); - * const EventEmitter = require('events'); - * - * function MyStream() { - * EventEmitter.call(this); - * } - * - * util.inherits(MyStream, EventEmitter); - * - * MyStream.prototype.write = function(data) { - * this.emit('data', data); - * }; - * - * const stream = new MyStream(); - * - * console.log(stream instanceof EventEmitter); // true - * console.log(MyStream.super_ === EventEmitter); // true - * - * stream.on('data', (data) => { - * console.log(`Received data: "${data}"`); - * }); - * stream.write('It works!'); // Received data: "It works!" 
- * ``` - * - * ES6 example using `class` and `extends`: - * - * ```js - * const EventEmitter = require('events'); - * - * class MyStream extends EventEmitter { - * write(data) { - * this.emit('data', data); - * } - * } - * - * const stream = new MyStream(); - * - * stream.on('data', (data) => { - * console.log(`Received data: "${data}"`); - * }); - * stream.write('With ES6'); - * ``` - * @since v0.3.0 - * @deprecated Legacy: Use ES2015 class syntax and `extends` keyword instead. - */ - export function inherits(constructor: unknown, superConstructor: unknown): void; - export type DebugLoggerFunction = (msg: string, ...param: unknown[]) => void; - export interface DebugLogger extends DebugLoggerFunction { - enabled: boolean; - } - /** - * The `util.debuglog()` method is used to create a function that conditionally - * writes debug messages to `stderr` based on the existence of the `NODE_DEBUG`environment variable. If the `section` name appears within the value of that - * environment variable, then the returned function operates similar to `console.error()`. If not, then the returned function is a no-op. - * - * ```js - * const util = require('util'); - * const debuglog = util.debuglog('foo'); - * - * debuglog('hello from foo [%d]', 123); - * ``` - * - * If this program is run with `NODE_DEBUG=foo` in the environment, then - * it will output something like: - * - * ```console - * FOO 3245: hello from foo [123] - * ``` - * - * where `3245` is the process id. If it is not run with that - * environment variable set, then it will not print anything. - * - * The `section` supports wildcard also: - * - * ```js - * const util = require('util'); - * const debuglog = util.debuglog('foo-bar'); - * - * debuglog('hi there, it\'s foo-bar [%d]', 2333); - * ``` - * - * if it is run with `NODE_DEBUG=foo*` in the environment, then it will output - * something like: - * - * ```console - * FOO-BAR 3257: hi there, it's foo-bar [2333] - * ``` - * - * Multiple comma-separated `section` names may be specified in the `NODE_DEBUG`environment variable: `NODE_DEBUG=fs,net,tls`. - * - * The optional `callback` argument can be used to replace the logging function - * with a different function that doesn't have any initialization or - * unnecessary wrapping. - * - * ```js - * const util = require('util'); - * let debuglog = util.debuglog('internals', (debug) => { - * // Replace with a logging function that optimizes out - * // testing if the section is enabled - * debuglog = debug; - * }); - * ``` - * @since v0.11.3 - * @param section A string identifying the portion of the application for which the `debuglog` function is being created. - * @param callback A callback invoked the first time the logging function is called with a function argument that is a more optimized logging function. - * @return The logging function - */ - export function debuglog(section: string, callback?: (fn: DebugLoggerFunction) => void): DebugLogger; - export const debug: typeof debuglog; - /** - * Returns `true` if the given `object` is a `Boolean`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isBoolean(1); - * // Returns: false - * util.isBoolean(0); - * // Returns: false - * util.isBoolean(false); - * // Returns: true - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `typeof value === 'boolean'` instead. - */ - export function isBoolean(object: unknown): object is boolean; - /** - * Returns `true` if the given `object` is a `Buffer`. Otherwise, returns `false`. 
- * - * ```js - * const util = require('util'); - * - * util.isBuffer({ length: 0 }); - * // Returns: false - * util.isBuffer([]); - * // Returns: false - * util.isBuffer(Buffer.from('hello world')); - * // Returns: true - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `isBuffer` instead. - */ - export function isBuffer(object: unknown): object is Buffer; - /** - * Returns `true` if the given `object` is a `Function`. Otherwise, returns`false`. - * - * ```js - * const util = require('util'); - * - * function Foo() {} - * const Bar = () => {}; - * - * util.isFunction({}); - * // Returns: false - * util.isFunction(Foo); - * // Returns: true - * util.isFunction(Bar); - * // Returns: true - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `typeof value === 'function'` instead. - */ - export function isFunction(object: unknown): boolean; - /** - * Returns `true` if the given `object` is strictly `null`. Otherwise, returns`false`. - * - * ```js - * const util = require('util'); - * - * util.isNull(0); - * // Returns: false - * util.isNull(undefined); - * // Returns: false - * util.isNull(null); - * // Returns: true - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `value === null` instead. - */ - export function isNull(object: unknown): object is null; - /** - * Returns `true` if the given `object` is `null` or `undefined`. Otherwise, - * returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isNullOrUndefined(0); - * // Returns: false - * util.isNullOrUndefined(undefined); - * // Returns: true - * util.isNullOrUndefined(null); - * // Returns: true - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `value === undefined || value === null` instead. - */ - export function isNullOrUndefined(object: unknown): object is null | undefined; - /** - * Returns `true` if the given `object` is a `Number`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isNumber(false); - * // Returns: false - * util.isNumber(Infinity); - * // Returns: true - * util.isNumber(0); - * // Returns: true - * util.isNumber(NaN); - * // Returns: true - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `typeof value === 'number'` instead. - */ - export function isNumber(object: unknown): object is number; - /** - * Returns `true` if the given `object` is strictly an `Object`**and** not a`Function` (even though functions are objects in JavaScript). - * Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isObject(5); - * // Returns: false - * util.isObject(null); - * // Returns: false - * util.isObject({}); - * // Returns: true - * util.isObject(() => {}); - * // Returns: false - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Deprecated: Use `value !== null && typeof value === 'object'` instead. - */ - export function isObject(object: unknown): boolean; - /** - * Returns `true` if the given `object` is a primitive type. Otherwise, returns`false`. 
- * - * ```js - * const util = require('util'); - * - * util.isPrimitive(5); - * // Returns: true - * util.isPrimitive('foo'); - * // Returns: true - * util.isPrimitive(false); - * // Returns: true - * util.isPrimitive(null); - * // Returns: true - * util.isPrimitive(undefined); - * // Returns: true - * util.isPrimitive({}); - * // Returns: false - * util.isPrimitive(() => {}); - * // Returns: false - * util.isPrimitive(/^$/); - * // Returns: false - * util.isPrimitive(new Date()); - * // Returns: false - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `(typeof value !== 'object' && typeof value !== 'function') || value === null` instead. - */ - export function isPrimitive(object: unknown): boolean; - /** - * Returns `true` if the given `object` is a `string`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isString(''); - * // Returns: true - * util.isString('foo'); - * // Returns: true - * util.isString(String('foo')); - * // Returns: true - * util.isString(5); - * // Returns: false - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `typeof value === 'string'` instead. - */ - export function isString(object: unknown): object is string; - /** - * Returns `true` if the given `object` is a `Symbol`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * util.isSymbol(5); - * // Returns: false - * util.isSymbol('foo'); - * // Returns: false - * util.isSymbol(Symbol('foo')); - * // Returns: true - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `typeof value === 'symbol'` instead. - */ - export function isSymbol(object: unknown): object is symbol; - /** - * Returns `true` if the given `object` is `undefined`. Otherwise, returns `false`. - * - * ```js - * const util = require('util'); - * - * const foo = undefined; - * util.isUndefined(5); - * // Returns: false - * util.isUndefined(foo); - * // Returns: true - * util.isUndefined(null); - * // Returns: false - * ``` - * @since v0.11.5 - * @deprecated Since v4.0.0 - Use `value === undefined` instead. - */ - export function isUndefined(object: unknown): object is undefined; - /** - * The `util.deprecate()` method wraps `fn` (which may be a function or class) in - * such a way that it is marked as deprecated. - * - * ```js - * const util = require('util'); - * - * exports.obsoleteFunction = util.deprecate(() => { - * // Do something here. - * }, 'obsoleteFunction() is deprecated. Use newShinyFunction() instead.'); - * ``` - * - * When called, `util.deprecate()` will return a function that will emit a`DeprecationWarning` using the `'warning'` event. The warning will - * be emitted and printed to `stderr` the first time the returned function is - * called. After the warning is emitted, the wrapped function is called without - * emitting a warning. - * - * If the same optional `code` is supplied in multiple calls to `util.deprecate()`, - * the warning will be emitted only once for that `code`. 
- * - * ```js - * const util = require('util'); - * - * const fn1 = util.deprecate(someFunction, someMessage, 'DEP0001'); - * const fn2 = util.deprecate(someOtherFunction, someOtherMessage, 'DEP0001'); - * fn1(); // Emits a deprecation warning with code DEP0001 - * fn2(); // Does not emit a deprecation warning because it has the same code - * ``` - * - * If either the `--no-deprecation` or `--no-warnings` command-line flags are - * used, or if the `process.noDeprecation` property is set to `true`_prior_ to - * the first deprecation warning, the `util.deprecate()` method does nothing. - * - * If the `--trace-deprecation` or `--trace-warnings` command-line flags are set, - * or the `process.traceDeprecation` property is set to `true`, a warning and a - * stack trace are printed to `stderr` the first time the deprecated function is - * called. - * - * If the `--throw-deprecation` command-line flag is set, or the`process.throwDeprecation` property is set to `true`, then an exception will be - * thrown when the deprecated function is called. - * - * The `--throw-deprecation` command-line flag and `process.throwDeprecation`property take precedence over `--trace-deprecation` and`process.traceDeprecation`. - * @since v0.8.0 - * @param fn The function that is being deprecated. - * @param msg A warning message to display when the deprecated function is invoked. - * @param code A deprecation code. See the `list of deprecated APIs` for a list of codes. - * @return The deprecated function wrapped to emit a warning. - */ - export function deprecate(fn: T, msg: string, code?: string): T; - /** - * Returns `true` if there is deep strict equality between `val1` and `val2`. - * Otherwise, returns `false`. - * - * See `assert.deepStrictEqual()` for more information about deep strict - * equality. - * @since v9.0.0 - */ - export function isDeepStrictEqual(val1: unknown, val2: unknown): boolean; - /** - * Returns `str` with any ANSI escape codes removed. - * - * ```js - * console.log(util.stripVTControlCharacters('\u001B[4mvalue\u001B[0m')); - * // Prints "value" - * ``` - * @since v16.11.0 - */ - export function stripVTControlCharacters(str: string): string; - /** - * Takes an `async` function (or a function that returns a `Promise`) and returns a - * function following the error-first callback style, i.e. taking - * an `(err, value) => ...` callback as the last argument. In the callback, the - * first argument will be the rejection reason (or `null` if the `Promise`resolved), and the second argument will be the resolved value. - * - * ```js - * const util = require('util'); - * - * async function fn() { - * return 'hello world'; - * } - * const callbackFunction = util.callbackify(fn); - * - * callbackFunction((err, ret) => { - * if (err) throw err; - * console.log(ret); - * }); - * ``` - * - * Will print: - * - * ```text - * hello world - * ``` - * - * The callback is executed asynchronously, and will have a limited stack trace. - * If the callback throws, the process will emit an `'uncaughtException'` event, and if not handled will exit. - * - * Since `null` has a special meaning as the first argument to a callback, if a - * wrapped function rejects a `Promise` with a falsy value as a reason, the value - * is wrapped in an `Error` with the original value stored in a field named`reason`. 
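Stepping back to `util.isDeepStrictEqual()` documented earlier in this section, here is a minimal illustrative sketch (the values are arbitrary); the `callbackify` example resumes immediately below:

```js
const util = require('util');

// Structural comparison with strict (===-style) semantics for primitives.
util.isDeepStrictEqual({ a: [1, 2] }, { a: [1, 2] });
// Returns: true

util.isDeepStrictEqual({ a: 1 }, { a: '1' });
// Returns: false (a number and a string are never deeply strictly equal)
```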
- * - * ```js - * function fn() { - * return Promise.reject(null); - * } - * const callbackFunction = util.callbackify(fn); - * - * callbackFunction((err, ret) => { - * // When the Promise was rejected with `null` it is wrapped with an Error and - * // the original value is stored in `reason`. - * err && Object.hasOwn(err, 'reason') && err.reason === null; // true - * }); - * ``` - * @since v8.2.0 - * @param fn An `async` function - * @return a callback style function - */ - export function callbackify(fn: () => Promise): (callback: (err: NodeJS.ErrnoException) => void) => void; - export function callbackify(fn: () => Promise): (callback: (err: NodeJS.ErrnoException, result: TResult) => void) => void; - export function callbackify(fn: (arg1: T1) => Promise): (arg1: T1, callback: (err: NodeJS.ErrnoException) => void) => void; - export function callbackify(fn: (arg1: T1) => Promise): (arg1: T1, callback: (err: NodeJS.ErrnoException, result: TResult) => void) => void; - export function callbackify(fn: (arg1: T1, arg2: T2) => Promise): (arg1: T1, arg2: T2, callback: (err: NodeJS.ErrnoException) => void) => void; - export function callbackify(fn: (arg1: T1, arg2: T2) => Promise): (arg1: T1, arg2: T2, callback: (err: NodeJS.ErrnoException | null, result: TResult) => void) => void; - export function callbackify(fn: (arg1: T1, arg2: T2, arg3: T3) => Promise): (arg1: T1, arg2: T2, arg3: T3, callback: (err: NodeJS.ErrnoException) => void) => void; - export function callbackify( - fn: (arg1: T1, arg2: T2, arg3: T3) => Promise - ): (arg1: T1, arg2: T2, arg3: T3, callback: (err: NodeJS.ErrnoException | null, result: TResult) => void) => void; - export function callbackify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4) => Promise - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, callback: (err: NodeJS.ErrnoException) => void) => void; - export function callbackify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4) => Promise - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, callback: (err: NodeJS.ErrnoException | null, result: TResult) => void) => void; - export function callbackify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5) => Promise - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, callback: (err: NodeJS.ErrnoException) => void) => void; - export function callbackify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5) => Promise - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, callback: (err: NodeJS.ErrnoException | null, result: TResult) => void) => void; - export function callbackify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, arg6: T6) => Promise - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, arg6: T6, callback: (err: NodeJS.ErrnoException) => void) => void; - export function callbackify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, arg6: T6) => Promise - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, arg6: T6, callback: (err: NodeJS.ErrnoException | null, result: TResult) => void) => void; - export interface CustomPromisifyLegacy extends Function { - __promisify__: TCustom; - } - export interface CustomPromisifySymbol extends Function { - [promisify.custom]: TCustom; - } - export type CustomPromisify = CustomPromisifySymbol | CustomPromisifyLegacy; - /** - * Takes a function following the common error-first callback style, i.e. taking - * an `(err, value) => ...` callback as the last argument, and returns a version - * that returns promises. 
- * - * ```js - * const util = require('util'); - * const fs = require('fs'); - * - * const stat = util.promisify(fs.stat); - * stat('.').then((stats) => { - * // Do something with `stats` - * }).catch((error) => { - * // Handle the error. - * }); - * ``` - * - * Or, equivalently using `async function`s: - * - * ```js - * const util = require('util'); - * const fs = require('fs'); - * - * const stat = util.promisify(fs.stat); - * - * async function callStat() { - * const stats = await stat('.'); - * console.log(`This directory is owned by ${stats.uid}`); - * } - * ``` - * - * If there is an `original[util.promisify.custom]` property present, `promisify`will return its value, see `Custom promisified functions`. - * - * `promisify()` assumes that `original` is a function taking a callback as its - * final argument in all cases. If `original` is not a function, `promisify()`will throw an error. If `original` is a function but its last argument is not - * an error-first callback, it will still be passed an error-first - * callback as its last argument. - * - * Using `promisify()` on class methods or other methods that use `this` may not - * work as expected unless handled specially: - * - * ```js - * const util = require('util'); - * - * class Foo { - * constructor() { - * this.a = 42; - * } - * - * bar(callback) { - * callback(null, this.a); - * } - * } - * - * const foo = new Foo(); - * - * const naiveBar = util.promisify(foo.bar); - * // TypeError: Cannot read property 'a' of undefined - * // naiveBar().then(a => console.log(a)); - * - * naiveBar.call(foo).then((a) => console.log(a)); // '42' - * - * const bindBar = naiveBar.bind(foo); - * bindBar().then((a) => console.log(a)); // '42' - * ``` - * @since v8.0.0 - */ - export function promisify(fn: CustomPromisify): TCustom; - export function promisify(fn: (callback: (err: any, result: TResult) => void) => void): () => Promise; - export function promisify(fn: (callback: (err?: any) => void) => void): () => Promise; - export function promisify(fn: (arg1: T1, callback: (err: any, result: TResult) => void) => void): (arg1: T1) => Promise; - export function promisify(fn: (arg1: T1, callback: (err?: any) => void) => void): (arg1: T1) => Promise; - export function promisify(fn: (arg1: T1, arg2: T2, callback: (err: any, result: TResult) => void) => void): (arg1: T1, arg2: T2) => Promise; - export function promisify(fn: (arg1: T1, arg2: T2, callback: (err?: any) => void) => void): (arg1: T1, arg2: T2) => Promise; - export function promisify(fn: (arg1: T1, arg2: T2, arg3: T3, callback: (err: any, result: TResult) => void) => void): (arg1: T1, arg2: T2, arg3: T3) => Promise; - export function promisify(fn: (arg1: T1, arg2: T2, arg3: T3, callback: (err?: any) => void) => void): (arg1: T1, arg2: T2, arg3: T3) => Promise; - export function promisify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, callback: (err: any, result: TResult) => void) => void - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4) => Promise; - export function promisify(fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, callback: (err?: any) => void) => void): (arg1: T1, arg2: T2, arg3: T3, arg4: T4) => Promise; - export function promisify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, callback: (err: any, result: TResult) => void) => void - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5) => Promise; - export function promisify( - fn: (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5, callback: (err?: any) => void) => void - ): (arg1: T1, arg2: T2, arg3: T3, arg4: T4, arg5: T5) 
=> Promise; - export function promisify(fn: Function): Function; - export namespace promisify { - /** - * That can be used to declare custom promisified variants of functions. - */ - const custom: unique symbol; - } - /** - * An implementation of the [WHATWG Encoding Standard](https://encoding.spec.whatwg.org/) `TextDecoder` API. - * - * ```js - * const decoder = new TextDecoder(); - * const u8arr = new Uint8Array([72, 101, 108, 108, 111]); - * console.log(decoder.decode(u8arr)); // Hello - * ``` - * @since v8.3.0 - */ - export class TextDecoder { - /** - * The encoding supported by the `TextDecoder` instance. - */ - readonly encoding: string; - /** - * The value will be `true` if decoding errors result in a `TypeError` being - * thrown. - */ - readonly fatal: boolean; - /** - * The value will be `true` if the decoding result will include the byte order - * mark. - */ - readonly ignoreBOM: boolean; - constructor( - encoding?: string, - options?: { - fatal?: boolean | undefined; - ignoreBOM?: boolean | undefined; - } - ); - /** - * Decodes the `input` and returns a string. If `options.stream` is `true`, any - * incomplete byte sequences occurring at the end of the `input` are buffered - * internally and emitted after the next call to `textDecoder.decode()`. - * - * If `textDecoder.fatal` is `true`, decoding errors that occur will result in a`TypeError` being thrown. - * @param input An `ArrayBuffer`, `DataView` or `TypedArray` instance containing the encoded data. - */ - decode( - input?: NodeJS.ArrayBufferView | ArrayBuffer | null, - options?: { - stream?: boolean | undefined; - } - ): string; - } - export interface EncodeIntoResult { - /** - * The read Unicode code units of input. - */ - read: number; - /** - * The written UTF-8 bytes of output. - */ - written: number; - } - export { types }; - - //// TextEncoder/Decoder - /** - * An implementation of the [WHATWG Encoding Standard](https://encoding.spec.whatwg.org/) `TextEncoder` API. All - * instances of `TextEncoder` only support UTF-8 encoding. - * - * ```js - * const encoder = new TextEncoder(); - * const uint8array = encoder.encode('this is some data'); - * ``` - * - * The `TextEncoder` class is also available on the global object. - * @since v8.3.0 - */ - export class TextEncoder { - /** - * The encoding supported by the `TextEncoder` instance. Always set to `'utf-8'`. - */ - readonly encoding: string; - /** - * UTF-8 encodes the `input` string and returns a `Uint8Array` containing the - * encoded bytes. - * @param [input='an empty string'] The text to encode. - */ - encode(input?: string): Uint8Array; - /** - * UTF-8 encodes the `src` string to the `dest` Uint8Array and returns an object - * containing the read Unicode code units and written UTF-8 bytes. - * - * ```js - * const encoder = new TextEncoder(); - * const src = 'this is some data'; - * const dest = new Uint8Array(10); - * const { read, written } = encoder.encodeInto(src, dest); - * ``` - * @param src The text to encode. - * @param dest The array to hold the encode result. - */ - encodeInto(src: string, dest: Uint8Array): EncodeIntoResult; - } - - import { TextDecoder as _TextDecoder, TextEncoder as _TextEncoder } from 'util'; - global { - /** - * `TextDecoder` class is a global reference for `require('util').TextDecoder` - * https://nodejs.org/api/globals.html#textdecoder - * @since v11.0.0 - */ - var TextDecoder: typeof globalThis extends { - onmessage: any; - TextDecoder: infer TextDecoder; - } - ? 
TextDecoder - : typeof _TextDecoder; - - /** - * `TextEncoder` class is a global reference for `require('util').TextEncoder` - * https://nodejs.org/api/globals.html#textencoder - * @since v11.0.0 - */ - var TextEncoder: typeof globalThis extends { - onmessage: any; - TextEncoder: infer TextEncoder; - } - ? TextEncoder - : typeof _TextEncoder; - } - - //// parseArgs - /** - * Provides a high-level API for command-line argument parsing. Takes a - * specification for the expected arguments and returns a structured object - * with the parsed values and positionals. - * - * `config` provides arguments for parsing and configures the parser. It - * supports the following properties: - * - * - `args` The array of argument strings. **Default:** `process.argv` with - * `execPath` and `filename` removed. - * - `options` Arguments known to the parser. Keys of `options` are the long - * names of options and values are objects accepting the following properties: - * - * - `type` Type of argument, which must be either `boolean` (for options - * which do not take values) or `string` (for options which do). - * - `multiple` Whether this option can be provided multiple - * times. If `true`, all values will be collected in an array. If - * `false`, values for the option are last-wins. **Default:** `false`. - * - `short` A single character alias for the option. - * - `default` The default option value when it is not set by args. It - * must be of the same type as the `type` property. When `multiple` - * is `true`, it must be an array. - * - * - `strict`: Whether an error should be thrown when unknown arguments - * are encountered, or when arguments are passed that do not match the - * `type` configured in `options`. **Default:** `true`. - * - `allowPositionals`: Whether this command accepts positional arguments. - * **Default:** `false` if `strict` is `true`, otherwise `true`. - * - `tokens`: Whether tokens {boolean} Return the parsed tokens. This is useful - * for extending the built-in behavior, from adding additional checks through - * to reprocessing the tokens in different ways. - * **Default:** `false`. - * - * @returns The parsed command line arguments: - * - * - `values` A mapping of parsed option names with their string - * or boolean values. - * - `positionals` Positional arguments. - * - `tokens` Detailed parse information (only if `tokens` was specified). - * - */ - export function parseArgs(config?: T): ParsedResults; - - interface ParseArgsOptionConfig { - /** - * Type of argument. - */ - type: 'string' | 'boolean'; - /** - * Whether this option can be provided multiple times. - * If `true`, all values will be collected in an array. - * If `false`, values for the option are last-wins. - * @default false. - */ - multiple?: boolean | undefined; - /** - * A single character alias for the option. - */ - short?: string | undefined; - /** - * The default option value when it is not set by args. - * It must be of the same type as the the `type` property. - * When `multiple` is `true`, it must be an array. - * @since v18.11.0 - */ - default?: string | boolean | string[] | boolean[] | undefined; - } - - interface ParseArgsOptionsConfig { - [longOption: string]: ParseArgsOptionConfig; - } - - export interface ParseArgsConfig { - /** - * Array of argument strings. - */ - args?: string[] | undefined; - /** - * Used to describe arguments known to the parser. 
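A minimal usage sketch of `parseArgs` with the configuration fields described above. The option names and argument values are invented for illustration, and the snippet assumes a Node.js version in which `util.parseArgs` (including the `default` property) is available:

```js
const { parseArgs } = require('util');

// Hypothetical invocation: mytool --verbose --output out.txt extra1 extra2
const { values, positionals } = parseArgs({
  args: ['--verbose', '--output', 'out.txt', 'extra1', 'extra2'],
  options: {
    verbose: { type: 'boolean', short: 'v', default: false },
    output: { type: 'string' },
  },
  allowPositionals: true,
});

console.log(values);      // { verbose: true, output: 'out.txt' }
console.log(positionals); // [ 'extra1', 'extra2' ]
```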
- */ - options?: ParseArgsOptionsConfig | undefined; - /** - * Should an error be thrown when unknown arguments are encountered, - * or when arguments are passed that do not match the `type` configured in `options`. - * @default true - */ - strict?: boolean | undefined; - /** - * Whether this command accepts positional arguments. - */ - allowPositionals?: boolean | undefined; - /** - * Return the parsed tokens. This is useful for extending the built-in behavior, - * from adding additional checks through to reprocessing the tokens in different ways. - * @default false - */ - tokens?: boolean | undefined; - } - - /* - IfDefaultsTrue and IfDefaultsFalse are helpers to handle default values for missing boolean properties. - TypeScript does not have exact types for objects: https://github.com/microsoft/TypeScript/issues/12936 - This means it is impossible to distinguish between "field X is definitely not present" and "field X may or may not be present". - But we expect users to generally provide their config inline or `as const`, which means TS will always know whether a given field is present. - So this helper treats "not definitely present" (i.e., not `extends boolean`) as being "definitely not present", i.e. it should have its default value. - This is technically incorrect but is a much nicer UX for the common case. - The IfDefaultsTrue version is for things which default to true; the IfDefaultsFalse version is for things which default to false. - */ - type IfDefaultsTrue = T extends true - ? IfTrue - : T extends false - ? IfFalse - : IfTrue; - - // we put the `extends false` condition first here because `undefined` compares like `any` when `strictNullChecks: false` - type IfDefaultsFalse = T extends false - ? IfFalse - : T extends true - ? IfTrue - : IfFalse; - - type ExtractOptionValue = IfDefaultsTrue< - T['strict'], - O['type'] extends 'string' ? string : O['type'] extends 'boolean' ? boolean : string | boolean, - string | boolean - >; - - type ParsedValues = - & IfDefaultsTrue - & (T['options'] extends ParseArgsOptionsConfig - ? { - -readonly [LongOption in keyof T['options']]: IfDefaultsFalse< - T['options'][LongOption]['multiple'], - undefined | Array>, - undefined | ExtractOptionValue - >; - } - : {}); - - type ParsedPositionals = IfDefaultsTrue< - T['strict'], - IfDefaultsFalse, - IfDefaultsTrue - >; - - type PreciseTokenForOptions< - K extends string, - O extends ParseArgsOptionConfig, - > = O['type'] extends 'string' - ? { - kind: 'option'; - index: number; - name: K; - rawName: string; - value: string; - inlineValue: boolean; - } - : O['type'] extends 'boolean' - ? { - kind: 'option'; - index: number; - name: K; - rawName: string; - value: undefined; - inlineValue: undefined; - } - : OptionToken & { name: K }; - - type TokenForOptions< - T extends ParseArgsConfig, - K extends keyof T['options'] = keyof T['options'], - > = K extends unknown - ? T['options'] extends ParseArgsOptionsConfig - ? 
PreciseTokenForOptions - : OptionToken - : never; - - type ParsedOptionToken = IfDefaultsTrue, OptionToken>; - - type ParsedPositionalToken = IfDefaultsTrue< - T['strict'], - IfDefaultsFalse, - IfDefaultsTrue - >; - - type ParsedTokens = Array< - ParsedOptionToken | ParsedPositionalToken | { kind: 'option-terminator'; index: number } - >; - - type PreciseParsedResults = IfDefaultsFalse< - T['tokens'], - { - values: ParsedValues; - positionals: ParsedPositionals; - tokens: ParsedTokens; - }, - { - values: ParsedValues; - positionals: ParsedPositionals; - } - >; - - type OptionToken = - | { kind: 'option'; index: number; name: string; rawName: string; value: string; inlineValue: boolean } - | { - kind: 'option'; - index: number; - name: string; - rawName: string; - value: undefined; - inlineValue: undefined; - }; - - type Token = - | OptionToken - | { kind: 'positional'; index: number; value: string } - | { kind: 'option-terminator'; index: number }; - - // If ParseArgsConfig extends T, then the user passed config constructed elsewhere. - // So we can't rely on the `"not definitely present" implies "definitely not present"` assumption mentioned above. - type ParsedResults = ParseArgsConfig extends T - ? { - values: { [longOption: string]: undefined | string | boolean | Array }; - positionals: string[]; - tokens?: Token[]; - } - : PreciseParsedResults; -} -declare module 'util/types' { - export * from 'util/types'; -} -declare module 'util/types' { - import { KeyObject, webcrypto } from 'node:crypto'; - /** - * Returns `true` if the value is a built-in [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) or - * [`SharedArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer) instance. - * - * See also `util.types.isArrayBuffer()` and `util.types.isSharedArrayBuffer()`. - * - * ```js - * util.types.isAnyArrayBuffer(new ArrayBuffer()); // Returns true - * util.types.isAnyArrayBuffer(new SharedArrayBuffer()); // Returns true - * ``` - * @since v10.0.0 - */ - function isAnyArrayBuffer(object: unknown): object is ArrayBufferLike; - /** - * Returns `true` if the value is an `arguments` object. - * - * ```js - * function foo() { - * util.types.isArgumentsObject(arguments); // Returns true - * } - * ``` - * @since v10.0.0 - */ - function isArgumentsObject(object: unknown): object is IArguments; - /** - * Returns `true` if the value is a built-in [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) instance. - * This does _not_ include [`SharedArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer) instances. Usually, it is - * desirable to test for both; See `util.types.isAnyArrayBuffer()` for that. - * - * ```js - * util.types.isArrayBuffer(new ArrayBuffer()); // Returns true - * util.types.isArrayBuffer(new SharedArrayBuffer()); // Returns false - * ``` - * @since v10.0.0 - */ - function isArrayBuffer(object: unknown): object is ArrayBuffer; - /** - * Returns `true` if the value is an instance of one of the [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) views, such as typed - * array objects or [`DataView`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DataView). 
Equivalent to - * [`ArrayBuffer.isView()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer/isView). - * - * ```js - * util.types.isArrayBufferView(new Int8Array()); // true - * util.types.isArrayBufferView(Buffer.from('hello world')); // true - * util.types.isArrayBufferView(new DataView(new ArrayBuffer(16))); // true - * util.types.isArrayBufferView(new ArrayBuffer()); // false - * ``` - * @since v10.0.0 - */ - function isArrayBufferView(object: unknown): object is NodeJS.ArrayBufferView; - /** - * Returns `true` if the value is an [async function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function). - * This only reports back what the JavaScript engine is seeing; - * in particular, the return value may not match the original source code if - * a transpilation tool was used. - * - * ```js - * util.types.isAsyncFunction(function foo() {}); // Returns false - * util.types.isAsyncFunction(async function foo() {}); // Returns true - * ``` - * @since v10.0.0 - */ - function isAsyncFunction(object: unknown): boolean; - /** - * Returns `true` if the value is a `BigInt64Array` instance. - * - * ```js - * util.types.isBigInt64Array(new BigInt64Array()); // Returns true - * util.types.isBigInt64Array(new BigUint64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isBigInt64Array(value: unknown): value is BigInt64Array; - /** - * Returns `true` if the value is a `BigUint64Array` instance. - * - * ```js - * util.types.isBigUint64Array(new BigInt64Array()); // Returns false - * util.types.isBigUint64Array(new BigUint64Array()); // Returns true - * ``` - * @since v10.0.0 - */ - function isBigUint64Array(value: unknown): value is BigUint64Array; - /** - * Returns `true` if the value is a boolean object, e.g. created - * by `new Boolean()`. - * - * ```js - * util.types.isBooleanObject(false); // Returns false - * util.types.isBooleanObject(true); // Returns false - * util.types.isBooleanObject(new Boolean(false)); // Returns true - * util.types.isBooleanObject(new Boolean(true)); // Returns true - * util.types.isBooleanObject(Boolean(false)); // Returns false - * util.types.isBooleanObject(Boolean(true)); // Returns false - * ``` - * @since v10.0.0 - */ - function isBooleanObject(object: unknown): object is Boolean; - /** - * Returns `true` if the value is any boxed primitive object, e.g. created - * by `new Boolean()`, `new String()` or `Object(Symbol())`. - * - * For example: - * - * ```js - * util.types.isBoxedPrimitive(false); // Returns false - * util.types.isBoxedPrimitive(new Boolean(false)); // Returns true - * util.types.isBoxedPrimitive(Symbol('foo')); // Returns false - * util.types.isBoxedPrimitive(Object(Symbol('foo'))); // Returns true - * util.types.isBoxedPrimitive(Object(BigInt(5))); // Returns true - * ``` - * @since v10.11.0 - */ - function isBoxedPrimitive(object: unknown): object is String | Number | BigInt | Boolean | Symbol; - /** - * Returns `true` if the value is a built-in [`DataView`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DataView) instance. - * - * ```js - * const ab = new ArrayBuffer(20); - * util.types.isDataView(new DataView(ab)); // Returns true - * util.types.isDataView(new Float64Array()); // Returns false - * ``` - * - * See also [`ArrayBuffer.isView()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer/isView). 
- * @since v10.0.0 - */ - function isDataView(object: unknown): object is DataView; - /** - * Returns `true` if the value is a built-in [`Date`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date) instance. - * - * ```js - * util.types.isDate(new Date()); // Returns true - * ``` - * @since v10.0.0 - */ - function isDate(object: unknown): object is Date; - /** - * Returns `true` if the value is a native `External` value. - * - * A native `External` value is a special type of object that contains a - * raw C++ pointer (`void*`) for access from native code, and has no other - * properties. Such objects are created either by Node.js internals or native - * addons. In JavaScript, they are [frozen](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze) objects with a`null` prototype. - * - * ```c - * #include - * #include - * napi_value result; - * static napi_value MyNapi(napi_env env, napi_callback_info info) { - * int* raw = (int*) malloc(1024); - * napi_status status = napi_create_external(env, (void*) raw, NULL, NULL, &result); - * if (status != napi_ok) { - * napi_throw_error(env, NULL, "napi_create_external failed"); - * return NULL; - * } - * return result; - * } - * ... - * DECLARE_NAPI_PROPERTY("myNapi", MyNapi) - * ... - * ``` - * - * ```js - * const native = require('napi_addon.node'); - * const data = native.myNapi(); - * util.types.isExternal(data); // returns true - * util.types.isExternal(0); // returns false - * util.types.isExternal(new String('foo')); // returns false - * ``` - * - * For further information on `napi_create_external`, refer to `napi_create_external()`. - * @since v10.0.0 - */ - function isExternal(object: unknown): boolean; - /** - * Returns `true` if the value is a built-in [`Float32Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array) instance. - * - * ```js - * util.types.isFloat32Array(new ArrayBuffer()); // Returns false - * util.types.isFloat32Array(new Float32Array()); // Returns true - * util.types.isFloat32Array(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isFloat32Array(object: unknown): object is Float32Array; - /** - * Returns `true` if the value is a built-in [`Float64Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float64Array) instance. - * - * ```js - * util.types.isFloat64Array(new ArrayBuffer()); // Returns false - * util.types.isFloat64Array(new Uint8Array()); // Returns false - * util.types.isFloat64Array(new Float64Array()); // Returns true - * ``` - * @since v10.0.0 - */ - function isFloat64Array(object: unknown): object is Float64Array; - /** - * Returns `true` if the value is a generator function. - * This only reports back what the JavaScript engine is seeing; - * in particular, the return value may not match the original source code if - * a transpilation tool was used. - * - * ```js - * util.types.isGeneratorFunction(function foo() {}); // Returns false - * util.types.isGeneratorFunction(function* foo() {}); // Returns true - * ``` - * @since v10.0.0 - */ - function isGeneratorFunction(object: unknown): object is GeneratorFunction; - /** - * Returns `true` if the value is a generator object as returned from a - * built-in generator function. - * This only reports back what the JavaScript engine is seeing; - * in particular, the return value may not match the original source code if - * a transpilation tool was used. 
- * - * ```js - * function* foo() {} - * const generator = foo(); - * util.types.isGeneratorObject(generator); // Returns true - * ``` - * @since v10.0.0 - */ - function isGeneratorObject(object: unknown): object is Generator; - /** - * Returns `true` if the value is a built-in [`Int8Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Int8Array) instance. - * - * ```js - * util.types.isInt8Array(new ArrayBuffer()); // Returns false - * util.types.isInt8Array(new Int8Array()); // Returns true - * util.types.isInt8Array(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isInt8Array(object: unknown): object is Int8Array; - /** - * Returns `true` if the value is a built-in [`Int16Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Int16Array) instance. - * - * ```js - * util.types.isInt16Array(new ArrayBuffer()); // Returns false - * util.types.isInt16Array(new Int16Array()); // Returns true - * util.types.isInt16Array(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isInt16Array(object: unknown): object is Int16Array; - /** - * Returns `true` if the value is a built-in [`Int32Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Int32Array) instance. - * - * ```js - * util.types.isInt32Array(new ArrayBuffer()); // Returns false - * util.types.isInt32Array(new Int32Array()); // Returns true - * util.types.isInt32Array(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isInt32Array(object: unknown): object is Int32Array; - /** - * Returns `true` if the value is a built-in [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) instance. - * - * ```js - * util.types.isMap(new Map()); // Returns true - * ``` - * @since v10.0.0 - */ - function isMap(object: T | {}): object is T extends ReadonlyMap ? (unknown extends T ? never : ReadonlyMap) : Map; - /** - * Returns `true` if the value is an iterator returned for a built-in [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) instance. - * - * ```js - * const map = new Map(); - * util.types.isMapIterator(map.keys()); // Returns true - * util.types.isMapIterator(map.values()); // Returns true - * util.types.isMapIterator(map.entries()); // Returns true - * util.types.isMapIterator(map[Symbol.iterator]()); // Returns true - * ``` - * @since v10.0.0 - */ - function isMapIterator(object: unknown): boolean; - /** - * Returns `true` if the value is an instance of a [Module Namespace Object](https://tc39.github.io/ecma262/#sec-module-namespace-exotic-objects). - * - * ```js - * import * as ns from './a.js'; - * - * util.types.isModuleNamespaceObject(ns); // Returns true - * ``` - * @since v10.0.0 - */ - function isModuleNamespaceObject(value: unknown): boolean; - /** - * Returns `true` if the value is an instance of a built-in `Error` type. - * - * ```js - * util.types.isNativeError(new Error()); // Returns true - * util.types.isNativeError(new TypeError()); // Returns true - * util.types.isNativeError(new RangeError()); // Returns true - * ``` - * @since v10.0.0 - */ - function isNativeError(object: unknown): object is Error; - /** - * Returns `true` if the value is a number object, e.g. created - * by `new Number()`. 
- * - * ```js - * util.types.isNumberObject(0); // Returns false - * util.types.isNumberObject(new Number(0)); // Returns true - * ``` - * @since v10.0.0 - */ - function isNumberObject(object: unknown): object is Number; - /** - * Returns `true` if the value is a built-in [`Promise`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise). - * - * ```js - * util.types.isPromise(Promise.resolve(42)); // Returns true - * ``` - * @since v10.0.0 - */ - function isPromise(object: unknown): object is Promise; - /** - * Returns `true` if the value is a [`Proxy`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy) instance. - * - * ```js - * const target = {}; - * const proxy = new Proxy(target, {}); - * util.types.isProxy(target); // Returns false - * util.types.isProxy(proxy); // Returns true - * ``` - * @since v10.0.0 - */ - function isProxy(object: unknown): boolean; - /** - * Returns `true` if the value is a regular expression object. - * - * ```js - * util.types.isRegExp(/abc/); // Returns true - * util.types.isRegExp(new RegExp('abc')); // Returns true - * ``` - * @since v10.0.0 - */ - function isRegExp(object: unknown): object is RegExp; - /** - * Returns `true` if the value is a built-in [`Set`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) instance. - * - * ```js - * util.types.isSet(new Set()); // Returns true - * ``` - * @since v10.0.0 - */ - function isSet(object: T | {}): object is T extends ReadonlySet ? (unknown extends T ? never : ReadonlySet) : Set; - /** - * Returns `true` if the value is an iterator returned for a built-in [`Set`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) instance. - * - * ```js - * const set = new Set(); - * util.types.isSetIterator(set.keys()); // Returns true - * util.types.isSetIterator(set.values()); // Returns true - * util.types.isSetIterator(set.entries()); // Returns true - * util.types.isSetIterator(set[Symbol.iterator]()); // Returns true - * ``` - * @since v10.0.0 - */ - function isSetIterator(object: unknown): boolean; - /** - * Returns `true` if the value is a built-in [`SharedArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer) instance. - * This does _not_ include [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) instances. Usually, it is - * desirable to test for both; See `util.types.isAnyArrayBuffer()` for that. - * - * ```js - * util.types.isSharedArrayBuffer(new ArrayBuffer()); // Returns false - * util.types.isSharedArrayBuffer(new SharedArrayBuffer()); // Returns true - * ``` - * @since v10.0.0 - */ - function isSharedArrayBuffer(object: unknown): object is SharedArrayBuffer; - /** - * Returns `true` if the value is a string object, e.g. created - * by `new String()`. - * - * ```js - * util.types.isStringObject('foo'); // Returns false - * util.types.isStringObject(new String('foo')); // Returns true - * ``` - * @since v10.0.0 - */ - function isStringObject(object: unknown): object is String; - /** - * Returns `true` if the value is a symbol object, created - * by calling `Object()` on a `Symbol` primitive. 
- * - * ```js - * const symbol = Symbol('foo'); - * util.types.isSymbolObject(symbol); // Returns false - * util.types.isSymbolObject(Object(symbol)); // Returns true - * ``` - * @since v10.0.0 - */ - function isSymbolObject(object: unknown): object is Symbol; - /** - * Returns `true` if the value is a built-in [`TypedArray`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray) instance. - * - * ```js - * util.types.isTypedArray(new ArrayBuffer()); // Returns false - * util.types.isTypedArray(new Uint8Array()); // Returns true - * util.types.isTypedArray(new Float64Array()); // Returns true - * ``` - * - * See also [`ArrayBuffer.isView()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer/isView). - * @since v10.0.0 - */ - function isTypedArray(object: unknown): object is NodeJS.TypedArray; - /** - * Returns `true` if the value is a built-in [`Uint8Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array) instance. - * - * ```js - * util.types.isUint8Array(new ArrayBuffer()); // Returns false - * util.types.isUint8Array(new Uint8Array()); // Returns true - * util.types.isUint8Array(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isUint8Array(object: unknown): object is Uint8Array; - /** - * Returns `true` if the value is a built-in [`Uint8ClampedArray`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint8ClampedArray) instance. - * - * ```js - * util.types.isUint8ClampedArray(new ArrayBuffer()); // Returns false - * util.types.isUint8ClampedArray(new Uint8ClampedArray()); // Returns true - * util.types.isUint8ClampedArray(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isUint8ClampedArray(object: unknown): object is Uint8ClampedArray; - /** - * Returns `true` if the value is a built-in [`Uint16Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint16Array) instance. - * - * ```js - * util.types.isUint16Array(new ArrayBuffer()); // Returns false - * util.types.isUint16Array(new Uint16Array()); // Returns true - * util.types.isUint16Array(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isUint16Array(object: unknown): object is Uint16Array; - /** - * Returns `true` if the value is a built-in [`Uint32Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint32Array) instance. - * - * ```js - * util.types.isUint32Array(new ArrayBuffer()); // Returns false - * util.types.isUint32Array(new Uint32Array()); // Returns true - * util.types.isUint32Array(new Float64Array()); // Returns false - * ``` - * @since v10.0.0 - */ - function isUint32Array(object: unknown): object is Uint32Array; - /** - * Returns `true` if the value is a built-in [`WeakMap`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakMap) instance. - * - * ```js - * util.types.isWeakMap(new WeakMap()); // Returns true - * ``` - * @since v10.0.0 - */ - function isWeakMap(object: unknown): object is WeakMap; - /** - * Returns `true` if the value is a built-in [`WeakSet`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakSet) instance. 
- * - * ```js - * util.types.isWeakSet(new WeakSet()); // Returns true - * ``` - * @since v10.0.0 - */ - function isWeakSet(object: unknown): object is WeakSet; - /** - * Returns `true` if `value` is a `KeyObject`, `false` otherwise. - * @since v16.2.0 - */ - function isKeyObject(object: unknown): object is KeyObject; - /** - * Returns `true` if `value` is a `CryptoKey`, `false` otherwise. - * @since v16.2.0 - */ - function isCryptoKey(object: unknown): object is webcrypto.CryptoKey; -} -declare module 'node:util' { - export * from 'util'; -} -declare module 'node:util/types' { - export * from 'util/types'; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG PATCHED Crack.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG PATCHED Crack.md deleted file mode 100644 index e5c72f1ef71810c68db43b61e2c5654a42a487ad..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG PATCHED Crack.md +++ /dev/null @@ -1,65 +0,0 @@ -## Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG Crack - - - -**LINK → [https://www.google.com/url?q=https%3A%2F%2Furllie.com%2F2twErf&sa=D&sntz=1&usg=AOvVaw1em4i0JLYuV6otu-LQpWIj](https://www.google.com/url?q=https%3A%2F%2Furllie.com%2F2twErf&sa=D&sntz=1&usg=AOvVaw1em4i0JLYuV6otu-LQpWIj)** - - - -# How to Install Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG Crack - - - -Cakewalk Sonar 8 is a powerful digital audio workstation that offers a comprehensive set of features for music production, mixing, mastering, and more. The Producer Edition includes an amazing collection of virtual instruments, effects, and MIDI processors that add value and creativity to your projects. However, if you want to use the latest version of Sonar 8, which is Update 85, you will need to download and install a crack file that bypasses the software's copy protection. - - - -In this article, we will show you how to install Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG Crack on your Windows PC. This crack file works for both 32-bit and 64-bit systems, and supports multiple languages. Follow these steps carefully to avoid any errors or problems. - - - -1. Download Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG Crack from one of these links: [^2^] [^3^]. Make sure you have enough disk space and a reliable internet connection. - -2. Extract the downloaded file using a program like WinRAR or 7-Zip. You will get a folder named "Cakewalk Sonar 8 (Update 8.5) Producer Edition X86 X64 2011 MULTILANG Crack". - -3. Open the folder and run the file named "setup.exe". This will launch the installation wizard for Sonar 8 Update 85. Follow the on-screen instructions and accept the license agreement. Choose the destination folder where you want to install Sonar 8. You can also customize the components you want to install, such as plug-ins, loops, samples, and documentation. - -4. When the installation is complete, do not run Sonar 8 yet. You need to apply the crack file first. Go back to the folder where you extracted the downloaded file and open the subfolder named "Crack". - -5. Copy the file named "SONAR.exe" and paste it into the folder where you installed Sonar 8. This will replace the original executable file with the cracked one. 
You may need to confirm the replacement or provide administrator permission. - -6. Now you can run Sonar 8 from your desktop shortcut or start menu. You will not be asked for any serial number or activation code. You can enjoy all the features and functions of Sonar 8 Update 85 Producer Edition without any limitations. - - - -Congratulations! You have successfully installed Cakewalk Sonar 8 Update 85 Producer Edition X86 X64 2011 MULTILANG Crack on your Windows PC. You can now create, record, edit, mix, and master your music projects with this powerful software. However, please note that this crack file is for educational purposes only and we do not condone piracy or illegal use of software. If you like Sonar 8 and want to support its developers, please buy a legitimate copy from their official website[^4^]. - - - -In this section, we will give you a brief overview of some of the new features and improvements that Sonar 8 Update 85 Producer Edition offers. These include: - - - -- A new Loop Explorer 2.0 that lets you preview and drag-and-drop audio and MIDI loops into your project. You can also access thousands of royalty-free loops from Cakewalk's online library. - -- A new Beatscape instrument that lets you create and manipulate beats, loops, and grooves. You can use the built-in step sequencer, slice editor, effects, and mixer to create your own rhythms or use the included 4 GB of content. - -- A new Matrix View that lets you trigger and remix audio and MIDI clips in real time. You can use the Matrix View to experiment with different arrangements, perform live, or create mash-ups and remixes. - -- A new VocalSync feature that lets you align the timing and pitch of vocal tracks with a guide track. You can use VocalSync to create tight vocal harmonies, dubbing, or rap performances. - -- A new Dimension Pro synthesizer that offers over 1,500 sounds ranging from acoustic instruments to electronic sounds. You can also import your own REX, SFZ, or WAV files and use the powerful modulation matrix and effects to shape your sounds. - -- A new TL-64 Tube Leveler that simulates the warm sound of tube saturation. You can use the TL-64 Tube Leveler to add character and warmth to your tracks or to create distortion and overdrive effects. - -- A new TS-64 Transient Shaper that lets you control the attack and sustain of your audio signals. You can use the TS-64 Transient Shaper to enhance the punch and clarity of drums, guitars, vocals, and more. - -- A new N.I. Guitar Rig 3 LE that lets you emulate classic guitar amps, cabinets, effects, and mics. You can use Guitar Rig 3 LE to create realistic guitar tones or to process any audio signal with creative effects. - -- Many other enhancements and bug fixes that improve the performance, stability, and usability of Sonar 8. - - - -As you can see, Sonar 8 Update 85 Producer Edition is a comprehensive and versatile software that can handle any music production task. Whether you are a beginner or a professional, you will find Sonar 8 easy to use and powerful enough to meet your needs. 
- - 1b8d091108 \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Brandy Talore (Lemons And Big Melons).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Brandy Talore (Lemons And Big Melons).md deleted file mode 100644 index bc362621a3fa7565dc284ace0766d52fc58f5754..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Brandy Talore (Lemons And Big Melons).md +++ /dev/null @@ -1,16 +0,0 @@ -

        Brandy Talore (Lemons and Big Melons)


        Download Filehttps://urlgoal.com/2uCKQG



        - -In the past, I have written about her on two separate occasions. Her 2 videos and 1 live show on the site. I like her so much that I actually watched her new video, 2 new videos, 3 stills, and her live show before I even uploaded this episode. I was that thrilled with her. Check out this girl. I can't wait for more. I'm waiting for more, and I'm betting, you can't either! - -I remember as a kid how hot I thought Brandy Talore was. Her live show was my first introduction to her. When I discovered that she was doing this site, I was as excited as I was about discovering her in the first place. So now it's kind of like a dream come true for me to say that I got to watch her live show right before I uploaded this episode of The GLBT Review LIVE! Check it out. There is another full live show from her coming this week. - -That's all you need to know about Brandy Talore. So go check her out and maybe you'll want to bookmark her page and check back frequently for more. - -We've got some nice news for you fans of the gay big dick world. Today we will bring you a nice dose of hot cock in one single episode of The GLBT Review LIVE. This time we'll be featuring a young cute, handsome, and muscular twink, that's fucking his first cock ever and doing a great job. Here he is - Danny Hutton! - -When you see Danny he is a great looking guy. He's a clean-cut, all-American boy. There is something almost too good to be true about this young guy. He's so clean and cute that you can't help but fall for him. His eyes just seem to be smiling all the time. He's the perfect combination of young and handsome. - -We met Danny at one of the gay sex parties that we attended at iLoveToSuck. He was very shy at first, but once he got past that, he opened up and let the good times roll. Danny told us that he has only been sucking cock in the past. This was his first time, and he was very good at it. He told us that he had some of the best blow jobs he ever had. Danny is so cute, it was almost too easy to convince him to strip down and take some pictures of himself naked for us. Danny seems like a very sweet and sincere guy. 4fefd39f24
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Clue.v6.1.Incl.Keygen-ORiON Serial Key.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Clue.v6.1.Incl.Keygen-ORiON Serial Key.md deleted file mode 100644 index bacd533a4b996db0fdccd3a1918fb2175aee9f71..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Clue.v6.1.Incl.Keygen-ORiON Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Clue.v6.1.Incl.Keygen-ORiON Serial Key


        Download Zip ::: https://urlgoal.com/2uCKoQ



        -
        - d5da3c52bf
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Facebook 6 Haneli Onay Kodu Hack.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Facebook 6 Haneli Onay Kodu Hack.md deleted file mode 100644 index a2e79f0f6f838047c3fd236f909b493b11998e6f..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Facebook 6 Haneli Onay Kodu Hack.md +++ /dev/null @@ -1,7 +0,0 @@ -

        facebook 6-digit confirmation code hack


        DOWNLOADhttps://urlgoal.com/2uCJRB



        -
        -Jan 2022 - ... 72041activaradobeaftereffectscccrackDiablo 2 Lod V113 C No Cd CrackDESKTOP AUTHOR 7.1.1 crack.115facebook 6 haneli onay kodu hackgolkes. info Cracked 2 crack Cracked 1 Cracked 2 cracked 3 cracked 4 cracked 5 cracked 6 cracked 7 cracked 8 cracked 9 cracked 10 cracked -11 cracked 12 cracked 13 cracked 14 cracked 15 cracked 16 cracked 17 cracked 18 cracked 19 cracked 20 cracked 21 cracked 22 cracked 23 cracked 23 cracked 25 cracked 26 cracked 27 cracked 28 cracked 29 cracked 30 cracked 31 cracked 32 cracked 33 cracked 34 cracked 35 cracked 36 cracked 37 cracked 38 8a78ff9644
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fiat Ecu Scan 34 2 Crack Free.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fiat Ecu Scan 34 2 Crack Free.md deleted file mode 100644 index b0c3eb0028bc929a123aee0aaa5eabca491bb081..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fiat Ecu Scan 34 2 Crack Free.md +++ /dev/null @@ -1,20 +0,0 @@ -

        Fiat Ecu Scan 34 2 Crack Free


        Downloadhttps://urlgoal.com/2uCJH1



        -
        -Fiat ecu scanner 2e. I have a brand new scanner and when i connect it to my phone it reads it as a 1. - -I’m getting an error “Can’t find mountpoint for /var/cache/ecuscan. Reply. Roy Sayers 5 years ago. I am using an ecu that is an older version and would like to get the newer version in order to use some new features in the computer software. I currently have the newer version of the ecu - -all the files. I have a brand new scanner and when i connect it to my phone it reads it as a 1. Did you do anything special to get the newer version of the ecu scanner on your phone? Please let me know what your ecu is, I’d be interested in knowing. ecuscan. - -First of all I have to tell you that I have an old version of this software and the ecu works fine. 2. It’s not supported by the manufacturer anymore. I would download the software update.zip file from their website and install it on your computer. - -Please let me know if this helped. It really sucks that the manufacturer has stopped supporting this product. I bought this because i loved it. Reply. tachy mccarthy 7 years ago. My scanner is working fine. It will read my acura tl because my windows software is reading it as a Honda. However it is reading my 1999 toyota corolla. Any suggestions? Reply. - -I have a scanner that I have never had a problem reading. Did you find a way to update the version? I have to scan EVERY single car I own to check them out. I find it very frustrating and would love to have a scanner that is compatible with all of my cars. Reply. tony mcglynn 10 years ago. How do I get a new version of this software? Reply. john mcknight 1 year ago. Can this be updated to read models - -You might try the latest version I just got the software update it says an error on my computer and is still a 1. Reply. tony mcknight 1 year ago. How do I get a new version of this software? Reply. john mcknight 1 year ago. Can this be updated to read models that did not come with the software Reply. - -Can any one help? Reply. Richard Rutledge 5 years ago. Can this be updated to read 4fefd39f24
        -
        -
        -

        diff --git a/spaces/rewoo/ReWOO-Demo/nodes/Worker.py b/spaces/rewoo/ReWOO-Demo/nodes/Worker.py deleted file mode 100644 index 9b99dccc1ca7f5bf5ba22dabccb8138dc0ee21fd..0000000000000000000000000000000000000000 --- a/spaces/rewoo/ReWOO-Demo/nodes/Worker.py +++ /dev/null @@ -1,229 +0,0 @@ -import requests -from geopy.geocoders import Nominatim -from langchain import OpenAI, LLMMathChain, LLMChain, PromptTemplate, Wikipedia -from langchain.agents import Tool -from langchain.agents.react.base import DocstoreExplorer -from langchain.document_loaders import TextLoader -from langchain.indexes import VectorstoreIndexCreator -from langchain.utilities import SerpAPIWrapper -from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper - -from nodes.Node import Node - - -class GoogleWorker(Node): - def __init__(self, name="Google"): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = False - self.description = "Worker that searches results from Google. Useful when you need to find short " \ - "and succinct answers about a specific topic. Input should be a search query." - - def run(self, input, log=False): - assert isinstance(input, self.input_type) - tool = SerpAPIWrapper() - evidence = tool.run(input) - assert isinstance(evidence, self.output_type) - if log: - print(f"Running {self.name} with input {input}\nOutput: {evidence}\n") - return evidence - - -class WikipediaWorker(Node): - def __init__(self, name="Wikipedia", docstore=None): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = False - self.description = "Worker that search for similar page contents from Wikipedia. Useful when you need to " \ - "get holistic knowledge about people, places, companies, historical events, " \ - "or other subjects. The response are long and might contain some irrelevant information. " \ - "Input should be a search query." - self.docstore = docstore - - def run(self, input, log=False): - if not self.docstore: - self.docstore = DocstoreExplorer(Wikipedia()) - assert isinstance(input, self.input_type) - tool = Tool( - name="Search", - func=self.docstore.search, - description="useful for when you need to ask with search" - ) - evidence = tool.run(input) - assert isinstance(evidence, self.output_type) - if log: - print(f"Running {self.name} with input {input}\nOutput: {evidence}\n") - return evidence - - -class DocStoreLookUpWorker(Node): - def __init__(self, name="LookUp", docstore=None): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = False - self.description = "Worker that search the direct sentence in current Wikipedia result page. Useful when you " \ - "need to find information about a specific keyword from a existing Wikipedia search " \ - "result. Input should be a search keyword." 
- self.docstore = docstore - - def run(self, input, log=False): - if not self.docstore: - raise ValueError("Docstore must be provided for lookup") - assert isinstance(input, self.input_type) - tool = Tool( - name="Lookup", - func=self.docstore.lookup, - description="useful for when you need to ask with lookup" - ) - evidence = tool.run(input) - assert isinstance(evidence, self.output_type) - if log: - print(f"Running {self.name} with input {input}\nOutput: {evidence}\n") - return evidence - - -class CustomWolframAlphaAPITool(WolframAlphaAPIWrapper): - def __init__(self): - super().__init__() - - def run(self, query: str) -> str: - """Run query through WolframAlpha and parse result.""" - res = self.wolfram_client.query(query) - - try: - answer = next(res.results).text - except StopIteration: - return "Wolfram Alpha wasn't able to answer it" - - if answer is None or answer == "": - return "No good Wolfram Alpha Result was found" - else: - return f"Answer: {answer}" - - -class WolframAlphaWorker(Node): - def __init__(self, name="WolframAlpha"): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = False - self.description = "A WolframAlpha search engine. Useful when you need to solve a complicated Mathematical or " \ - "Algebraic equation. Input should be an equation or function." - - def run(self, input, log=False): - assert isinstance(input, self.input_type) - tool = CustomWolframAlphaAPITool() - evidence = tool.run(input).replace("Answer:", "").strip() - assert isinstance(evidence, self.output_type) - if log: - print(f"Running {self.name} with input {input}\nOutput: {evidence}\n") - return evidence - - -class CalculatorWorker(Node): - def __init__(self, name="Calculator"): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = True - self.description = "A calculator that can compute arithmetic expressions. Useful when you need to perform " \ - "math calculations. Input should be a mathematical expression" - - def run(self, input, log=False): - assert isinstance(input, self.input_type) - llm = OpenAI(temperature=0) - tool = LLMMathChain(llm=llm, verbose=False) - response = tool(input) - evidence = response["answer"].replace("Answer:", "").strip() - assert isinstance(evidence, self.output_type) - if log: - return {"input": response["question"], "output": response["answer"]} - return evidence - - -class LLMWorker(Node): - def __init__(self, name="LLM"): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = True - self.description = "A pretrained LLM like yourself. Useful when you need to act with general world " \ - "knowledge and common sense. Prioritize it when you are confident in solving the problem " \ - "yourself. Input can be any instruction." - - def run(self, input, log=False): - assert isinstance(input, self.input_type) - llm = OpenAI(temperature=0) - prompt = PromptTemplate(template="Respond in short directly with no extra words.\n\n{request}", - input_variables=["request"]) - tool = LLMChain(prompt=prompt, llm=llm, verbose=False) - response = tool(input) - evidence = response["text"].strip("\n") - assert isinstance(evidence, self.output_type) - if log: - return {"input": response["request"], "output": response["text"]} - return evidence - - -class ZipCodeRetriever(Node): - - def __init__(self, name="ZipCodeRetriever"): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = False - self.description = "A zip code retriever. Useful when you need to get users' current zip code. 
Input can be " \ - "left blank." - - def get_ip_address(self): - response = requests.get("https://ipinfo.io/json") - data = response.json() - return data["ip"] - - def get_location_data(sefl, ip_address): - url = f"https://ipinfo.io/{ip_address}/json" - response = requests.get(url) - data = response.json() - return data - - def get_zipcode_from_lat_long(self, lat, long): - geolocator = Nominatim(user_agent="zipcode_locator") - location = geolocator.reverse((lat, long)) - return location.raw["address"]["postcode"] - - def get_current_zipcode(self): - ip_address = self.get_ip_address() - location_data = self.get_location_data(ip_address) - lat, long = location_data["loc"].split(",") - zipcode = self.get_zipcode_from_lat_long(float(lat), float(long)) - return zipcode - - def run(self, input): - assert isinstance(input, self.input_type) - evidence = self.get_current_zipcode() - assert isinstance(evidence, self.output_type) - - -class SearchDocWorker(Node): - - def __init__(self, doc_name, doc_path, name="SearchDoc"): - super().__init__(name, input_type=str, output_type=str) - self.isLLMBased = True - self.doc_path = doc_path - self.description = f"A vector store that searches for similar and related content in document: {doc_name}. " \ - f"The result is a huge chunk of text related to your search but can also " \ - f"contain irrelevant info. Input should be a search query." - - def run(self, input, log=False): - assert isinstance(input, self.input_type) - loader = TextLoader(self.doc_path) - vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore - evidence = vectorstore.similarity_search(input, k=1)[0].page_content - assert isinstance(evidence, self.output_type) - if log: - print(f"Running {self.name} with input {input}\nOutput: {evidence}\n") - return evidence - - -class SearchSOTUWorker(SearchDocWorker): - def __init__(self, name="SearchSOTU"): - super().__init__(name=name, doc_name="state_of_the_union", doc_path="data/docs/state_of_the_union.txt") - - - -WORKER_REGISTRY = {"Google": GoogleWorker(), - "Wikipedia": WikipediaWorker(), - "LookUp": DocStoreLookUpWorker(), - "WolframAlpha": WolframAlphaWorker(), - "Calculator": CalculatorWorker(), - "LLM": LLMWorker(), - "SearchSOTU": SearchSOTUWorker()} diff --git a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/maskflownets.py b/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/maskflownets.py deleted file mode 100644 index c879e17a3a5bd67df8fd15aca1fe71a10e180155..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/maskflownets.py +++ /dev/null @@ -1,44 +0,0 @@ -model = dict( - type='MaskFlowNetS', - freeze_net=False, - encoder=dict( - type='PWCNetEncoder', - in_channels=3, - net_type='Basic', - pyramid_levels=[ - 'level1', 'level2', 'level3', 'level4', 'level5', 'level6' - ], - out_channels=(16, 32, 64, 96, 128, 196), - strides=(2, 2, 2, 2, 2, 2), - dilations=(1, 1, 1, 1, 1, 1), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)), - decoder=dict( - type='MaskFlowNetSDecoder', - warp_in_channels=dict( - level6=196, level5=128, level4=96, level3=64, level2=32), - up_channels=dict( - level6=16, level5=16, level4=16, level3=16, level2=16), - warp_type='AsymOFMM', - in_channels=dict( - level6=81, level5=227, level4=195, level3=163, level2=131), - corr_cfg=dict(type='Correlation', max_displacement=4), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - scaled=False, - post_processor=dict(type='ContextNet', 
in_channels=579), - flow_loss=dict( - type='MultiLevelEPE', - p=2, - reduction='sum', - weights={ - 'level2': 0.005, - 'level3': 0.01, - 'level4': 0.02, - 'level5': 0.08, - 'level6': 0.32 - }), - ), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(), - init_cfg=dict( - type='Kaiming', a=0.1, distribution='uniform', layer='Conv2d')) diff --git a/spaces/riyueyiming/gpt/custom.css b/spaces/riyueyiming/gpt/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/riyueyiming/gpt/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* 
Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/robin0307/MMOCR/configs/kie/sdmgr/README.md b/spaces/robin0307/MMOCR/configs/kie/sdmgr/README.md deleted file mode 100644 index 645696b75c76e496c394a8f6773a8fa8a0d939da..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/kie/sdmgr/README.md +++ /dev/null @@ -1,52 +0,0 @@ -# SDMGR - -> [Spatial Dual-Modality Graph Reasoning for Key Information 
Extraction](https://arxiv.org/abs/2103.14470) - - - -## Abstract - -Key information extraction from document images is of paramount importance in office automation. Conventional template matching based approaches fail to generalize well to document images of unseen templates, and are not robust against text recognition errors. In this paper, we propose an end-to-end Spatial Dual-Modality Graph Reasoning method (SDMG-R) to extract key information from unstructured document images. We model document images as dual-modality graphs, nodes of which encode both the visual and textual features of detected text regions, and edges of which represent the spatial relations between neighboring text regions. The key information extraction is solved by iteratively propagating messages along graph edges and reasoning the categories of graph nodes. In order to roundly evaluate our proposed method as well as boost the future research, we release a new dataset named WildReceipt, which is collected and annotated tailored for the evaluation of key information extraction from document images of unseen templates in the wild. It contains 25 key information categories, a total of about 69000 text boxes, and is about 2 times larger than the existing public datasets. Extensive experiments validate that all information including visual features, textual features and spatial relations can benefit key information extraction. It has been shown that SDMG-R can effectively extract key information from document images of unseen templates, and obtain new state-of-the-art results on the recent popular benchmark SROIE and our WildReceipt. Our code and dataset will be publicly released. - -
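To make the graph-reasoning idea in the abstract concrete, here is a minimal, illustrative PyTorch sketch. It is not the released SDMG-R implementation: the layer sizes, the concatenation-based fusion of visual and textual features, the fully connected graph, the GRU-style node update, and the 26-way classifier are all assumptions made for the example. It only shows how nodes built from detected text regions can exchange messages along spatially defined edges and then be classified into key-information categories.

```python
# Illustrative sketch only -- not the actual SDMG-R code from MMOCR.
# Nodes fuse visual and textual features of detected text regions, edges encode
# relative box geometry, and node categories are refined by message passing.
import torch
import torch.nn as nn


class DualModalityGraphSketch(nn.Module):
    def __init__(self, visual_dim=64, text_dim=32, hidden_dim=64, num_classes=26, num_iters=2):
        super().__init__()
        # Fuse visual and textual features into a single node embedding.
        self.node_fuse = nn.Linear(visual_dim + text_dim, hidden_dim)
        # Edge features derived from relative box geometry (dx, dy, dw, dh).
        self.edge_mlp = nn.Sequential(nn.Linear(4, hidden_dim), nn.ReLU())
        # Message and node-update functions used at every propagation step.
        self.message_mlp = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        self.update_gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.num_iters = num_iters

    def forward(self, visual_feats, text_feats, boxes):
        # visual_feats: (N, visual_dim), text_feats: (N, text_dim), boxes: (N, 4) as (x, y, w, h)
        nodes = torch.relu(self.node_fuse(torch.cat([visual_feats, text_feats], dim=-1)))
        # Pairwise spatial relations between all text regions (fully connected graph).
        rel = boxes.unsqueeze(1) - boxes.unsqueeze(0)           # (N, N, 4)
        edges = self.edge_mlp(rel)                              # (N, N, hidden_dim)
        for _ in range(self.num_iters):
            # Each node aggregates messages from every neighbour and its connecting edge.
            pair = torch.cat([nodes.unsqueeze(0).expand(len(nodes), -1, -1), edges], dim=-1)
            messages = self.message_mlp(pair).mean(dim=1)       # (N, hidden_dim)
            nodes = self.update_gru(messages, nodes)            # refine node states
        return self.classifier(nodes)                           # per-node category logits


if __name__ == "__main__":
    n = 5  # five detected text regions
    logits = DualModalityGraphSketch()(torch.randn(n, 64), torch.randn(n, 32), torch.rand(n, 4))
    print(logits.shape)  # torch.Size([5, 26])
```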
        - -
        - -## Results and models - -### WildReceipt - -| Method | Modality | Macro F1-Score | Download | -| :--------------------------------------------------------------------: | :--------------: | :------------: | :--------------------------------------------------------------------------------------------------: | -| [sdmgr_unet16](/configs/kie/sdmgr/sdmgr_unet16_60e_wildreceipt.py) | Visual + Textual | 0.888 | [model](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_unet16_60e_wildreceipt_20210520-7489e6de.pth) \| [log](https://download.openmmlab.com/mmocr/kie/sdmgr/20210520_132236.log.json) | -| [sdmgr_novisual](/configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt.py) | Textual | 0.870 | [model](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_novisual_60e_wildreceipt_20210517-a44850da.pth) \| [log](https://download.openmmlab.com/mmocr/kie/sdmgr/20210517_205829.log.json) | - -```{note} -1. For `sdmgr_novisual`, images are not needed for training and testing. So fake `img_prefix` can be used in configs. As well, fake `file_name` can be used in annotation files. -``` - -### WildReceiptOpenset - -| Method | Modality | Edge F1-Score | Node Macro F1-Score | Node Micro F1-Score | Download | -| :-------------------------------------------------------------------: | :------: | :-----------: | :-----------------: | :-----------------: | :----------------------------------------------------------------------: | -| [sdmgr_novisual](/configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt_openset.py) | Textual | 0.786 | 0.926 | 0.935 | [model](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_novisual_60e_wildreceipt_openset_20210917-d236b3ea.pth) \| [log](https://download.openmmlab.com/mmocr/kie/sdmgr/20210917_050824.log.json) | - -```{note} -1. In the case of openset, the number of node categories is unknown or unfixed, and more node category can be added. -2. To show that our method can handle openset problem, we modify the ground truth of `WildReceipt` to `WildReceiptOpenset`. The `nodes` are just classified into 4 classes: `background, key, value, others`, while adding `edge` labels for each box. -3. The model is used to predict whether two nodes are a pair connecting by a valid edge. -4. You can learn more about the key differences between CloseSet and OpenSet annotations in our [tutorial](tutorials/kie_closeset_openset.md). 
-``` - -## Citation - -```bibtex -@misc{sun2021spatial, - title={Spatial Dual-Modality Graph Reasoning for Key Information Extraction}, - author={Hongbin Sun and Zhanghui Kuang and Xiaoyu Yue and Chenhao Lin and Wayne Zhang}, - year={2021}, - eprint={2103.14470}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` diff --git a/spaces/robin0307/MMOCR/configs/textrecog/satrn/satrn_small.py b/spaces/robin0307/MMOCR/configs/textrecog/satrn/satrn_small.py deleted file mode 100644 index 96f86797f4700fd6ab9590fa983323f3e22d15c2..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textrecog/satrn/satrn_small.py +++ /dev/null @@ -1,68 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/recog_pipelines/satrn_pipeline.py', - '../../_base_/recog_datasets/ST_MJ_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -label_convertor = dict( - type='AttnConvertor', dict_type='DICT90', with_unknown=True) - -model = dict( - type='SATRN', - backbone=dict(type='ShallowCNN', input_channels=3, hidden_dim=256), - encoder=dict( - type='SatrnEncoder', - n_layers=6, - n_head=8, - d_k=256 // 8, - d_v=256 // 8, - d_model=256, - n_position=100, - d_inner=256 * 4, - dropout=0.1), - decoder=dict( - type='NRTRDecoder', - n_layers=6, - d_embedding=256, - n_head=8, - d_model=256, - d_inner=256 * 4, - d_k=256 // 8, - d_v=256 // 8), - loss=dict(type='TFLoss'), - label_convertor=label_convertor, - max_seq_len=25) - -# optimizer -optimizer = dict(type='Adam', lr=3e-4) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='step', step=[3, 4]) -total_epochs = 6 - -data = dict( - samples_per_gpu=64, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/backbones/mobilenet_v2.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/backbones/mobilenet_v2.py deleted file mode 100644 index 8c6fcfaaa4c550b3568343f6b9baf1512d41b4db..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/backbones/mobilenet_v2.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidual, make_divisible - - -@BACKBONES.register_module() -class MobileNetV2(BaseModule): - """MobileNetV2 backbone. - - Args: - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - out_indices (Sequence[int], optional): Output from which stages. - Default: (1, 2, 4, 7). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None, which means using conv2d. 
- norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - # Parameters to build layers. 4 parameters are needed to construct a - # layer, from left to right: expand_ratio, channel, num_blocks, stride. - arch_settings = [[1, 16, 1, 1], [6, 24, 2, 2], [6, 32, 3, 2], - [6, 64, 4, 2], [6, 96, 3, 1], [6, 160, 3, 2], - [6, 320, 1, 1]] - - def __init__(self, - widen_factor=1., - out_indices=(1, 2, 4, 7), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(MobileNetV2, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - self.widen_factor = widen_factor - self.out_indices = out_indices - if not set(out_indices).issubset(set(range(0, 8))): - raise ValueError('out_indices must be a subset of range' - f'(0, 8). But received {out_indices}') - - if frozen_stages not in range(-1, 8): - raise ValueError('frozen_stages must be in range(-1, 8). 
' - f'But received {frozen_stages}') - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.in_channels = make_divisible(32 * widen_factor, 8) - - self.conv1 = ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.layers = [] - - for i, layer_cfg in enumerate(self.arch_settings): - expand_ratio, channel, num_blocks, stride = layer_cfg - out_channels = make_divisible(channel * widen_factor, 8) - inverted_res_layer = self.make_layer( - out_channels=out_channels, - num_blocks=num_blocks, - stride=stride, - expand_ratio=expand_ratio) - layer_name = f'layer{i + 1}' - self.add_module(layer_name, inverted_res_layer) - self.layers.append(layer_name) - - if widen_factor > 1.0: - self.out_channel = int(1280 * widen_factor) - else: - self.out_channel = 1280 - - layer = ConvModule( - in_channels=self.in_channels, - out_channels=self.out_channel, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.add_module('conv2', layer) - self.layers.append('conv2') - - def make_layer(self, out_channels, num_blocks, stride, expand_ratio): - """Stack InvertedResidual blocks to build a layer for MobileNetV2. - - Args: - out_channels (int): out_channels of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - expand_ratio (int): Expand the number of channels of the - hidden layer in InvertedResidual by this ratio. Default: 6. - """ - layers = [] - for i in range(num_blocks): - if i >= 1: - stride = 1 - layers.append( - InvertedResidual( - self.in_channels, - out_channels, - mid_channels=int(round(self.in_channels * expand_ratio)), - stride=stride, - with_expand_conv=expand_ratio != 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - with_cp=self.with_cp)) - self.in_channels = out_channels - - return nn.Sequential(*layers) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for i in range(1, self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - frozen.""" - super(MobileNetV2, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/ronig/protein_binding_search/app.py b/spaces/ronig/protein_binding_search/app.py deleted file mode 100644 index 83b84a810c56cd10c51494f8eb51098d54e0b623..0000000000000000000000000000000000000000 --- a/spaces/ronig/protein_binding_search/app.py +++ /dev/null @@ -1,184 +0,0 @@ -import collections -import os -from typing import Dict, List - -import gradio as gr - -from index_list import read_index_list -from protein_viz import get_pdb_title, render_html -from search_engine import MilvusParams, 
ProteinSearchEngine - -model_repo = "ronig/protein_biencoder" - -available_indexes = read_index_list() -engine = ProteinSearchEngine( - milvus_params=MilvusParams( - uri="https://in03-ddab8e9a5a09fcc.api.gcp-us-west1.zillizcloud.com", - token=os.environ.get("MILVUS_TOKEN"), - db_name="Protein", - collection_name="Peptriever", - ), - model_repo=model_repo, -) - -max_results = 1000 -choice_sep = " | " -max_seq_length = 50 - - -def search_and_display(seq, max_res, index_selection): - n_search_res = 1024 - _validate_sequence_length(seq) - max_res = int(limit_n_results(max_res)) - if index_selection == "All Species": - index_selection = None - search_res = engine.search_by_sequence( - seq, n=n_search_res, organism=index_selection - ) - agg_search_results = aggregate_search_results(search_res, max_res) - formatted_search_results = format_search_results(agg_search_results) - results_options = update_dropdown_menu(agg_search_results) - return formatted_search_results, results_options - - -def _validate_sequence_length(seq): - if len(seq) > max_seq_length: - raise gr.Error("Only peptide input is currently supported") - - -def limit_n_results(n): - return max(min(n, max_results), 1) - - -def aggregate_search_results(raw_results: List[dict], max_res: int) -> Dict[str, dict]: - aggregated_by_uniprot = collections.defaultdict(list) - for raw_result in raw_results: - entry = select_keys( - raw_result, - keys=["pdb_name", "chain_id", "score", "organism", "uniprot_id", "genes"], - ) - uniprot_id = raw_result["uniprot_id"] - - if uniprot_id is not None: - aggregated_by_uniprot[uniprot_id].append(entry) - if len(aggregated_by_uniprot) >= max_res: - return dict(aggregated_by_uniprot) - return dict(aggregated_by_uniprot) - - -def select_keys(d: dict, keys: List[str]): - return {key: d[key] for key in keys} - - -def format_search_results(agg_search_results): - formatted_search_results = {} - for uniprot_id, entries in agg_search_results.items(): - entry = entries[0] - organism = entry["organism"] - score = entry["score"] - genes = entry["genes"] - key = f"Uniprot ID: {uniprot_id} | Organism: {organism} | Gene Names: {genes}" - formatted_search_results[key] = score - return formatted_search_results - - -def update_dropdown_menu(agg_search_res): - choices = [] - for uniprot_id, entries in agg_search_res.items(): - for entry in entries: - choice = choice_sep.join( - [ - uniprot_id, - entry["pdb_name"], - entry["chain_id"], - entry["genes"] or "", - ] - ) - choices.append(choice) - - if choices: - update = gr.Dropdown.update( - choices=choices, interactive=True, value=choices[0], visible=True - ) - else: - update = gr.Dropdown.update( - choices=choices, interactive=True, visible=False, value=None - ) - return update - - -def parse_pdb_search_result(raw_result): - prot = raw_result["pdb_name"] - chain = raw_result["chain_id"] - value = raw_result["score"] - gene_names = raw_result["genes"] - species = raw_result["organism"] - key = f"PDB: {prot}.{chain}" - if gene_names is not None: - key += f" | Genes: {gene_names} | Organism: {species}" - return key, value - - -def switch_viz(new_choice): - if new_choice is None: - html = "" - title_update = gr.Markdown.update(visible=False) - description_update = gr.Markdown.update(value=None, visible=False) - else: - choice_parts = new_choice.split(choice_sep) - pdb_id, chain = choice_parts[1:3] - title_update = gr.Markdown.update(visible=True) - pdb_title = get_pdb_title(pdb_id) - - new_value = f"""**PDB Title**: {pdb_title}""" - - description_update = 
gr.Markdown.update(value=new_value, visible=True) - html = render_html(pdb_id=pdb_id, chain=chain) - return html, title_update, description_update - - -with gr.Blocks() as demo: - with gr.Column(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - seq_input = gr.Textbox(value="APTMPPPLPP", label="Input Sequence") - n_results = gr.Number(10, label="N Results") - index_selector = gr.Dropdown( - choices=available_indexes, - value="All Species", - multiselect=False, - visible=True, - label="Index", - ) - search_button = gr.Button("Search", variant="primary") - search_results = gr.Label( - num_top_classes=max_results, label="Search Results" - ) - viz_header = gr.Markdown("## Visualization", visible=False) - results_selector = gr.Dropdown( - choices=[], - multiselect=False, - visible=False, - label="Visualized Search Result", - ) - viz_body = gr.Markdown("", visible=False) - protein_viz = gr.HTML( - value=render_html(pdb_id=None, chain=None), - label="Protein Visualization", - ) - gr.Examples( - ["APTMPPPLPP", "KFLIYQMECSTMIFGL", "PHFAMPPIHEDHLE", "AEERIISLD"], - inputs=[seq_input], - ) - search_button.click( - search_and_display, - inputs=[seq_input, n_results, index_selector], - outputs=[search_results, results_selector], - ) - results_selector.change( - switch_viz, inputs=results_selector, outputs=[protein_viz, viz_header, viz_body] - ) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/rorallitri/biomedical-language-models/logs/A Tribe Called Quest The Love Movement Zip How to Get the Deluxe Edition with Bonus Tracks and Remixes.md b/spaces/rorallitri/biomedical-language-models/logs/A Tribe Called Quest The Love Movement Zip How to Get the Deluxe Edition with Bonus Tracks and Remixes.md deleted file mode 100644 index 3635406914447c37918f1d74e01d04167dd3026f..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/A Tribe Called Quest The Love Movement Zip How to Get the Deluxe Edition with Bonus Tracks and Remixes.md +++ /dev/null @@ -1,6 +0,0 @@ -

A Tribe Called Quest The Love Movement Zip


Download https://tinurll.com/2uzlkT



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Autodesk AutoCAD 2018 8.47 (x86x64) Keygen Crack [WORK].md b/spaces/rorallitri/biomedical-language-models/logs/Autodesk AutoCAD 2018 8.47 (x86x64) Keygen Crack [WORK].md deleted file mode 100644 index fb06ccd7df3689110fe96db73fa72a968130eca3..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Autodesk AutoCAD 2018 8.47 (x86x64) Keygen Crack [WORK].md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

        AutoCAD LT allows users to work as part of a team or in the traditional, single-user design format, with other collaborative tools available. The advantages of Autodesk AutoCAD LT can be found in the following list. An additional explanation of Autodesk AutoCAD LT version 2017 and its benefits are also provided. Architecture: Architectural design tools are an essential part of the overall design process. Designs can easily be transferred between Autodesk AutoCAD LT and Autodesk AutoCAD through the use of DWG files. Arranging and displaying objects (such as 3D models or text) is also easier with these tools. File formats: Autodesk AutoCAD LT supports all standard DWG and DXF file formats.

        -

        Autodesk AutoCAD 2018 8.47 (x86x64) Keygen Crack


Download https://tinurll.com/2uzlQO



        -

        AutoCAD LT and AutoCAD use DWG and DXF file formats for a broad range of design and construction documents. AUTOCAD, the architectural, mechanical, and electrical design program for AutoCAD LT, also supports HDF, stl, and 3DS file formats for models and 3D drawings. Autodesk AutoCAD LT users can share their work, and drawings or models created in one program can be viewed and edited in another.

        -

        Note: Autodesk AutoCAD LT does not support 2D drawings. Layout and drafting: You can drag and drop objects from one program to another. Flexibility and customization: You can customize menus, and you can export and import user-defined or template files. Version control: You can view and edit the same DWG file from multiple computers.

        -

        You can also use Autodesk AutoCAD LT for construction management. This version is available in the Construction segment of the AutoCAD LT program. Building or remodeling a house or commercial building requires the integration of a variety of disciplines, including the use of Autodesk AutoCAD LT as well as other tools.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Full Extra Quality SHOM Navigation Maps (french English Channel) Map2ozi.md b/spaces/rorallitri/biomedical-language-models/logs/Full Extra Quality SHOM Navigation Maps (french English Channel) Map2ozi.md deleted file mode 100644 index a5d564a4e0bb94371a83b585032a04e0fff4a0f1..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Full Extra Quality SHOM Navigation Maps (french English Channel) Map2ozi.md +++ /dev/null @@ -1,6 +0,0 @@ -

        FULL SHOM navigation maps (french english channel) map2ozi


DOWNLOAD https://tinurll.com/2uzlYu



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/saadkiet/AI_Blog_generation_Powered_by_GPT_NEO_1.3B/README.md b/spaces/saadkiet/AI_Blog_generation_Powered_by_GPT_NEO_1.3B/README.md deleted file mode 100644 index 45aafc3394068725cecd81c2581290018b8a537b..0000000000000000000000000000000000000000 --- a/spaces/saadkiet/AI_Blog_generation_Powered_by_GPT_NEO_1.3B/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AI Blog Generation Powered By GPT NEO 1.3B -emoji: ⚡ -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sachinrcz/isItCarOrPlaceOrBus/README.md b/spaces/sachinrcz/isItCarOrPlaceOrBus/README.md deleted file mode 100644 index 04c16cf16075e0cdb0501b7e68061f86eab96a22..0000000000000000000000000000000000000000 --- a/spaces/sachinrcz/isItCarOrPlaceOrBus/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: IsItCarOrPlaceOrBus -emoji: 📚 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/safebuster2/sudoku/app.py b/spaces/safebuster2/sudoku/app.py deleted file mode 100644 index f48e60fa6aff58890847d9ec4f5ae253036cf1dd..0000000000000000000000000000000000000000 --- a/spaces/safebuster2/sudoku/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import gradio as gr -from fastai.tabular.all import * - -categories = [1, 2, 3, 4, 5, 6, 7, 8, 9] -cat_names = [] -for i in range(81): - name = 'cell_' + str(i) - cat_names.append(name) - - -def predict(cells_str, idx): - learn = load_learner('./models/model_' + str(idx) + '.pkl') - cells = list(map(int, cells_str.split(','))) - print(cells) - serie = dict(zip(cat_names, cells)) - print(serie) - df = pd.DataFrame(serie, columns=cat_names, index=range(81)) - df.drop(['cell_' + str(idx)], axis=1, inplace=True) - print(df.iloc[0]) - row, clas, probs = learn.predict(df.iloc[0]) - return dict(zip(categories, map(float, probs))) - - -examples = [[ - '1,2,0,3,8,6,7,5,4,7,5,3,2,1,4,9,8,6,4,6,8,9,5,7,2,3,1,8,7,2,5,6,9,1,4,3,5,1,4,7,3,2,8,6,9,9,3,6,8,4,1,5,2,7,2,9,5,6,7,3,4,1,8,6,8,1,4,9,5,3,7,2,3,4,7,1,2,8,6,9,5', - 2], [ - '1,2,9,3,8,6,7,5,4,7,5,3,2,1,4,9,8,6,4,6,8,9,5,7,2,3,1,8,7,2,5,6,9,1,4,3,5,1,4,7,3,2,8,6,9,9,3,6,8,4,1,5,2,7,2,9,5,6,7,3,4,1,8,6,8,1,4,9,5,3,7,2,3,4,7,1,2,8,6,9,0', - 80], [ - '0,0,0,0,0,0,7,5,4,7,5,3,2,1,4,9,8,6,4,6,8,9,5,7,2,3,1,8,7,2,5,6,9,1,4,3,5,1,4,7,3,2,8,6,9,9,3,6,8,4,1,5,2,7,2,9,5,6,7,3,4,1,8,6,8,1,4,9,5,3,7,2,3,4,7,1,2,8,6,9,0', - 2]] -iface = gr.Interface(fn=predict, - inputs=['text', gr.Slider(0, 80, step=1)], - outputs=gr.Label(num_top_classes=9), - examples=examples) -iface.launch() diff --git a/spaces/sasha/find-my-pedro/README.md b/spaces/sasha/find-my-pedro/README.md deleted file mode 100644 index 5b355e49b9fc82040a1b8e010f35f18cf5acd86d..0000000000000000000000000000000000000000 --- a/spaces/sasha/find-my-pedro/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Find My Pedro Pascal 😍 -emoji: 😍 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: SDbiaseval/find-my-butterfly ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sayakpaul/convert-kerascv-sd-diffusers/hub_utils/repo.py 
b/spaces/sayakpaul/convert-kerascv-sd-diffusers/hub_utils/repo.py deleted file mode 100644 index c13bf35de098594f2c1d66ec2881eab1dbe1abd7..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/convert-kerascv-sd-diffusers/hub_utils/repo.py +++ /dev/null @@ -1,20 +0,0 @@ -from huggingface_hub import HfApi, create_repo - - -def push_to_hub(hf_token: str, push_dir: str, repo_prefix: None) -> str: - try: - if hf_token == "": - return "No HF token provided. Model won't be pushed." - else: - hf_api = HfApi(token=hf_token) - user = hf_api.whoami()["name"] - repo_id = ( - f"{user}/{push_dir}" - if repo_prefix == "" - else f"{user}/{repo_prefix}-{push_dir}" - ) - _ = create_repo(repo_id=repo_id, token=hf_token, exist_ok=True) - url = hf_api.upload_folder(folder_path=push_dir, repo_id=repo_id) - return f"💡🚛 Model successfully pushed: [{url}]({url})" - except Exception as e: - return f"{e}" diff --git a/spaces/scedlatioru/img-to-music/example/Monsterhunter3ultimate3dsisodownload NEW!.md b/spaces/scedlatioru/img-to-music/example/Monsterhunter3ultimate3dsisodownload NEW!.md deleted file mode 100644 index bd586d4a74497f46fda695e5cd07dd41a50f634f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Monsterhunter3ultimate3dsisodownload NEW!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Monsterhunter3ultimate3dsisodownload


Download File https://gohhs.com/2uEzjD



        - -Monsterhunter3ultimate3dsisodownload · igo8 mio moov 2gb rom .rar · Assassins Creed Unity Multiplayer Crack For 39 · Windows 10 X64 Pro ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Vividworkshopdata121crack ((LINK)).md b/spaces/scedlatioru/img-to-music/example/Vividworkshopdata121crack ((LINK)).md deleted file mode 100644 index ca3ae15ce63159a262a8600aa4b046a4e6c69f2a..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Vividworkshopdata121crack ((LINK)).md +++ /dev/null @@ -1,12 +0,0 @@ -

        vividworkshopdata121crack


        DOWNLOAD 🌟 https://gohhs.com/2uEyX9



        -
        -We have seen success with English only and we offer a 4-day training in Greek, German, Spanish. English only. Register and Become a Vivid. Free Greek, German, Spanish Classroom Tutorials - Read and view subtitles for a. - -I have a wonderful Spanish-German-English-Greek-Turkish-Latin lesson plan, based on the GKCE 4g course. Check it out. It includes. 4g Course English Language for beginners. Learn to speak Greek language with Vivid Greek, the #1 Greek language course. Learn to speak Greek language with Vivid Greek, the #1 Greek language course. Plan your visit to Greece with our Travel Guide. Find guided tours and activities, festivals, popular sights and attractions, with easy-to-use maps and comprehensive. - -Greece for Free. On its way to the top of the most visited countries in Europe, Greece has been a gift to tourism since Ancient Times. Whether you are coming to Greece for business or leisure, it is a one of a kind destination with a natural and historical heritage that deserves to be explored. We offer you free. A 400 year old sponge cake! In my quest for authentic Greek food I stumbled on a great website written in Greek: - -Learn and review vocabulary, learn the meaning of expressions in Greek. Find these free Greek lessons, tests, and quizzes on Engvid. We offer free lessons to help. Explore the world of Greek via our languages, people, and culture education to make you. Греческие фразеологические слова в этом учебном руководстве посвящены общей характеристике различных языков Греческого фонетического синтаксиса, синтаксиса основных. Follow the simple instructions below to help 4fefd39f24
        -
        -
        -

        diff --git a/spaces/schibsted/Facial_Recognition_with_Sentiment_Detector/utils.py b/spaces/schibsted/Facial_Recognition_with_Sentiment_Detector/utils.py deleted file mode 100644 index ef0d943bbea2a6c8a989e3e6292c2820695f16cc..0000000000000000000000000000000000000000 --- a/spaces/schibsted/Facial_Recognition_with_Sentiment_Detector/utils.py +++ /dev/null @@ -1,237 +0,0 @@ -# PyTorch implementation of Darknet -# This is a custom, hard-coded version of darknet with -# YOLOv3 implementation for openimages database. This -# was written to test viability of implementing YOLO -# for face detection followed by emotion / sentiment -# analysis. -# -# Configuration, weights and data are hardcoded. -# Additional options include, ability to create -# subset of data with faces exracted for labelling. -# -# Author : Saikiran Tharimena -# Co-Authors: Kjetil Marinius Sjulsen, Juan Carlos Calvet Lopez -# Project : Emotion / Sentiment Detection from news images -# Date : 12 September 2022 -# Version : v0.1 -# -# (C) Schibsted ASA - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Variable -import numpy as np -import cv2 - - -def unique(tensor): - tensor_np = tensor.cpu().numpy() - unique_np = np.unique(tensor_np) - unique_tensor = torch.from_numpy(unique_np) - - tensor_res = tensor.new(unique_tensor.shape) - tensor_res.copy_(unique_tensor) - return tensor_res - - -def bbox_iou(box1, box2): - """ - Returns the IoU of two bounding boxes - - - """ - #Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1[:,0], box1[:,1], box1[:,2], box1[:,3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[:,0], box2[:,1], box2[:,2], box2[:,3] - - #get the corrdinates of the intersection rectangle - inter_rect_x1 = torch.max(b1_x1, b2_x1) - inter_rect_y1 = torch.max(b1_y1, b2_y1) - inter_rect_x2 = torch.min(b1_x2, b2_x2) - inter_rect_y2 = torch.min(b1_y2, b2_y2) - - #Intersection area - inter_area = torch.clamp(inter_rect_x2 - inter_rect_x1 + 1, min=0) * torch.clamp(inter_rect_y2 - inter_rect_y1 + 1, min=0) - - #Union Area - b1_area = (b1_x2 - b1_x1 + 1)*(b1_y2 - b1_y1 + 1) - b2_area = (b2_x2 - b2_x1 + 1)*(b2_y2 - b2_y1 + 1) - - iou = inter_area / (b1_area + b2_area - inter_area) - - return iou - - -def predict_transform(prediction, inp_dim, anchors, num_classes, CUDA = True): - - batch_size = prediction.size(0) - stride = inp_dim // prediction.size(2) - grid_size = inp_dim // stride - bbox_attrs = 5 + num_classes - num_anchors = len(anchors) - - prediction = prediction.view(batch_size, bbox_attrs*num_anchors, grid_size*grid_size) - prediction = prediction.transpose(1,2).contiguous() - prediction = prediction.view(batch_size, grid_size*grid_size*num_anchors, bbox_attrs) - anchors = [(a[0]/stride, a[1]/stride) for a in anchors] - - #Sigmoid the centre_X, centre_Y. 
and object confidencce - prediction[:,:,0] = torch.sigmoid(prediction[:,:,0]) - prediction[:,:,1] = torch.sigmoid(prediction[:,:,1]) - prediction[:,:,4] = torch.sigmoid(prediction[:,:,4]) - - #Add the center offsets - grid = np.arange(grid_size) - a,b = np.meshgrid(grid, grid) - - x_offset = torch.FloatTensor(a).view(-1,1) - y_offset = torch.FloatTensor(b).view(-1,1) - - if CUDA: - x_offset = x_offset.cuda() - y_offset = y_offset.cuda() - - x_y_offset = torch.cat((x_offset, y_offset), 1).repeat(1,num_anchors).view(-1,2).unsqueeze(0) - - prediction[:,:,:2] += x_y_offset - - #log space transform height and the width - anchors = torch.FloatTensor(anchors) - - if CUDA: - anchors = anchors.cuda() - - anchors = anchors.repeat(grid_size*grid_size, 1).unsqueeze(0) - prediction[:,:,2:4] = torch.exp(prediction[:,:,2:4])*anchors - - prediction[:,:,5: 5 + num_classes] = torch.sigmoid((prediction[:,:, 5 : 5 + num_classes])) - - prediction[:,:,:4] *= stride - - return prediction - - -def write_results(prediction, confidence, num_classes, nms_conf = 0.4): - conf_mask = (prediction[:,:,4] > confidence).float().unsqueeze(2) - prediction = prediction*conf_mask - - box_corner = prediction.new(prediction.shape) - box_corner[:,:,0] = (prediction[:,:,0] - prediction[:,:,2]/2) - box_corner[:,:,1] = (prediction[:,:,1] - prediction[:,:,3]/2) - box_corner[:,:,2] = (prediction[:,:,0] + prediction[:,:,2]/2) - box_corner[:,:,3] = (prediction[:,:,1] + prediction[:,:,3]/2) - prediction[:,:,:4] = box_corner[:,:,:4] - - batch_size = prediction.size(0) - - write = False - - - - for ind in range(batch_size): - image_pred = prediction[ind] #image Tensor - #confidence threshholding - #NMS - - max_conf, max_conf_score = torch.max(image_pred[:,5:5+ num_classes], 1) - max_conf = max_conf.float().unsqueeze(1) - max_conf_score = max_conf_score.float().unsqueeze(1) - seq = (image_pred[:,:5], max_conf, max_conf_score) - image_pred = torch.cat(seq, 1) - - non_zero_ind = (torch.nonzero(image_pred[:,4])) - try: - image_pred_ = image_pred[non_zero_ind.squeeze(),:].view(-1,7) - except: - continue - - if image_pred_.shape[0] == 0: - continue -# - - #Get the various classes detected in the image - img_classes = unique(image_pred_[:,-1]) # -1 index holds the class index - - - for cls in img_classes: - #perform NMS - - - #get the detections with one particular class - cls_mask = image_pred_*(image_pred_[:,-1] == cls).float().unsqueeze(1) - class_mask_ind = torch.nonzero(cls_mask[:,-2]).squeeze() - image_pred_class = image_pred_[class_mask_ind].view(-1,7) - - #sort the detections such that the entry with the maximum objectness - #confidence is at the top - conf_sort_index = torch.sort(image_pred_class[:,4], descending = True )[1] - image_pred_class = image_pred_class[conf_sort_index] - idx = image_pred_class.size(0) #Number of detections - - for i in range(idx): - #Get the IOUs of all boxes that come after the one we are looking at - #in the loop - try: - ious = bbox_iou(image_pred_class[i].unsqueeze(0), image_pred_class[i+1:]) - except ValueError: - break - - except IndexError: - break - - #Zero out all the detections that have IoU > treshhold - iou_mask = (ious < nms_conf).float().unsqueeze(1) - image_pred_class[i+1:] *= iou_mask - - #Remove the non-zero entries - non_zero_ind = torch.nonzero(image_pred_class[:,4]).squeeze() - image_pred_class = image_pred_class[non_zero_ind].view(-1,7) - - batch_ind = image_pred_class.new(image_pred_class.size(0), 1).fill_(ind) #Repeat the batch_id for as many detections of the class cls in the image - seq 
= batch_ind, image_pred_class - - if not write: - output = torch.cat(seq,1) - write = True - else: - out = torch.cat(seq,1) - output = torch.cat((output,out)) - - try: - return output - except: - return 0 - - -def letterbox_image(img, inp_dim): - '''resize image with unchanged aspect ratio using padding''' - img_w, img_h = img.shape[1], img.shape[0] - w, h = inp_dim - new_w = int(img_w * min(w/img_w, h/img_h)) - new_h = int(img_h * min(w/img_w, h/img_h)) - resized_image = cv2.resize(img, (new_w,new_h), interpolation = cv2.INTER_CUBIC) - - canvas = np.full((inp_dim[1], inp_dim[0], 3), 128) - - canvas[(h-new_h)//2:(h-new_h)//2 + new_h,(w-new_w)//2:(w-new_w)//2 + new_w, :] = resized_image - - return canvas - - -def prep_image(img, inp_dim): - """ - Prepare image for inputting to the neural network. - - Returns a Variable - """ - img = (letterbox_image(img, (inp_dim, inp_dim))) - img = img[:,:,::-1].transpose((2,0,1)).copy() - img = torch.from_numpy(img).float().div(255.0).unsqueeze(0) - return img - - -def load_classes(namesfile): - fp = open(namesfile, "r") - names = fp.read().split("\n")[:-1] - return names \ No newline at end of file diff --git a/spaces/sdhsdhk/bingo111/README.md b/spaces/sdhsdhk/bingo111/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
        - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
        - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
        - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
        - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
        -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
        - -
        -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5OD
drZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
        - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/rnn/argument.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/rnn/argument.py deleted file mode 100644 index b4c89d25f52882f0c99ec3e8c8a182e3b6dc5ee7..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/rnn/argument.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright 2020 Hirofumi Inaguma -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Conformer common arguments.""" - - -def add_arguments_rnn_encoder_common(group): - """Define common arguments for RNN encoder.""" - group.add_argument( - "--etype", - default="blstmp", - type=str, - choices=[ - "lstm", - "blstm", - "lstmp", - "blstmp", - "vgglstmp", - "vggblstmp", - "vgglstm", - "vggblstm", - "gru", - "bgru", - "grup", - "bgrup", - "vgggrup", - "vggbgrup", - "vgggru", - "vggbgru", - ], - help="Type of encoder network architecture", - ) - group.add_argument( - "--elayers", - default=4, - type=int, - help="Number of encoder layers", - ) - group.add_argument( - "--eunits", - "-u", - default=300, - type=int, - help="Number of encoder hidden units", - ) - group.add_argument( - "--eprojs", default=320, type=int, help="Number of encoder projection units" - ) - group.add_argument( - "--subsample", - default="1", - type=str, - help="Subsample input frames x_y_z means " - "subsample every x frame at 1st layer, " - "every y frame at 2nd layer etc.", - ) - return group - - -def add_arguments_rnn_decoder_common(group): - """Define common arguments for RNN decoder.""" - group.add_argument( - "--dtype", - default="lstm", - type=str, - choices=["lstm", "gru"], - help="Type of decoder network architecture", - ) - group.add_argument( - "--dlayers", default=1, type=int, help="Number of decoder layers" - ) - group.add_argument( - "--dunits", default=320, type=int, help="Number of decoder hidden units" - ) - group.add_argument( - "--dropout-rate-decoder", - default=0.0, - type=float, - help="Dropout rate for the decoder", - ) - group.add_argument( - "--sampling-probability", - default=0.0, - type=float, - help="Ratio of predicted labels fed back to decoder", - ) - group.add_argument( - "--lsm-type", - const="", - default="", - type=str, - nargs="?", - choices=["", "unigram"], - help="Apply label smoothing with a specified distribution type", - ) - return group - - -def add_arguments_rnn_attention_common(group): - """Define common arguments for RNN attention.""" - group.add_argument( - "--atype", - default="dot", - type=str, - choices=[ - "noatt", - "dot", - "add", - "location", - "coverage", - "coverage_location", - "location2d", - "location_recurrent", - "multi_head_dot", - "multi_head_add", - "multi_head_loc", - "multi_head_multi_res_loc", - ], - help="Type of attention architecture", - ) - group.add_argument( - "--adim", - default=320, - type=int, - help="Number of attention transformation dimensions", - ) - group.add_argument( - "--awin", default=5, type=int, help="Window size for location2d attention" - ) - group.add_argument( - "--aheads", - default=4, - type=int, - help="Number of heads for multi head attention", - ) - 
group.add_argument( - "--aconv-chans", - default=-1, - type=int, - help="Number of attention convolution channels \ - (negative value indicates no location-aware attention)", - ) - group.add_argument( - "--aconv-filts", - default=100, - type=int, - help="Number of attention convolution filters \ - (negative value indicates no location-aware attention)", - ) - group.add_argument( - "--dropout-rate", - default=0.0, - type=float, - help="Dropout rate for the encoder", - ) - return group diff --git a/spaces/segments-tobias/conex/espnet/transform/channel_selector.py b/spaces/segments-tobias/conex/espnet/transform/channel_selector.py deleted file mode 100644 index 9f303bd507787997244f1c33a590e366bd0300fd..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/transform/channel_selector.py +++ /dev/null @@ -1,45 +0,0 @@ -import numpy - - -class ChannelSelector(object): - """Select 1ch from multi-channel signal """ - - def __init__(self, train_channel="random", eval_channel=0, axis=1): - self.train_channel = train_channel - self.eval_channel = eval_channel - self.axis = axis - - def __repr__(self): - return ( - "{name}(train_channel={train_channel}, " - "eval_channel={eval_channel}, axis={axis})".format( - name=self.__class__.__name__, - train_channel=self.train_channel, - eval_channel=self.eval_channel, - axis=self.axis, - ) - ) - - def __call__(self, x, train=True): - # Assuming x: [Time, Channel] by default - - if x.ndim <= self.axis: - # If the dimension is insufficient, then unsqueeze - # (e.g [Time] -> [Time, 1]) - ind = tuple( - slice(None) if i < x.ndim else None for i in range(self.axis + 1) - ) - x = x[ind] - - if train: - channel = self.train_channel - else: - channel = self.eval_channel - - if channel == "random": - ch = numpy.random.randint(0, x.shape[self.axis]) - else: - ch = channel - - ind = tuple(slice(None) if i != self.axis else ch for i in range(x.ndim)) - return x[ind] diff --git a/spaces/segments-tobias/conex/espnet2/asr/decoder/rnn_decoder.py b/spaces/segments-tobias/conex/espnet2/asr/decoder/rnn_decoder.py deleted file mode 100644 index fc938225f3571e531849418bb075f23adfdea7a1..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/asr/decoder/rnn_decoder.py +++ /dev/null @@ -1,334 +0,0 @@ -import random - -import numpy as np -import torch -import torch.nn.functional as F -from typeguard import check_argument_types - -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask -from espnet.nets.pytorch_backend.nets_utils import to_device -from espnet.nets.pytorch_backend.rnn.attentions import initial_att -from espnet2.asr.decoder.abs_decoder import AbsDecoder -from espnet2.utils.get_default_kwargs import get_default_kwargs - - -def build_attention_list( - eprojs: int, - dunits: int, - atype: str = "location", - num_att: int = 1, - num_encs: int = 1, - aheads: int = 4, - adim: int = 320, - awin: int = 5, - aconv_chans: int = 10, - aconv_filts: int = 100, - han_mode: bool = False, - han_type=None, - han_heads: int = 4, - han_dim: int = 320, - han_conv_chans: int = -1, - han_conv_filts: int = 100, - han_win: int = 5, -): - - att_list = torch.nn.ModuleList() - if num_encs == 1: - for i in range(num_att): - att = initial_att( - atype, - eprojs, - dunits, - aheads, - adim, - awin, - aconv_chans, - aconv_filts, - ) - att_list.append(att) - elif num_encs > 1: # no multi-speaker mode - if han_mode: - att = initial_att( - han_type, - eprojs, - dunits, - han_heads, - han_dim, - han_win, - han_conv_chans, - han_conv_filts, 
- han_mode=True, - ) - return att - else: - att_list = torch.nn.ModuleList() - for idx in range(num_encs): - att = initial_att( - atype[idx], - eprojs, - dunits, - aheads[idx], - adim[idx], - awin[idx], - aconv_chans[idx], - aconv_filts[idx], - ) - att_list.append(att) - else: - raise ValueError( - "Number of encoders needs to be more than one. {}".format(num_encs) - ) - return att_list - - -class RNNDecoder(AbsDecoder): - def __init__( - self, - vocab_size: int, - encoder_output_size: int, - rnn_type: str = "lstm", - num_layers: int = 1, - hidden_size: int = 320, - sampling_probability: float = 0.0, - dropout: float = 0.0, - context_residual: bool = False, - replace_sos: bool = False, - num_encs: int = 1, - att_conf: dict = get_default_kwargs(build_attention_list), - ): - # FIXME(kamo): The parts of num_spk should be refactored more more more - assert check_argument_types() - if rnn_type not in {"lstm", "gru"}: - raise ValueError(f"Not supported: rnn_type={rnn_type}") - - super().__init__() - eprojs = encoder_output_size - self.dtype = rnn_type - self.dunits = hidden_size - self.dlayers = num_layers - self.context_residual = context_residual - self.sos = vocab_size - 1 - self.eos = vocab_size - 1 - self.odim = vocab_size - self.sampling_probability = sampling_probability - self.dropout = dropout - self.num_encs = num_encs - - # for multilingual translation - self.replace_sos = replace_sos - - self.embed = torch.nn.Embedding(vocab_size, hidden_size) - self.dropout_emb = torch.nn.Dropout(p=dropout) - - self.decoder = torch.nn.ModuleList() - self.dropout_dec = torch.nn.ModuleList() - self.decoder += [ - torch.nn.LSTMCell(hidden_size + eprojs, hidden_size) - if self.dtype == "lstm" - else torch.nn.GRUCell(hidden_size + eprojs, hidden_size) - ] - self.dropout_dec += [torch.nn.Dropout(p=dropout)] - for _ in range(1, self.dlayers): - self.decoder += [ - torch.nn.LSTMCell(hidden_size, hidden_size) - if self.dtype == "lstm" - else torch.nn.GRUCell(hidden_size, hidden_size) - ] - self.dropout_dec += [torch.nn.Dropout(p=dropout)] - # NOTE: dropout is applied only for the vertical connections - # see https://arxiv.org/pdf/1409.2329.pdf - - if context_residual: - self.output = torch.nn.Linear(hidden_size + eprojs, vocab_size) - else: - self.output = torch.nn.Linear(hidden_size, vocab_size) - - self.att_list = build_attention_list( - eprojs=eprojs, dunits=hidden_size, **att_conf - ) - - def zero_state(self, hs_pad): - return hs_pad.new_zeros(hs_pad.size(0), self.dunits) - - def rnn_forward(self, ey, z_list, c_list, z_prev, c_prev): - if self.dtype == "lstm": - z_list[0], c_list[0] = self.decoder[0](ey, (z_prev[0], c_prev[0])) - for i in range(1, self.dlayers): - z_list[i], c_list[i] = self.decoder[i]( - self.dropout_dec[i - 1](z_list[i - 1]), - (z_prev[i], c_prev[i]), - ) - else: - z_list[0] = self.decoder[0](ey, z_prev[0]) - for i in range(1, self.dlayers): - z_list[i] = self.decoder[i]( - self.dropout_dec[i - 1](z_list[i - 1]), z_prev[i] - ) - return z_list, c_list - - def forward(self, hs_pad, hlens, ys_in_pad, ys_in_lens, strm_idx=0): - # to support mutiple encoder asr mode, in single encoder mode, - # convert torch.Tensor to List of torch.Tensor - if self.num_encs == 1: - hs_pad = [hs_pad] - hlens = [hlens] - - # attention index for the attention module - # in SPA (speaker parallel attention), - # att_idx is used to select attention module. In other cases, it is 0. 
- att_idx = min(strm_idx, len(self.att_list) - 1) - - # hlens should be list of list of integer - hlens = [list(map(int, hlens[idx])) for idx in range(self.num_encs)] - - # get dim, length info - olength = ys_in_pad.size(1) - - # initialization - c_list = [self.zero_state(hs_pad[0])] - z_list = [self.zero_state(hs_pad[0])] - for _ in range(1, self.dlayers): - c_list.append(self.zero_state(hs_pad[0])) - z_list.append(self.zero_state(hs_pad[0])) - z_all = [] - if self.num_encs == 1: - att_w = None - self.att_list[att_idx].reset() # reset pre-computation of h - else: - att_w_list = [None] * (self.num_encs + 1) # atts + han - att_c_list = [None] * self.num_encs # atts - for idx in range(self.num_encs + 1): - # reset pre-computation of h in atts and han - self.att_list[idx].reset() - - # pre-computation of embedding - eys = self.dropout_emb(self.embed(ys_in_pad)) # utt x olen x zdim - - # loop for an output sequence - for i in range(olength): - if self.num_encs == 1: - att_c, att_w = self.att_list[att_idx]( - hs_pad[0], hlens[0], self.dropout_dec[0](z_list[0]), att_w - ) - else: - for idx in range(self.num_encs): - att_c_list[idx], att_w_list[idx] = self.att_list[idx]( - hs_pad[idx], - hlens[idx], - self.dropout_dec[0](z_list[0]), - att_w_list[idx], - ) - hs_pad_han = torch.stack(att_c_list, dim=1) - hlens_han = [self.num_encs] * len(ys_in_pad) - att_c, att_w_list[self.num_encs] = self.att_list[self.num_encs]( - hs_pad_han, - hlens_han, - self.dropout_dec[0](z_list[0]), - att_w_list[self.num_encs], - ) - if i > 0 and random.random() < self.sampling_probability: - z_out = self.output(z_all[-1]) - z_out = np.argmax(z_out.detach().cpu(), axis=1) - z_out = self.dropout_emb(self.embed(to_device(self, z_out))) - ey = torch.cat((z_out, att_c), dim=1) # utt x (zdim + hdim) - else: - # utt x (zdim + hdim) - ey = torch.cat((eys[:, i, :], att_c), dim=1) - z_list, c_list = self.rnn_forward(ey, z_list, c_list, z_list, c_list) - if self.context_residual: - z_all.append( - torch.cat((self.dropout_dec[-1](z_list[-1]), att_c), dim=-1) - ) # utt x (zdim + hdim) - else: - z_all.append(self.dropout_dec[-1](z_list[-1])) # utt x (zdim) - - z_all = torch.stack(z_all, dim=1) - z_all = self.output(z_all) - z_all.masked_fill_( - make_pad_mask(ys_in_lens, z_all, 1), - 0, - ) - return z_all, ys_in_lens - - def init_state(self, x): - # to support mutiple encoder asr mode, in single encoder mode, - # convert torch.Tensor to List of torch.Tensor - if self.num_encs == 1: - x = [x] - - c_list = [self.zero_state(x[0].unsqueeze(0))] - z_list = [self.zero_state(x[0].unsqueeze(0))] - for _ in range(1, self.dlayers): - c_list.append(self.zero_state(x[0].unsqueeze(0))) - z_list.append(self.zero_state(x[0].unsqueeze(0))) - # TODO(karita): support strm_index for `asr_mix` - strm_index = 0 - att_idx = min(strm_index, len(self.att_list) - 1) - if self.num_encs == 1: - a = None - self.att_list[att_idx].reset() # reset pre-computation of h - else: - a = [None] * (self.num_encs + 1) # atts + han - for idx in range(self.num_encs + 1): - # reset pre-computation of h in atts and han - self.att_list[idx].reset() - return dict( - c_prev=c_list[:], - z_prev=z_list[:], - a_prev=a, - workspace=(att_idx, z_list, c_list), - ) - - def score(self, yseq, state, x): - # to support mutiple encoder asr mode, in single encoder mode, - # convert torch.Tensor to List of torch.Tensor - if self.num_encs == 1: - x = [x] - - att_idx, z_list, c_list = state["workspace"] - vy = yseq[-1].unsqueeze(0) - ey = self.dropout_emb(self.embed(vy)) # utt list (1) x zdim - 
if self.num_encs == 1: - att_c, att_w = self.att_list[att_idx]( - x[0].unsqueeze(0), - [x[0].size(0)], - self.dropout_dec[0](state["z_prev"][0]), - state["a_prev"], - ) - else: - att_w = [None] * (self.num_encs + 1) # atts + han - att_c_list = [None] * self.num_encs # atts - for idx in range(self.num_encs): - att_c_list[idx], att_w[idx] = self.att_list[idx]( - x[idx].unsqueeze(0), - [x[idx].size(0)], - self.dropout_dec[0](state["z_prev"][0]), - state["a_prev"][idx], - ) - h_han = torch.stack(att_c_list, dim=1) - att_c, att_w[self.num_encs] = self.att_list[self.num_encs]( - h_han, - [self.num_encs], - self.dropout_dec[0](state["z_prev"][0]), - state["a_prev"][self.num_encs], - ) - ey = torch.cat((ey, att_c), dim=1) # utt(1) x (zdim + hdim) - z_list, c_list = self.rnn_forward( - ey, z_list, c_list, state["z_prev"], state["c_prev"] - ) - if self.context_residual: - logits = self.output( - torch.cat((self.dropout_dec[-1](z_list[-1]), att_c), dim=-1) - ) - else: - logits = self.output(self.dropout_dec[-1](z_list[-1])) - logp = F.log_softmax(logits, dim=1).squeeze(0) - return ( - logp, - dict( - c_prev=c_list[:], - z_prev=z_list[:], - a_prev=att_w, - workspace=(att_idx, z_list, c_list), - ), - ) diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py deleted file mode 100644 index 76e4b272b479a26c63d120c818c140870cd8c287..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .backbone import build_backbone diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/__init__.py b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/__init__.py deleted file mode 100644 index 1ad075593152cf94d30a903d8add28d8200badbb..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import os -import sys - -try: - from .version import __version__ # noqa -except ImportError: - version_txt = os.path.join(os.path.dirname(__file__), "version.txt") - with open(version_txt) as f: - __version__ = f.read().strip() - diff --git a/spaces/shariqfarooq/ZoeDepth/gradio_im_to_3d.py b/spaces/shariqfarooq/ZoeDepth/gradio_im_to_3d.py deleted file mode 100644 index 73ff9f96aaf670c6dc775aa29f72456f2bd7810c..0000000000000000000000000000000000000000 --- a/spaces/shariqfarooq/ZoeDepth/gradio_im_to_3d.py +++ /dev/null @@ -1,69 +0,0 @@ -import gradio as gr -import numpy as np -import trimesh -from geometry import depth_to_points, create_triangles -from functools import partial -import tempfile - - -def depth_edges_mask(depth): - """Returns a mask of edges in the depth map. - Args: - depth: 2D numpy array of shape (H, W) with dtype float32. - Returns: - mask: 2D numpy array of shape (H, W) with dtype bool. - """ - # Compute the x and y gradients of the depth map. - depth_dx, depth_dy = np.gradient(depth) - # Compute the gradient magnitude. - depth_grad = np.sqrt(depth_dx ** 2 + depth_dy ** 2) - # Compute the edge mask. 
- mask = depth_grad > 0.05 - return mask - - -def predict_depth(model, image): - depth = model.infer_pil(image) - return depth - -def get_mesh(model, image, keep_edges=False): - image.thumbnail((1024,1024)) # limit the size of the input image - depth = predict_depth(model, image) - pts3d = depth_to_points(depth[None]) - pts3d = pts3d.reshape(-1, 3) - - # Create a trimesh mesh from the points - # Each pixel is connected to its 4 neighbors - # colors are the RGB values of the image - - verts = pts3d.reshape(-1, 3) - image = np.array(image) - if keep_edges: - triangles = create_triangles(image.shape[0], image.shape[1]) - else: - triangles = create_triangles(image.shape[0], image.shape[1], mask=~depth_edges_mask(depth)) - colors = image.reshape(-1, 3) - mesh = trimesh.Trimesh(vertices=verts, faces=triangles, vertex_colors=colors) - - # Save as glb - glb_file = tempfile.NamedTemporaryFile(suffix='.glb', delete=False) - glb_path = glb_file.name - mesh.export(glb_path) - return glb_path - -def create_demo(model): - - gr.Markdown("### Image to 3D mesh") - gr.Markdown("Convert a single 2D image to a 3D mesh") - - with gr.Row(): - image = gr.Image(label="Input Image", type='pil') - result = gr.Model3D(label="3d mesh reconstruction", clear_color=[ - 1.0, 1.0, 1.0, 1.0]) - - checkbox = gr.Checkbox(label="Keep occlusion edges", value=False) - submit = gr.Button("Submit") - submit.click(partial(get_mesh, model), inputs=[image, checkbox], outputs=[result]) - examples = gr.Examples(examples=["examples/aerial_beach.jpeg", "examples/mountains.jpeg", "examples/person_1.jpeg", "examples/ancient-carved.jpeg"], - inputs=[image]) - diff --git a/spaces/shi-labs/Matting-Anything/segment-anything/segment_anything/build_sam.py b/spaces/shi-labs/Matting-Anything/segment-anything/segment_anything/build_sam.py deleted file mode 100644 index 3cb5d2075361edceacf80f710fd3ca47833e71d7..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/segment-anything/segment_anything/build_sam.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch - -from functools import partial - -from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer - - -def build_sam_vit_h(checkpoint=None): - return _build_sam( - encoder_embed_dim=1280, - encoder_depth=32, - encoder_num_heads=16, - encoder_global_attn_indexes=[7, 15, 23, 31], - checkpoint=checkpoint, - ) - - -build_sam = build_sam_vit_h - - -def build_sam_vit_l(checkpoint=None): - return _build_sam( - encoder_embed_dim=1024, - encoder_depth=24, - encoder_num_heads=16, - encoder_global_attn_indexes=[5, 11, 17, 23], - checkpoint=checkpoint, - ) - - -def build_sam_vit_b(checkpoint=None): - return _build_sam( - encoder_embed_dim=768, - encoder_depth=12, - encoder_num_heads=12, - encoder_global_attn_indexes=[2, 5, 8, 11], - checkpoint=checkpoint, - ) - - -sam_model_registry = { - "default": build_sam_vit_h, - "vit_h": build_sam_vit_h, - "vit_l": build_sam_vit_l, - "vit_b": build_sam_vit_b, -} - - -def _build_sam( - encoder_embed_dim, - encoder_depth, - encoder_num_heads, - encoder_global_attn_indexes, - checkpoint=None, -): - prompt_embed_dim = 256 - image_size = 1024 - #image_size = 512 - vit_patch_size = 16 - image_embedding_size = image_size // vit_patch_size - sam = Sam( - image_encoder=ImageEncoderViT( - depth=encoder_depth, - embed_dim=encoder_embed_dim, - img_size=image_size, - mlp_ratio=4, - norm_layer=partial(torch.nn.LayerNorm, eps=1e-6), - num_heads=encoder_num_heads, - patch_size=vit_patch_size, - qkv_bias=True, - use_rel_pos=True, - global_attn_indexes=encoder_global_attn_indexes, - window_size=14, - out_chans=prompt_embed_dim, - ), - prompt_encoder=PromptEncoder( - embed_dim=prompt_embed_dim, - image_embedding_size=(image_embedding_size, image_embedding_size), - input_image_size=(image_size, image_size), - mask_in_chans=16, - ), - mask_decoder=MaskDecoder( - num_multimask_outputs=3, - transformer=TwoWayTransformer( - depth=2, - embedding_dim=prompt_embed_dim, - mlp_dim=2048, - num_heads=8, - ), - transformer_dim=prompt_embed_dim, - iou_head_depth=3, - iou_head_hidden_dim=256, - ), - pixel_mean=[123.675, 116.28, 103.53], - pixel_std=[58.395, 57.12, 57.375], - ) - sam.eval() - if checkpoint is not None: - with open(checkpoint, "rb") as f: - state_dict = torch.load(f) - sam.load_state_dict(state_dict) - return sam diff --git a/spaces/sidharthism/fashion-eye/netdissect/tool/makesample.py b/spaces/sidharthism/fashion-eye/netdissect/tool/makesample.py deleted file mode 100644 index 36276267677360d8238a8dbf71e9753dcc327681..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/netdissect/tool/makesample.py +++ /dev/null @@ -1,169 +0,0 @@ -''' -A simple tool to generate sample of output of a GAN, -subject to filtering, sorting, or intervention. 
-''' - -import torch, numpy, os, argparse, numbers, sys, shutil -from PIL import Image -from torch.utils.data import TensorDataset -from netdissect.zdataset import standard_z_sample -from netdissect.progress import default_progress, verbose_progress -from netdissect.autoeval import autoimport_eval -from netdissect.workerpool import WorkerBase, WorkerPool -from netdissect.nethook import edit_layers, retain_layers - -def main(): - parser = argparse.ArgumentParser(description='GAN sample making utility') - parser.add_argument('--model', type=str, default=None, - help='constructor for the model to test') - parser.add_argument('--pthfile', type=str, default=None, - help='filename of .pth file for the model') - parser.add_argument('--outdir', type=str, default='images', - help='directory for image output') - parser.add_argument('--size', type=int, default=100, - help='number of images to output') - parser.add_argument('--test_size', type=int, default=None, - help='number of images to test') - parser.add_argument('--layer', type=str, default=None, - help='layer to inspect') - parser.add_argument('--seed', type=int, default=1, - help='seed') - parser.add_argument('--maximize_units', type=int, nargs='+', default=None, - help='units to maximize') - parser.add_argument('--ablate_units', type=int, nargs='+', default=None, - help='units to ablate') - parser.add_argument('--quiet', action='store_true', default=False, - help='silences console output') - if len(sys.argv) == 1: - parser.print_usage(sys.stderr) - sys.exit(1) - args = parser.parse_args() - verbose_progress(not args.quiet) - - # Instantiate the model - model = autoimport_eval(args.model) - if args.pthfile is not None: - data = torch.load(args.pthfile) - if 'state_dict' in data: - meta = {} - for key in data: - if isinstance(data[key], numbers.Number): - meta[key] = data[key] - data = data['state_dict'] - model.load_state_dict(data) - # Unwrap any DataParallel-wrapped model - if isinstance(model, torch.nn.DataParallel): - model = next(model.children()) - # Examine first conv in model to determine input feature size. - first_layer = [c for c in model.modules() - if isinstance(c, (torch.nn.Conv2d, torch.nn.ConvTranspose2d, - torch.nn.Linear))][0] - # 4d input if convolutional, 2d input if first layer is linear. - if isinstance(first_layer, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)): - z_channels = first_layer.in_channels - spatialdims = (1, 1) - else: - z_channels = first_layer.in_features - spatialdims = () - # Instrument the model if needed - if args.maximize_units is not None: - retain_layers(model, [args.layer]) - model.cuda() - - # Get the sample of z vectors - if args.maximize_units is None: - indexes = torch.arange(args.size) - z_sample = standard_z_sample(args.size, z_channels, seed=args.seed) - z_sample = z_sample.view(tuple(z_sample.shape) + spatialdims) - else: - # By default, if maximizing units, get a 'top 5%' sample. 
- if args.test_size is None: - args.test_size = args.size * 20 - z_universe = standard_z_sample(args.test_size, z_channels, - seed=args.seed) - z_universe = z_universe.view(tuple(z_universe.shape) + spatialdims) - indexes = get_highest_znums(model, z_universe, args.maximize_units, - args.size, seed=args.seed) - z_sample = z_universe[indexes] - - if args.ablate_units: - edit_layers(model, [args.layer]) - dims = max(2, max(args.ablate_units) + 1) # >=2 to avoid broadcast - model.ablation[args.layer] = torch.zeros(dims) - model.ablation[args.layer][args.ablate_units] = 1 - - save_znum_images(args.outdir, model, z_sample, indexes, - args.layer, args.ablate_units) - copy_lightbox_to(args.outdir) - - -def get_highest_znums(model, z_universe, max_units, size, - batch_size=100, seed=1): - # The model should have been instrumented already - retained_items = list(model.retained.items()) - assert len(retained_items) == 1 - layer = retained_items[0][0] - # By default, a 10% sample - progress = default_progress() - num_units = None - with torch.no_grad(): - # Pass 1: collect max activation stats - z_loader = torch.utils.data.DataLoader(TensorDataset(z_universe), - batch_size=batch_size, num_workers=2, - pin_memory=True) - scores = [] - for [z] in progress(z_loader, desc='Finding max activations'): - z = z.cuda() - model(z) - feature = model.retained[layer] - num_units = feature.shape[1] - max_feature = feature[:, max_units, ...].view( - feature.shape[0], len(max_units), -1).max(2)[0] - total_feature = max_feature.sum(1) - scores.append(total_feature.cpu()) - scores = torch.cat(scores, 0) - highest = (-scores).sort(0)[1][:size].sort(0)[0] - return highest - - -def save_znum_images(dirname, model, z_sample, indexes, layer, ablated_units, - name_template="image_{}.png", lightbox=False, batch_size=100, seed=1): - progress = default_progress() - os.makedirs(dirname, exist_ok=True) - with torch.no_grad(): - # Pass 2: now generate images - z_loader = torch.utils.data.DataLoader(TensorDataset(z_sample), - batch_size=batch_size, num_workers=2, - pin_memory=True) - saver = WorkerPool(SaveImageWorker) - if ablated_units is not None: - dims = max(2, max(ablated_units) + 1) # >=2 to avoid broadcast - mask = torch.zeros(dims) - mask[ablated_units] = 1 - model.ablation[layer] = mask[None,:,None,None].cuda() - for batch_num, [z] in enumerate(progress(z_loader, - desc='Saving images')): - z = z.cuda() - start_index = batch_num * batch_size - im = ((model(z) + 1) / 2 * 255).clamp(0, 255).byte().permute( - 0, 2, 3, 1).cpu() - for i in range(len(im)): - index = i + start_index - if indexes is not None: - index = indexes[index].item() - filename = os.path.join(dirname, name_template.format(index)) - saver.add(im[i].numpy(), filename) - saver.join() - -def copy_lightbox_to(dirname): - srcdir = os.path.realpath( - os.path.join(os.getcwd(), os.path.dirname(__file__))) - shutil.copy(os.path.join(srcdir, 'lightbox.html'), - os.path.join(dirname, '+lightbox.html')) - -class SaveImageWorker(WorkerBase): - def work(self, data, filename): - Image.fromarray(data).save(filename, optimize=True, quality=100) - -if __name__ == '__main__': - main() diff --git a/spaces/silencewing/server/youyou/.history/math_20230613231657.html b/spaces/silencewing/server/youyou/.history/math_20230613231657.html deleted file mode 100644 index a74ca09a9c36844e59753444568bf091c97f2796..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/.history/math_20230613231657.html +++ /dev/null @@ -1,234 +0,0 @@ - - - - - - - - - - Document 
- - - - -
        - - - - - - - - - - - - - - - - - - - - - - - - -
        题目
        答案
        正误
        得分
        -
        - - - - diff --git a/spaces/simonduerr/diffdock/esm/esm/rotary_embedding.py b/spaces/simonduerr/diffdock/esm/esm/rotary_embedding.py deleted file mode 100644 index e862196192ae30e47e6d2e0404357920338c04e9..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/esm/esm/rotary_embedding.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Tuple - -import torch - - -def rotate_half(x): - x1, x2 = x.chunk(2, dim=-1) - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(x, cos, sin): - cos = cos[:, : x.shape[-2], :] - sin = sin[:, : x.shape[-2], :] - - return (x * cos) + (rotate_half(x) * sin) - - -class RotaryEmbedding(torch.nn.Module): - """ - The rotary position embeddings from RoFormer_ (Su et. al). - A crucial insight from the method is that the query and keys are - transformed by rotation matrices which depend on the relative positions. - Other implementations are available in the Rotary Transformer repo_ and in - GPT-NeoX_, GPT-NeoX was an inspiration - .. _RoFormer: https://arxiv.org/abs/2104.09864 - .. _repo: https://github.com/ZhuiyiTechnology/roformer - .. _GPT-NeoX: https://github.com/EleutherAI/gpt-neox - .. warning: Please note that this embedding is not registered on purpose, as it is transformative - (it does not create the embedding dimension) and will likely be picked up (imported) on a ad-hoc basis - """ - - def __init__(self, dim: int, *_, **__): - super().__init__() - # Generate and save the inverse frequency buffer (non trainable) - inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer("inv_freq", inv_freq) - - self._seq_len_cached = None - self._cos_cached = None - self._sin_cached = None - - def _update_cos_sin_tables(self, x, seq_dimension=1): - seq_len = x.shape[seq_dimension] - - # Reset the tables if the sequence length has changed, - # or if we're on a new device (possibly due to tracing for instance) - if seq_len != self._seq_len_cached or self._cos_cached.device != x.device: - self._seq_len_cached = seq_len - t = torch.arange(x.shape[seq_dimension], device=x.device).type_as(self.inv_freq) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - emb = torch.cat((freqs, freqs), dim=-1).to(x.device) - - self._cos_cached = emb.cos()[None, :, :] - self._sin_cached = emb.sin()[None, :, :] - - return self._cos_cached, self._sin_cached - - def forward(self, q: torch.Tensor, k: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - self._cos_cached, self._sin_cached = self._update_cos_sin_tables(k, seq_dimension=-2) - - return ( - apply_rotary_pos_emb(q, self._cos_cached, self._sin_cached), - apply_rotary_pos_emb(k, self._cos_cached, self._sin_cached), - ) diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/AZE PLUS Why You Should Switch to WhatsApp Plus Today.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/AZE PLUS Why You Should Switch to WhatsApp Plus Today.md deleted file mode 100644 index e673822e0106b2c87bd0d6a64c53e767ccad7429..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/AZE PLUS Why You Should Switch to WhatsApp Plus Today.md +++ /dev/null @@ -1,147 +0,0 @@ -
        -

        What is com.azeplus and why you should use it

        -

        If you are looking for a way to enhance your WhatsApp experience with more features, stickers, videos and updates, then you should check out com.azeplus. Com.azeplus is a website that offers various WhatsApp Plus mods, an app for creating and sharing stickers, a YouTube channel for learning about the latest news and tips, and a subscription service for accessing premium content. In this article, we will explain what com.azeplus is, what features it offers, what benefits it provides, and how to download and install it on your device.

        -

        com.azeplus


        Download Zip ✫✫✫ https://ssurll.com/2uNU6F



        -

        Introduction

        -

        WhatsApp is one of the most popular messaging apps in the world, with over 2 billion users. It allows you to send text messages, voice messages, photos, videos, documents, and more to your contacts. However, WhatsApp has some limitations and restrictions that may prevent you from fully enjoying its features. For example, you cannot change the theme or font of your app, you cannot hide your online status or last seen time, you cannot send more than 30 images at once, you cannot use more than one account on the same device, and so on.

        -

        That's where com.azeplus comes in. Com.azeplus is a website that provides various WhatsApp Plus mods that can overcome these limitations and add more functionality to your WhatsApp app. WhatsApp Plus mods are modified versions of the original WhatsApp app that have extra features and options that are not available in the official app. For example, you can change the theme or font of your app, hide your online status or last seen time, send more than 30 images at once, use more than one account on the same device, and so on.

        -

        Features of com.azeplus

        -

        Com.azeplus offers several features that can make your WhatsApp experience more enjoyable and convenient. Here are some of them:

        -

        WhatsApp Plus mods

        -

        Com.azeplus provides different WhatsApp Plus mods that you can download and install on your device. Each mod has its own unique features and advantages that suit different preferences and needs. For example, some mods have more themes and fonts to choose from, some mods have more privacy and security options to protect your chats and data, some mods have more media sharing options to send larger files and longer videos, and so on.

        -

        Some of the WhatsApp Plus mods that com.azeplus offers are:

        -

        - - - - - - - - - - - - - - - - - - - - - - - - - - -
        NameVersionFeatures
        AZE PLUSV16- Anti-ban protection
        - Custom themes and fonts
        - Hide online status and last seen
        - Send up to 90 images at once
        - Send videos up to 50 MB
        - Use up to 4 accounts on the same device
        - Backup and restore chats
        - Support for AZE PLUS Stiker app
        AZE PLUS V11V11.9- Anti-ban protection
        - Custom themes and fonts
        - Hide online status and last seen
        - Send up to 30 images at once
        - Send videos up to 16 MB
        - Use up to 2 accounts on the same device
        - Backup and restore chats
        - Support for AZE PLUS Stiker app
        AZE PLUS V10V10.60- Anti-ban protection
        - Custom themes and fonts
        - Hide online status and last seen
- Send up to 30 images at once
        - Send videos up to 16 MB
        - Use up to 2 accounts on the same device
        - Backup and restore chats
        - Support for AZE PLUS Stiker app
        AZE PLUS V9V9.90- Anti-ban protection
        - Custom themes and fonts
        - Hide online status and last seen
        - Send up to 10 images at once
        - Send videos up to 10 MB
        - Use one account on the device
        - Backup and restore chats
        - Support for AZE PLUS Stiker app
        -

        You can choose the mod that suits your needs and preferences from the com.azeplus website. You can also compare the features of each mod and read the reviews and ratings of other users.

        -

        AZE PLUS Stiker app

        -

        AZE PLUS Stiker app is a companion app for WhatsApp Plus mods that allows you to create and share stickers with your friends and contacts. You can use the app to make your own stickers from photos, images, emojis, text, and more. You can also use the app to access thousands of stickers from various categories and themes, such as animals, cartoons, celebrities, memes, movies, sports, etc. You can also download and install sticker packs from other sources and use them in your WhatsApp Plus mods.

        -

        AZE PLUS Stiker app is compatible with all WhatsApp Plus mods that com.azeplus offers. You can download and install the app from the com.azeplus website or from the Google Play Store.

        -

        AZE PLUS YouTube channel

        -

        AZE PLUS YouTube channel is a source of information and entertainment for WhatsApp Plus users. You can watch videos that show you how to use the features and options of WhatsApp Plus mods, how to create and share stickers with AZE PLUS Stiker app, how to download and install WhatsApp Plus mods and AZE PLUS Stiker app, how to fix common issues and problems, and more. You can also watch videos that feature the latest news and updates about WhatsApp Plus mods, AZE PLUS Stiker app, com.azeplus website, and other related topics.

        -

        AZE PLUS YouTube channel is run by AZE PLUS team, which is a group of developers and enthusiasts who create and maintain WhatsApp Plus mods, AZE PLUS Stiker app, com.azeplus website, and other related projects. You can subscribe to the channel to get notified of new videos and updates.

        -

        AZE PLUS subscription service

        -

        AZE PLUS subscription service is a premium service that gives you access to exclusive content and features that are not available in the free versions of WhatsApp Plus mods and AZE PLUS Stiker app. By subscribing to the service, you can enjoy the following benefits:

        -
          -
        • Get unlimited themes and fonts for your WhatsApp Plus mods
        • -
        • Get unlimited stickers and sticker packs for your AZE PLUS Stiker app
        • -
        • Get priority support and assistance from AZE PLUS team
        • -
        • Get early access to new features and updates of WhatsApp Plus mods and AZE PLUS Stiker app
        • -
        • Get discounts and offers on other products and services from AZE PLUS team
        • -
        -

        AZE PLUS subscription service costs $9.99 per month or $99.99 per year. You can subscribe to the service from the com.azeplus website or from the settings of your WhatsApp Plus mods or AZE PLUS Stiker app.

        -

        Benefits of com.azeplus

        -

        Com.azeplus provides many benefits for WhatsApp users who want to enhance their messaging experience with more features, stickers, videos and updates. Here are some of them:

        -

        Customization and personalization

        -

        With com.azeplus, you can customize and personalize your WhatsApp app according to your taste and style. You can change the theme or font of your app, choose from different colors and backgrounds, adjust the size and shape of icons and buttons, add your own wallpaper or logo, etc. You can also create your own stickers with AZE PLUS Stiker app, using photos, images, emojis, text, etc. You can make your stickers funny, cute, cool, or whatever you want.

        -

        Privacy and security

        -

With com.azeplus, you can protect your privacy and security while using WhatsApp. You can hide your online status or last seen time from others, disable read receipts or blue ticks, lock your chats with a password or fingerprint, encrypt your messages with end-to-end encryption, back up and restore your chats with cloud storage or local storage, etc. You can also control who can see your profile picture or status message, who can call you or add you to groups, who can send you messages or media, etc. You can also use anti-ban protection to prevent your account from being banned by WhatsApp.

        -

        Entertainment and fun

        -

        With com.azeplus, you can have more entertainment and fun while using WhatsApp. You can access thousands of stickers from various categories and themes, such as animals, cartoons, celebrities, memes, movies, sports, etc. You can also watch videos from AZE PLUS YouTube channel that show you the latest news and tips about WhatsApp Plus mods, AZE PLUS Stiker app, com.azeplus website, and other related topics. You can also enjoy exclusive content and features from AZE PLUS subscription service, such as unlimited themes and fonts, unlimited stickers and sticker packs, priority support and assistance, early access to new features and updates, discounts and offers, etc.

        -

        How to download and install com.azeplus

        -

If you want to download and install com.azeplus on your device, you need to follow a few simple steps. Here they are:

        -

        Requirements and compatibility

        -

Before you download and install com.azeplus, you need to make sure that your device meets the following requirements and compatibility criteria:

        -
          -
        • Your device must be running on Android 4.0 or higher
        • Your device must have enough storage space to download and install the files
        • Your device must have a stable internet connection to download the files
        • Your device must allow installation of apps from unknown sources (you can enable this option from your device settings)
        • You must uninstall the official WhatsApp app from your device (you can backup your chats before uninstalling)

        Steps and instructions

        -

        After you have checked the requirements and compatibility, you can follow these steps and instructions to download and install com.azeplus:

        -
          -
        1. Go to the com.azeplus website from your device browser
        2. Select the WhatsApp Plus mod that you want to download from the list of available mods
        3. Click on the download button and wait for the file to be downloaded on your device
        4. After the file is downloaded, go to your device file manager and locate the file
        5. Tap on the file and follow the on-screen instructions to install the mod on your device
        6. After the installation is completed, open the mod and verify your phone number with an OTP code
        7. Restore your chats from backup if you have any or start a new chat with your contacts
        8. Enjoy the features and options of the mod

        To download and install AZE PLUS Stiker app, you can follow the same steps as above, except that you need to select the AZE PLUS Stiker app from the list of available apps instead of a WhatsApp Plus mod.

        -

        Conclusion

        -

        Com.azeplus is a website that offers various WhatsApp Plus mods, an app for creating and sharing stickers, a YouTube channel for learning about the latest news and tips, and a subscription service for accessing premium content. Com.azeplus can enhance your WhatsApp experience with more features, stickers, videos and updates. You can customize and personalize your WhatsApp app, protect your privacy and security, have more entertainment and fun, and enjoy exclusive content and features with com.azeplus. You can download and install com.azeplus on your device by following some simple steps and instructions.

        -

        FAQs

        -

        Here are some frequently asked questions about com.azeplus:

        -

        Is com.azeplus safe to use?

        -

        Com.azeplus is safe to use as long as you download it from the official website or the Google Play Store. Com.azeplus does not contain any viruses or malware that can harm your device or data. Com.azeplus also uses anti-ban protection to prevent your account from being banned by WhatsApp.

        -

        Is com.azeplus free to use?

        -

        Com.azeplus is free to use for most of its features and options. However, if you want to access exclusive content and features that are not available in the free versions of WhatsApp Plus mods and AZE PLUS Stiker app, you need to subscribe to AZE PLUS subscription service, which costs $9.99 per month or $99.99 per year.

        -

        Can I use com.azeplus with other WhatsApp mods?

        -

        No, you cannot use com.azeplus with other WhatsApp mods, such as GBWhatsApp, FMWhatsApp, YOWhatsApp, etc. Com.azeplus is only compatible with its own WhatsApp Plus mods that it provides on its website.

        -

        Can I use com.azeplus with the official WhatsApp app?

        -

        No, you cannot use com.azeplus with the official WhatsApp app. You need to uninstall the official WhatsApp app from your device before you can download and install com.azeplus. You can backup your chats before uninstalling the official WhatsApp app and restore them after installing com.azeplus.

        -

        How can I contact AZE PLUS team for support or feedback?

        -

        If you have any questions, issues, suggestions, or feedback about com.azeplus, you can contact AZE PLUS team by using the following methods:

        -
          -
        • Email: azeplus@gmail.com
        • WhatsApp: +90 555 555 5555
        • YouTube: AZE PLUS
        • Facebook: AZE PLUS
        • Instagram: azeplus
        • Twitter: @azeplus

        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/GTA 5 Android Apk Data The Ultimate Guide to Download and Play the Game in 400MB without Verification.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/GTA 5 Android Apk Data The Ultimate Guide to Download and Play the Game in 400MB without Verification.md deleted file mode 100644 index 4b85a725475e363b296ac82c53cf79d371bff4ba..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/GTA 5 Android Apk Data The Ultimate Guide to Download and Play the Game in 400MB without Verification.md +++ /dev/null @@ -1,86 +0,0 @@ - -

        GTA 5 Android APK + Data Download 400MB No Verification

        -

        Introduction

        -

        If you are a fan of action-adventure games, you must have heard of GTA 5, one of the most popular and successful games of all time. GTA 5 is a masterpiece by Rockstar Games that offers an amazing experience of playing as three different characters in a vast and dynamic open world. The game was originally released for PlayStation 3 and Xbox 360 in 2013, and later for PlayStation 4, Xbox One, and PC in 2014 and 2015. But what if you want to play GTA 5 on your Android device? Is it possible to download GTA 5 for Android in a small size without any verification? The answer is yes, and in this article, we will show you how to do it.

        -

        gta 5 android apk + data download 400mb no verification


        Download Zip === https://ssurll.com/2uNSdp



        -

        What is GTA 5?

        -

        GTA 5 is the fifth main installment in the Grand Theft Auto series, which is known for its open-world sandbox gameplay, where you can explore, drive, shoot, fight, and do whatever you want. GTA 5 follows the story of three protagonists: Michael, a retired bank robber; Franklin, a street hustler; and Trevor, a psychopathic criminal. The game is set in Los Santos, a fictional city based on Los Angeles, and its surrounding areas. You can switch between the three characters at any time, and experience their unique perspectives and missions. You can also play online with other players in GTA Online, where you can create your own character, join crews, compete in races, heists, deathmatches, and more.

        -

        Why download GTA 5 for Android?

        -

        GTA 5 is undoubtedly one of the best games ever made, but it is also a very demanding game that requires a powerful device to run smoothly. Not everyone has access to a console or a PC that can handle GTA 5, but many people have an Android smartphone or tablet that they use every day. Downloading GTA 5 for Android allows you to enjoy the game on your mobile device, without compromising on the quality or the features. You can play GTA 5 on your Android device anytime and anywhere, as long as you have enough storage space and battery life. You can also connect a controller or a keyboard and mouse to your Android device, for a more comfortable gaming experience.

        -

        How to download GTA 5 for Android in 400MB without verification

        -

        Downloading GTA 5 for Android is not as hard as you might think. You don't need to root your device or go through any complicated verification process. All you need is a stable internet connection, some free storage space, and a few minutes of your time. Here are the steps to follow:

        -

        Step 1: Download the APK and OBB files

        -

        The first thing you need to do is to download the APK and OBB files of GTA 5 for Android. These are the files that contain the game data and the installation package. You can find many websites that offer these files for free, but be careful not to download any fake or malicious files that might harm your device. One of the trusted sources that we recommend is [Wapzola](^1^), where you can find the latest version of GTA 5 for Android in different sizes (35 MB, 400 MB, or 2.6 GB) depending on your preference. Just click on the link below and choose the size that suits you best.

        -

        Congratulations, you have successfully downloaded GTA 5 for Android in 400MB without verification. Now you can enjoy the game on your mobile device, with all the features and functions that you would expect from the PC or console version. You can explore the city of Los Santos, complete missions, drive cars, bikes, planes, boats, and more. You can also play online with other players, join crews, customize your character, and participate in various events and activities. GTA 5 for Android is a game that will keep you entertained for hours and hours.

        -

        Features of GTA 5 for Android

        -

        GTA 5 for Android is not just a port of the original game, but a fully optimized and enhanced version that takes advantage of the capabilities of mobile devices. Here are some of the features that you can expect from GTA 5 for Android:

        -

        gta 5 android apk + data download 400mb no verification offline
        -gta 5 android apk + data download 400mb no verification highly compressed
        -gta 5 android apk + data download 400mb no verification full version
        -gta 5 android apk + data download 400mb no verification latest update
        -gta 5 android apk + data download 400mb no verification free link
        -gta 5 android apk + data download 400mb no verification working mod
        -gta 5 android apk + data download 400mb no verification easy steps
        -gta 5 android apk + data download 400mb no verification real game
        -gta 5 android apk + data download 400mb no verification best graphics
        -gta 5 android apk + data download 400mb no verification direct download
        -gta 5 android apk + data download 400mb no verification online play
        -gta 5 android apk + data download 400mb no verification unlimited money
        -gta 5 android apk + data download 400mb no verification fast speed
        -gta 5 android apk + data download 400mb no verification new features
        -gta 5 android apk + data download 400mb no verification safe and secure
        -gta 5 android apk + data download 400mb no verification without root
        -gta 5 android apk + data download 400mb no verification all missions
        -gta 5 android apk + data download 400mb no verification original file
        -gta 5 android apk + data download 400mb no verification low requirements
        -gta 5 android apk + data download 400mb no verification smooth gameplay
        -gta 5 android apk + data download 400mb no verification premium quality
        -gta 5 android apk + data download 400mb no verification support all devices
        -gta 5 android apk + data download 400mb no verification with cheats
        -gta 5 android apk + data download 400mb no verification mega link
        -gta 5 android apk + data download 400mb no verification mediafire link
        -gta 5 android apk + data download 400mb no verification google drive link
        -gta 5 android apk + data download 400mb no verification zip file
        -gta 5 android apk + data download 400mb no verification rar file
        -gta 5 android apk + data download 400mb no verification obb file
        -gta 5 android apk + data download 400mb no verification iso file
        -gta 5 android apk + data download 400mb no verification ppsspp emulator
        -gta 5 android apk + data download 400mb no verification ps4 emulator
        -gta 5 android apk + data download 400mb no verification xbox emulator
        -gta 5 android apk + data download 400mb no verification pc emulator
        -gta 5 android apk + data download 400mb no verification phone emulator
        -gta 5 android apk + data download 400mb no verification tablet emulator
        -gta

        -

        Stunning graphics and realistic physics

        -

        GTA 5 for Android has amazing graphics that rival the PC or console version. The game uses advanced lighting, shadows, reflections, textures, and animations to create a realistic and immersive environment. The game also has realistic physics that affect the movement of vehicles, objects, and characters. You can see the damage effects on cars, buildings, and people, as well as the weather effects such as rain, fog, snow, and wind. GTA 5 for Android is a game that will impress you with its visual quality and detail.

        -

        Immersive gameplay and story mode

        -

        GTA 5 for Android has captivating gameplay that will keep you hooked for hours. The game has a story mode that follows the lives of three protagonists: Michael, Franklin, and Trevor. You can switch between them at any time, and experience their unique personalities, skills, and missions. The game has a nonlinear and branching storyline that depends on your choices and actions. You can also interact with various characters, such as friends, enemies, strangers, and animals. GTA 5 for Android has gameplay that will make you feel like you are living in a virtual world.

        -

        Open world and multiple activities

        -

        GTA 5 for Android has an open world that is huge and diverse. The game is set in Los Santos, a fictional city based on Los Angeles, and its surrounding areas. You can explore the urban areas, the countryside, the mountains, the desert, the ocean, and more. You can also find and visit various landmarks, such as the Hollywood sign, the Santa Monica pier, the Griffith Observatory, and more. The game has multiple activities that you can do in the open world, such as racing, golfing, tennis, hunting, fishing, yoga, parachuting, scuba diving, and more. GTA 5 for Android has an open world that will never bore you with its variety and possibilities.

        -

        Online multiplayer and customization

        -

        GTA 5 for Android has an online multiplayer mode called GTA Online, where you can play with other players from around the world. You can create your own character, join crews, compete in races, heists, deathmatches, and more. You can also customize your character, vehicles, weapons, clothes, and properties. GTA 5 for Android has an online multiplayer mode that will give you endless fun and challenges.

        -

        Conclusion

        -

        GTA 5 for Android is a game that you should not miss if you love action-adventure games. The game has everything you need to enjoy a thrilling and immersive gaming experience on your mobile device. You can download GTA 5 for Android in 400MB without verification by following the simple steps that we have shown you in this article. You can also choose the size that suits your device and preference. GTA 5 for Android is a game that will make you feel like you are part of a living and breathing world.

        -

        Summary of the article

        -

        In this article, we have covered the following topics:

        -
          -
        • What is GTA 5 and why download it for Android?
        • How to download GTA 5 for Android in 400MB without verification?
        • What are the features of GTA 5 for Android?

        FAQs

        -

        Here are some of the frequently asked questions about GTA 5 for Android:

        -
          -
        1. Is GTA 5 for Android safe to download and play?

          Yes, GTA 5 for Android is safe to download and play, as long as you download it from a trusted source, such as [Wapzola]. You should also scan the files with an antivirus app before installing them on your device.

        2. Is GTA 5 for Android compatible with all devices?

          No, GTA 5 for Android is not compatible with all devices. The game requires a device that has at least 2 GB of RAM, 4 GB of free storage space, and Android 4.0 or higher. The game also works better on devices that have a powerful processor and GPU.

        3. Can I play GTA 5 for Android offline?

          Yes, you can play GTA 5 for Android offline, but only the story mode. You will need an internet connection to play GTA Online, which is the online multiplayer mode.

        4. Can I use cheats in GTA 5 for Android?

          Yes, you can use cheats in GTA 5 for Android, but only in the story mode. You can find various cheat codes online that can help you unlock weapons, vehicles, money, health, and more. However, using cheats might affect your game progress and achievements.

        5. Can I update GTA 5 for Android?

          Yes, you can update GTA 5 for Android whenever there is a new version available. You can check the [Wapzola] website regularly to see if there are any updates or patches for the game. You can also follow their social media accounts to get the latest news and updates about the game.

        \ No newline at end of file diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/utils/file_client.py b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/utils/file_client.py deleted file mode 100644 index 7f38d9796da3899048924f2f803d1088927966b0..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/utils/file_client.py +++ /dev/null @@ -1,167 +0,0 @@ -# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py # noqa: E501 -from abc import ABCMeta, abstractmethod - - -class BaseStorageBackend(metaclass=ABCMeta): - """Abstract class of storage backends. - - All backends need to implement two apis: ``get()`` and ``get_text()``. - ``get()`` reads the file as a byte stream and ``get_text()`` reads the file - as texts. - """ - - @abstractmethod - def get(self, filepath): - pass - - @abstractmethod - def get_text(self, filepath): - pass - - -class MemcachedBackend(BaseStorageBackend): - """Memcached storage backend. - - Attributes: - server_list_cfg (str): Config file for memcached server list. - client_cfg (str): Config file for memcached client. - sys_path (str | None): Additional path to be appended to `sys.path`. - Default: None. - """ - - def __init__(self, server_list_cfg, client_cfg, sys_path=None): - if sys_path is not None: - import sys - sys.path.append(sys_path) - try: - import mc - except ImportError: - raise ImportError('Please install memcached to enable MemcachedBackend.') - - self.server_list_cfg = server_list_cfg - self.client_cfg = client_cfg - self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, self.client_cfg) - # mc.pyvector servers as a point which points to a memory cache - self._mc_buffer = mc.pyvector() - - def get(self, filepath): - filepath = str(filepath) - import mc - self._client.Get(filepath, self._mc_buffer) - value_buf = mc.ConvertBuffer(self._mc_buffer) - return value_buf - - def get_text(self, filepath): - raise NotImplementedError - - -class HardDiskBackend(BaseStorageBackend): - """Raw hard disks storage backend.""" - - def get(self, filepath): - filepath = str(filepath) - with open(filepath, 'rb') as f: - value_buf = f.read() - return value_buf - - def get_text(self, filepath): - filepath = str(filepath) - with open(filepath, 'r') as f: - value_buf = f.read() - return value_buf - - -class LmdbBackend(BaseStorageBackend): - """Lmdb storage backend. - - Args: - db_paths (str | list[str]): Lmdb database paths. - client_keys (str | list[str]): Lmdb client keys. Default: 'default'. - readonly (bool, optional): Lmdb environment parameter. If True, - disallow any write operations. Default: True. - lock (bool, optional): Lmdb environment parameter. If False, when - concurrent access occurs, do not lock the database. Default: False. - readahead (bool, optional): Lmdb environment parameter. If False, - disable the OS filesystem readahead mechanism, which may improve - random read performance when a database is larger than RAM. - Default: False. - - Attributes: - db_paths (list): Lmdb database path. - _client (list): A list of several lmdb envs. 
- """ - - def __init__(self, db_paths, client_keys='default', readonly=True, lock=False, readahead=False, **kwargs): - try: - import lmdb - except ImportError: - raise ImportError('Please install lmdb to enable LmdbBackend.') - - if isinstance(client_keys, str): - client_keys = [client_keys] - - if isinstance(db_paths, list): - self.db_paths = [str(v) for v in db_paths] - elif isinstance(db_paths, str): - self.db_paths = [str(db_paths)] - assert len(client_keys) == len(self.db_paths), ('client_keys and db_paths should have the same length, ' - f'but received {len(client_keys)} and {len(self.db_paths)}.') - - self._client = {} - for client, path in zip(client_keys, self.db_paths): - self._client[client] = lmdb.open(path, readonly=readonly, lock=lock, readahead=readahead, **kwargs) - - def get(self, filepath, client_key): - """Get values according to the filepath from one lmdb named client_key. - - Args: - filepath (str | obj:`Path`): Here, filepath is the lmdb key. - client_key (str): Used for distinguishing differnet lmdb envs. - """ - filepath = str(filepath) - assert client_key in self._client, (f'client_key {client_key} is not ' 'in lmdb clients.') - client = self._client[client_key] - with client.begin(write=False) as txn: - value_buf = txn.get(filepath.encode('ascii')) - return value_buf - - def get_text(self, filepath): - raise NotImplementedError - - -class FileClient(object): - """A general file client to access files in different backend. - - The client loads a file or text in a specified backend from its path - and return it as a binary file. it can also register other backend - accessor with a given name and backend class. - - Attributes: - backend (str): The storage backend type. Options are "disk", - "memcached" and "lmdb". - client (:obj:`BaseStorageBackend`): The backend object. - """ - - _backends = { - 'disk': HardDiskBackend, - 'memcached': MemcachedBackend, - 'lmdb': LmdbBackend, - } - - def __init__(self, backend='disk', **kwargs): - if backend not in self._backends: - raise ValueError(f'Backend {backend} is not supported. Currently supported ones' - f' are {list(self._backends.keys())}') - self.backend = backend - self.client = self._backends[backend](**kwargs) - - def get(self, filepath, client_key='default'): - # client_key is used only for lmdb, where different fileclients have - # different lmdb environments. 
- if self.backend == 'lmdb': - return self.client.get(filepath, client_key) - else: - return self.client.get(filepath) - - def get_text(self, filepath): - return self.client.get_text(filepath) diff --git a/spaces/sooolee/summarize-transcripts-gradio/README.md b/spaces/sooolee/summarize-transcripts-gradio/README.md deleted file mode 100644 index 9a1b16a163c34fee213146e9f6dd2e7e3340f1ee..0000000000000000000000000000000000000000 --- a/spaces/sooolee/summarize-transcripts-gradio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Summarize Transcripts Gradio -emoji: 👀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/spock74/whisper-webui/src/modelCache.py b/spaces/spock74/whisper-webui/src/modelCache.py deleted file mode 100644 index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000 --- a/spaces/spock74/whisper-webui/src/modelCache.py +++ /dev/null @@ -1,17 +0,0 @@ -class ModelCache: - def __init__(self): - self._cache = dict() - - def get(self, model_key: str, model_factory): - result = self._cache.get(model_key) - - if result is None: - result = model_factory() - self._cache[model_key] = result - return result - - def clear(self): - self._cache.clear() - -# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times. -GLOBAL_MODEL_CACHE = ModelCache() \ No newline at end of file diff --git a/spaces/springml111/T5_Paraphrase_demo/README.md b/spaces/springml111/T5_Paraphrase_demo/README.md deleted file mode 100644 index ab5fda3f67cc7e6726be26972b67d325b8b949b1..0000000000000000000000000000000000000000 --- a/spaces/springml111/T5_Paraphrase_demo/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: T5_Paraphrase_demo -emoji: 👁 -colorFrom: indigo -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
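The `FileClient` and `ModelCache` classes removed earlier in this diff are small, dependency-light utilities whose call signatures are visible in the deleted code. As a point of reference, a minimal usage sketch might look like the following; the import paths are inferred from the deleted file locations, and the file path and model factory are placeholders rather than anything used by these Spaces.

```python
# Hypothetical usage sketch of the helpers deleted above; the import paths,
# the image path, and the placeholder model factory are assumptions, not
# code taken from the deleted Spaces.
from basicsr.utils.file_client import FileClient
from src.modelCache import ModelCache, GLOBAL_MODEL_CACHE

# Read a local file as raw bytes through the default disk backend.
client = FileClient(backend='disk')
img_bytes = client.get('inputs/example.png')  # placeholder path

# Reuse one loaded object across calls instead of rebuilding it each time.
cache = ModelCache()
model = cache.get('demo-model', lambda: {"name": "placeholder model"})

# The module-level GLOBAL_MODEL_CACHE defined in modelCache.py works the same way.
GLOBAL_MODEL_CACHE.clear()
```

The caching pattern reflects the comment in the deleted `modelCache.py`: daemon processes look up an already-loaded model by key instead of paying the load cost on every request.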
diff --git a/spaces/sqc1729/bingi/src/components/chat-notification.tsx b/spaces/sqc1729/bingi/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
        - ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
        - 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
        - ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
        -
        -
        -
        -
        - error - {getAction(message.error, () => bot.resetConversation())} -
        -
        -
        -
        -
        - ) -} diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py deleted file mode 100644 index efc7ae40bf8fed6c2384cbc6f94477c4caa4c10c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn.functional as F - - -class MeanPoolGatingNetwork(torch.nn.Module): - """A simple mean-pooling gating network for selecting experts. - - This module applies mean pooling over an encoder's output and returns - reponsibilities for each expert. The encoder format is expected to match - :class:`fairseq.models.transformer.TransformerEncoder`. - """ - - def __init__(self, embed_dim, num_experts, dropout=None): - super().__init__() - self.embed_dim = embed_dim - self.num_experts = num_experts - - self.fc1 = torch.nn.Linear(embed_dim, embed_dim) - self.dropout = torch.nn.Dropout(dropout) if dropout is not None else None - self.fc2 = torch.nn.Linear(embed_dim, num_experts) - - def forward(self, encoder_out): - if not ( - "encoder_out" in encoder_out - and "encoder_padding_mask" in encoder_out - and encoder_out["encoder_out"][0].size(2) == self.embed_dim - ): - raise ValueError("Unexpected format for encoder_out") - - # mean pooling over time - encoder_padding_mask = encoder_out["encoder_padding_mask"][0] # B x T - encoder_out = encoder_out["encoder_out"][0].transpose(0, 1) # B x T x C - if encoder_padding_mask is not None: - encoder_out = encoder_out.clone() # required because of transpose above - encoder_out[encoder_padding_mask] = 0 - ntokens = torch.sum(~encoder_padding_mask, dim=1, keepdim=True) - x = torch.sum(encoder_out, dim=1) / ntokens.type_as(encoder_out) - else: - x = torch.mean(encoder_out, dim=1) - - x = torch.tanh(self.fc1(x)) - if self.dropout is not None: - x = self.dropout(x) - x = self.fc2(x) - return F.log_softmax(x, dim=-1, dtype=torch.float32).type_as(x) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp deleted file mode 100644 index 707219105a17a691e43de1296a72bbaffa0c7fe9..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp +++ /dev/null @@ -1,55 +0,0 @@ -/* -Copyright (c) Microsoft Corporation. -Licensed under the MIT License. 
-*/ - -#include -#include - -/* -CPP Binding for CUDA OP -*/ - -// CUDA forward declarations -torch::Tensor ngram_repeat_block_cuda_forward( - torch::Tensor tokens, - torch::Tensor lprobs, - int bsz, - int step, - int beam_size, - int no_repeat_ngram_size); - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -// Input check and call to CUDA OP -// Backward method not required -torch::Tensor ngram_repeat_block_forward( - torch::Tensor tokens, - torch::Tensor lprobs, - int bsz, - int step, - int beam_size, - int no_repeat_ngram_size) { - CHECK_INPUT(tokens); - CHECK_INPUT(lprobs); - assert(bsz > 0); - assert(step >= 0); - assert(beam_size > 0); - assert(no_repeat_ngram_size > 0); - - return ngram_repeat_block_cuda_forward( - tokens, lprobs, bsz, step, beam_size, no_repeat_ngram_size); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def( - "forward", - &ngram_repeat_block_forward, - "No Repeat Ngram Block forward (CUDA)"); -} diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py deleted file mode 100644 index ac6340fa0744a08d2b527972dfc669573fb4e1c3..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from argparse import Namespace - -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim import FairseqOptimizer - - -class FairseqLRScheduler(object): - def __init__(self, cfg, optimizer): - super().__init__() - if optimizer is not None and not isinstance(optimizer, FairseqOptimizer): - raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.cfg = cfg - self.optimizer = optimizer - self.best = None - - @classmethod - def add_args(cls, parser): - """Add arguments to the parser for this LR scheduler.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {"best": self.best} - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - self.best = state_dict["best"] - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - pass - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - if val_loss is not None: - if self.best is None: - self.best = val_loss - else: - self.best = min(self.best, val_loss) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return self.optimizer.get_lr() - - def reinit(self, total_num_update, num_updates): - pass - - -class LegacyFairseqLRScheduler(FairseqLRScheduler): - def __init__(self, args: Namespace, optimizer): - if not isinstance(optimizer, FairseqOptimizer): - raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.args = args - self.optimizer = optimizer - self.best = None diff --git a/spaces/stomexserde/gpt4-ui/Examples/Corel Draw X6 Serial Number Change UPD.md b/spaces/stomexserde/gpt4-ui/Examples/Corel Draw X6 Serial Number Change UPD.md deleted file mode 100644 index e43c295a96ff6c5cccd97d2e748a3499462b4ff9..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Corel Draw X6 Serial Number Change UPD.md +++ /dev/null @@ -1,34 +0,0 @@ - -

        How to Change Serial Number in Corel Draw X6

        -

        If you have installed Corel Draw X6 on multiple computers using the same serial number, you may encounter problems with activation or licensing. To avoid this, you need to change the serial number on each computer to match the one you purchased. Here are the steps to do that:

        -

        Corel Draw X6 Serial Number Change


        Download ❤❤❤ https://urlgoal.com/2uIaFB



        -
          -
        1. Open Corel Draw X6 and click on the Help menu.
        2. Select About Corel Draw and click on the License Information button.
        3. Click on the Change Serial Number button and enter your new serial number.
        4. Click on OK and restart Corel Draw X6.
        5. Repeat these steps for each computer where you have installed Corel Draw X6.

        Alternatively, you can uninstall Corel Draw X6 from each computer and reinstall it using your new serial number. However, before you uninstall, make sure you are signed in to your Corel account (green man icon showing in the bottom right corner). Otherwise, Corel's records will show the license key as being in use and it can prevent you from reinstalling[^1^].

        -

        Changing your serial number in Corel Draw X6 is a simple process that can help you avoid activation or licensing issues. By following these steps, you can enjoy using your Corel Draw X6 software without any problems.

        - -

        Why Use Corel Draw X6?

        -

        Corel Draw X6 is a powerful and versatile vector graphics software that can help you create stunning designs for different purposes. Whether you want to design logos, brochures, posters, banners, illustrations, or web graphics, Corel Draw X6 has the tools and features you need to unleash your creativity. Here are some of the benefits of using Corel Draw X6:

        -

        -
          -
        • It supports 64-bit and multi-core processing, which means it can handle large and complex files faster and smoother.
        • It has a touch-friendly GUI that adapts to your device and preferences, making it easy to work on touchscreens and tablets.
        • It has advanced stylus modifications that let you adjust pressure, tilt, bearing, and rotation of your pen device for more natural and expressive strokes.
        • It has improved vector previews, handles, and nodes that give you more control and accuracy over your shapes and curves.
        • It allows you to choose a shape for each node type, such as smooth, cusp, or symmetrical, for more flexibility and customization.
        • It has a Gaussian Blur special effect that lets you apply realistic blurs to your objects without affecting their outlines.
        • It has a Smart Carver tool that lets you remove unwanted objects from your photos seamlessly and adjust the aspect ratio of your images.
        • It has a Smear, Twirl, Attract, and Repel tool that lets you refine your vector objects by pushing, pulling, rotating, or shrinking them.
        • It has a Placeholder Text tool that lets you mock up a page layout with dummy text so you can focus on the design elements.
        • It has a Font Manager that lets you manage and organize your fonts easily and efficiently.

        How to Get Corel Draw X6?

        -

        If you are interested in using Corel Draw X6 for your graphic design projects, you have two options: you can either buy it or download it for free. Buying Corel Draw X6 will give you access to the full version of the software with all the features and updates. You can buy Corel Draw X6 from the official website of Corel Corporation or from authorized resellers. The price of Corel Draw X6 varies depending on the edition (Standard or Premium) and the region. You can also get a free trial version of Corel Draw X6 for 15 days to test its capabilities before buying it.

        -

        If you want to download Corel Draw X6 for free, you have to be careful about the source and the legality of the download. There are many websites that claim to offer free downloads of Corel Draw X6, but they may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Moreover, downloading Corel Draw X6 for free may violate the copyright laws and terms of use of Corel Corporation. Therefore, it is not recommended to download Corel Draw X6 for free from unauthorized sources. Instead, you should use the official website of Corel Corporation or other trusted platforms to get Corel Draw X6 legally and safely.

        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Deadwood.S01.S02.S03.Complete.720p.X264.anoXmous.md b/spaces/stomexserde/gpt4-ui/Examples/Deadwood.S01.S02.S03.Complete.720p.X264.anoXmous.md deleted file mode 100644 index 413f2a34514ff5ee8b83d71b51acdd87767e81d4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Deadwood.S01.S02.S03.Complete.720p.X264.anoXmous.md +++ /dev/null @@ -1,15 +0,0 @@ - -Here is a possible title and article with HTML formatting for the keyword "Deadwood.S01.S02.S03.Complete.720p.X264.anoXmous": - -

        Deadwood: A Western Drama Series Worth Watching

        -

        If you are a fan of westerns, you might want to check out Deadwood, a critically acclaimed drama series that aired on HBO from 2004 to 2006. The show is set in the 1870s, in the lawless town of Deadwood, South Dakota, where gold miners, outlaws, prostitutes, and entrepreneurs clash and coexist. The show features a large ensemble cast of memorable characters, such as Al Swearengen (Ian McShane), the ruthless owner of the Gem Saloon; Seth Bullock (Timothy Olyphant), the former marshal who becomes the town's unofficial sheriff; and Calamity Jane (Robin Weigert), the legendary frontierswoman and friend of Wild Bill Hickok (Keith Carradine).

        -

        Deadwood.S01.S02.S03.Complete.720p.X264.anoXmous


        Download Filehttps://urlgoal.com/2uIaEu



        -

        The show is known for its realistic and gritty portrayal of the Old West, its complex and nuanced storytelling, its rich and authentic dialogue, and its superb acting and directing. The show has won eight Emmy Awards and one Golden Globe Award, and has been ranked among the best TV shows of all time by various critics and publications. The show was canceled after three seasons due to budget issues, but was revived in 2019 with a feature-length film that served as a satisfying conclusion to the series.

        -

        If you want to watch or rewatch Deadwood, you can download the complete series in high quality from the torrent link below. The torrent contains all three seasons of the show, with a total of 36 episodes, each with a resolution of 720p and a file size of around 500 MB. The video format is X264, which is compatible with most media players and devices. The torrent also includes English subtitles for each episode.

        -

        Download Deadwood.S01.S02.S03.Complete.720p.X264.anoXmous

        Here is a possible continuation of the article: - -

        Deadwood is not only a historical drama, but also a political and social commentary on the themes of power, corruption, violence, morality, and civilization. The show explores how the town of Deadwood evolves from a chaotic and anarchic camp to a more organized and structured community, and how the characters adapt to the changing circumstances and conflicts. The show also depicts the cultural and racial diversity of the town, as well as the tensions and alliances that arise among the different groups, such as the Native Americans, the Chinese, the African Americans, and the immigrants. The show also incorporates real historical events and figures into its fictional narrative, such as the Black Hills Gold Rush, the Smallpox Epidemic, the Deadwood Fire, and the assassination of Wild Bill Hickok.

        -

        -

        Deadwood is a show that will immerse you in a different time and place, and make you care about the fate of its characters. It is a show that will challenge you with its complex and layered storytelling, and reward you with its brilliant and satisfying writing. It is a show that will impress you with its stunning and authentic production design, and captivate you with its outstanding and charismatic performances. It is a show that will make you laugh, cry, gasp, and cheer. It is a show that you will not regret watching.

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Donnie Darko HOT Full Movie Download In Hindi Aeree White Simpson.md b/spaces/stomexserde/gpt4-ui/Examples/Donnie Darko HOT Full Movie Download In Hindi Aeree White Simpson.md deleted file mode 100644 index 53592a241794c6ee91fa2a21f300c5357cd60854..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Donnie Darko HOT Full Movie Download In Hindi Aeree White Simpson.md +++ /dev/null @@ -1,18 +0,0 @@ - -

        Donnie Darko Full Movie Download In Hindi: A Mind-Bending Sci-Fi Drama

        -

        If you are looking for a movie that will challenge your perception of reality and time, then you should watch Donnie Darko. This 2001 film, directed by Richard Kelly, stars Jake Gyllenhaal as a troubled teenager who has visions of a mysterious rabbit-like creature named Frank. Frank tells Donnie that the world will end in 28 days, and guides him to commit various acts that seem to have a deeper purpose.

        -

        Donnie Darko Full Movie Download In Hindi aeree white simpson


        Download Zip ☆☆☆ https://urlgoal.com/2uI6sj



        -

        Donnie Darko is a cult classic that explores themes such as fate, free will, parallel universes, and the meaning of life. It is a complex and layered story that requires multiple viewings to fully appreciate. The movie also features an impressive cast of supporting actors, including Jena Malone, Drew Barrymore, Patrick Swayze, Noah Wyle, and Maggie Gyllenhaal.

        -

        If you want to watch Donnie Darko in Hindi, you can download it from MoviesMint[^1^], a website that offers dual audio movies in high quality. You can choose from 480p, 720p, or 1080p resolutions, depending on your preference and internet speed. You can also stream the movie online if you don't want to download it.

        -

        Donnie Darko is a movie that will make you think and feel. It is a masterpiece of sci-fi cinema that deserves your attention. Download it today and enjoy this mind-bending drama.

        -

        - -

        Donnie Darko has received critical acclaim for its originality, intelligence, and performances. The movie has a 87% rating on Rotten Tomatoes[^2^], with critics praising its daring and mind-bending vision. Roger Ebert gave the movie three and a half stars out of four, calling it \"a kind of movie that calls out not merely to be experienced but to be solved.\"[^3^] He also complimented Gyllenhaal's \"remarkable performance\" as the troubled title character.

        -

        The movie has also developed a loyal cult following over the years, with fans analyzing and debating its intricate plot and symbolism. The movie has spawned several websites, books, and documentaries that attempt to explain its mysteries and themes. The movie also features a memorable soundtrack that includes songs by Tears for Fears, Echo and the Bunnymen, Joy Division, and Duran Duran.

        -

        Donnie Darko is a movie that will stay with you long after you watch it. It is a rare film that combines sci-fi, drama, horror, comedy, and romance in a captivating and unique way. It is a movie that challenges you to think and feel, and rewards you with a rich and satisfying experience. Don't miss this opportunity to watch Donnie Darko in Hindi. Download it now from MoviesMint[^1^] and enjoy this cult classic.

        - -

        Donnie Darko also has many interesting trivia and behind-the-scenes facts that add to its appeal. For example, did you know that the movie was filmed over a period of 28 days, which matches the length of time depicted in the film? Or that Jake Gyllenhaal used the strategy of rarely blinking to enhance his psychotic creepiness as he is driven by Frank? Or that Patrick Swayze wore his own clothes from the 1980s for the film?

        -

        Another fun fact is that writer and director Richard Kelly came up with the idea for the future blobs while watching football. He saw John Madden use a \"telestrator\", where he'd diagram a paused video to show where the players were about to go moments before letting the tape roll. Kelly watched this while high, and started to think about what would happen, hypothetically, if \"someone upstairs\" was doing that to humans.

        -

        Donnie Darko is a movie that is full of surprises and secrets. It is a movie that invites you to explore its world and its characters. It is a movie that will make you question your own reality and destiny. It is a movie that you will never forget. Download Donnie Darko in Hindi from MoviesMint[^1^] today and join the cult of Donnie Darko.

        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/English Hindi Zara Sambhal Ke Book Free Download.md b/spaces/stomexserde/gpt4-ui/Examples/English Hindi Zara Sambhal Ke Book Free Download.md deleted file mode 100644 index 2cbc80371655ead7de194d2faa7420cc9a5e918d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/English Hindi Zara Sambhal Ke Book Free Download.md +++ /dev/null @@ -1,35 +0,0 @@ -
        -

        How to Download Zara Sambhal Ke Book in English and Hindi for Free

        -

        Zara Sambhal Ke is a 2013 Indian movie that portrays the life of sex workers and their children. It also exhibits the problem of HIV and AIDS in sex workers and society. The movie was directed by Sharadsingh Thakur and starred Yoggitta Dandaykar, Esshan Khan, Mohini Nillakant, and Sunita Rao. The movie received positive reviews from critics and audiences for its realistic depiction of the issues faced by sex workers.

        -

        If you are interested in watching this movie or reading its novelization, you might be wondering how to download Zara Sambhal Ke book in English and Hindi for free. Well, you are in luck because we have some tips for you to get your hands on this book without spending a dime.

        -

        english hindi Zara Sambhal Ke book free download


        Download Zip ★★★★★ https://urlgoal.com/2uIbSQ



        -

        Tip 1: Use Google Play

        -

        One of the easiest ways to download Zara Sambhal Ke book in English and Hindi for free is to use Google Play. Google Play is a digital distribution service that offers various kinds of content, including movies, books, games, apps, and more. You can access Google Play from your web browser or from your Android device.

        -

        To download Zara Sambhal Ke book in English and Hindi for free from Google Play, you need to follow these steps:

        -
          -
        1. Go to this link to find the movie on Google Play.
        2. Click on the "Add to wishlist" button to save the movie for later.
        3. Wait for a promotion or a discount offer that might make the movie free or cheaper.
        4. When you see such an offer, click on the "Buy" or "Rent" button to get the movie.
        5. Once you have the movie, you can watch it online or offline on your device.
        6. You can also find the novelization of the movie on Google Play by searching for "Zara Sambhal Ke book".

        Tip 2: Use IMDb

        -

        Another way to download Zara Sambhal Ke book in English and Hindi for free is to use IMDb. IMDb is an online database that provides information about movies, TV shows, actors, directors, writers, and more. You can also watch some movies and TV shows for free on IMDb with ads.

        -

        -

        To download Zara Sambhal Ke book in English and Hindi for free from IMDb, you need to follow these steps:

        -
          -
        1. Go to this link to find the movie on IMDb.
        2. Click on the "Watch on Eros Now with Prime Video Channels" button to see if you can watch the movie for free with a subscription.
        3. If you have a subscription to Eros Now or Prime Video, you can watch the movie for free. If not, you can sign up for a free trial or pay a monthly fee.
        4. Once you have access to the movie, you can watch it online or offline on your device.
        5. You can also find the novelization of the movie on IMDb by searching for "Zara Sambhal Ke book".

        Tip 3: Use Lineserved

        -

        A third way to download Zara Sambhal Ke book in English and Hindi for free is to use Lineserved. Lineserved is a website that offers various kinds of ebooks, including novels, poetry, essays, and more. You can download ebooks in PDF format for free from Lineserved.

        -

        To download Zara Sambhal Ke book in English and Hindi for free from Lineserved, you need to follow these steps:

        -
          -
        1. Go to this link to find the novel on Lineserved.
        2. Click on the "Download" button to

          \ No newline at end of file diff --git a/spaces/sudo-ai/zero123plus-demo-space/Dockerfile b/spaces/sudo-ai/zero123plus-demo-space/Dockerfile deleted file mode 100644 index 3c5b6b1fb2eb6d1c18d4940065e2e6932131b914..0000000000000000000000000000000000000000 --- a/spaces/sudo-ai/zero123plus-demo-space/Dockerfile +++ /dev/null @@ -1,41 +0,0 @@ -FROM nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 - -ARG DEBIAN_FRONTEND=noninteractive - -ENV PYTHONUNBUFFERED=1 - -RUN apt-get update && apt-get install --no-install-recommends -y \ - build-essential \ - python3.10 \ - curl \ - python3-pip \ - git \ - ffmpeg \ - && apt-get clean && rm -rf /var/lib/apt/lists/* - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH \ - PYTHONPATH=$HOME/app \ - PYTHONUNBUFFERED=1 \ - SYSTEM=spaces - -RUN pip3 install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app - -RUN python3 download_checkpoints.py - -CMD ["python3", "gradio_app.py"] diff --git a/spaces/szukevin/VISOR-GPT/train/scripts/convert_s2t_from_tencentpretrain_to_huggingface.py b/spaces/szukevin/VISOR-GPT/train/scripts/convert_s2t_from_tencentpretrain_to_huggingface.py deleted file mode 100644 index 5fb417b56013ddb76aa4c28b577a8c17f6f41804..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/scripts/convert_s2t_from_tencentpretrain_to_huggingface.py +++ /dev/null @@ -1,141 +0,0 @@ -import argparse -import collections -import torch - - -def convert_transformer_encoder_from_huggingface_to_tencentpretrain(input_model, output_model, layers_num): - for i in range(layers_num): - output_model['model.encoder.layers.' + str(i) + '.self_attn.q_proj.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.linear_layers.0.weight'] - output_model['model.encoder.layers.' + str(i) + '.self_attn.q_proj.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.linear_layers.0.bias'] - output_model['model.encoder.layers.' + str(i) + '.self_attn.k_proj.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.linear_layers.1.weight'] - output_model['model.encoder.layers.' + str(i) + '.self_attn.k_proj.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.linear_layers.1.bias'] - output_model['model.encoder.layers.' + str(i) + '.self_attn.v_proj.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.linear_layers.2.weight'] - output_model['model.encoder.layers.' + str(i) + '.self_attn.v_proj.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.linear_layers.2.bias'] - output_model['model.encoder.layers.' + str(i) + '.self_attn.out_proj.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.final_linear.weight'] - output_model['model.encoder.layers.' + str(i) + '.self_attn.out_proj.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.self_attn.final_linear.bias'] - - output_model['model.encoder.layers.' + str(i) + '.self_attn_layer_norm.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.layer_norm_1.gamma'] - output_model['model.encoder.layers.' 
+ str(i) + '.self_attn_layer_norm.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.layer_norm_1.beta'] - - output_model['model.encoder.layers.' + str(i) + '.fc1.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.feed_forward.linear_1.weight'] - output_model['model.encoder.layers.' + str(i) + '.fc1.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.feed_forward.linear_1.bias'] - output_model['model.encoder.layers.' + str(i) + '.fc2.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.feed_forward.linear_2.weight'] - output_model['model.encoder.layers.' + str(i) + '.fc2.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.feed_forward.linear_2.bias'] - - output_model['model.encoder.layers.' + str(i) + '.final_layer_norm.weight'] = \ - input_model['encoder.transformer.' + str(i) + '.layer_norm_2.gamma'] - output_model['model.encoder.layers.' + str(i) + '.final_layer_norm.bias'] = \ - input_model['encoder.transformer.' + str(i) + '.layer_norm_2.beta'] - - -def convert_transformer_decoder_from_huggingface_to_tencentpretrain(input_model, output_model, layers_num): - for i in range(layers_num): - output_model['model.decoder.layers.' + str(i) + '.self_attn.q_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.linear_layers.0.weight'] - output_model['model.decoder.layers.' + str(i) + '.self_attn.q_proj.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.linear_layers.0.bias'] - output_model['model.decoder.layers.' + str(i) + '.self_attn.k_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.linear_layers.1.weight'] - output_model['model.decoder.layers.' + str(i) + '.self_attn.k_proj.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.linear_layers.1.bias'] - output_model['model.decoder.layers.' + str(i) + '.self_attn.v_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.linear_layers.2.weight'] - output_model['model.decoder.layers.' + str(i) + '.self_attn.v_proj.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.linear_layers.2.bias'] - - output_model['model.decoder.layers.' + str(i) + '.self_attn.out_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.final_linear.weight'] - output_model['model.decoder.layers.' + str(i) + '.self_attn.out_proj.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.self_attn.final_linear.bias'] - output_model['model.decoder.layers.' + str(i) + '.self_attn_layer_norm.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.layer_norm_1.gamma'] - output_model['model.decoder.layers.' + str(i) + '.self_attn_layer_norm.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.layer_norm_1.beta'] - - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.q_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.context_attn.linear_layers.0.weight'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.q_proj.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.context_attn.linear_layers.0.bias'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.k_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.context_attn.linear_layers.1.weight'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.k_proj.bias'] = \ - input_model['decoder.transformer_decoder.' 
+ str(i) + '.context_attn.linear_layers.1.bias'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.v_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.context_attn.linear_layers.2.weight'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.v_proj.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.context_attn.linear_layers.2.bias'] - - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.out_proj.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.context_attn.final_linear.weight'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn.out_proj.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.context_attn.final_linear.bias'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn_layer_norm.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.layer_norm_2.gamma'] - output_model['model.decoder.layers.' + str(i) + '.encoder_attn_layer_norm.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.layer_norm_2.beta'] - - output_model['model.decoder.layers.' + str(i) + '.fc1.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.feed_forward.linear_1.weight'] - output_model['model.decoder.layers.' + str(i) + '.fc1.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.feed_forward.linear_1.bias'] - output_model['model.decoder.layers.' + str(i) + '.fc2.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.feed_forward.linear_2.weight'] - output_model['model.decoder.layers.' + str(i) + '.fc2.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.feed_forward.linear_2.bias'] - - output_model['model.decoder.layers.' + str(i) + '.final_layer_norm.weight'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.layer_norm_3.gamma'] - output_model['model.decoder.layers.' + str(i) + '.final_layer_norm.bias'] = \ - input_model['decoder.transformer_decoder.' + str(i) + '.layer_norm_3.beta'] - - -def main(): - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument("--input_model_path", type=str, default="models/input_model.pt", - help=".") - parser.add_argument("--output_model_path", type=str, default="models/output_model.bin", - help=".") - parser.add_argument("--layers_num", type=int, default=12, help=".") - parser.add_argument("--decoder_layers_num", type=int, default=6, help=".") - - args = parser.parse_args() - - input_model = torch.load(args.input_model_path) - - output_model = collections.OrderedDict() - - for i in range(2): - output_model["model.encoder.conv.conv_layers." + str(i) + ".weight"] = \ - input_model["embedding.speech.conv.conv_layers." + str(i) + ".0.weight"] - output_model["model.encoder.conv.conv_layers." + str(i) + ".bias"] = \ - input_model["embedding.speech.conv.conv_layers." 
+ str(i) + ".0.bias"] - - output_model['model.decoder.embed_tokens.weight'] = input_model['tgt_embedding.word.embedding.weight'] - - convert_transformer_encoder_from_huggingface_to_tencentpretrain(input_model, output_model, args.layers_num) - convert_transformer_decoder_from_huggingface_to_tencentpretrain(input_model, output_model, args.decoder_layers_num) - output_model['model.encoder.layer_norm.weight'] = input_model['encoder.layer_norm.gamma'] - output_model['model.encoder.layer_norm.bias'] = input_model['encoder.layer_norm.beta'] - output_model['model.decoder.layer_norm.weight'] = input_model['decoder.layer_norm.gamma'] - output_model['model.decoder.layer_norm.bias'] = input_model['decoder.layer_norm.beta'] - - output_model["lm_head.weight"] = input_model['target.lm.output_layer.weight'] - - torch.save(output_model, args.output_model_path) - -if __name__ == "__main__": - main() diff --git a/spaces/t13718236382/web-ui/_next/static/css/aa52c84dc63fe0c2.css b/spaces/t13718236382/web-ui/_next/static/css/aa52c84dc63fe0c2.css deleted file mode 100644 index a5906762da87944d1af55175437321b2b935f7a6..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/_next/static/css/aa52c84dc63fe0c2.css +++ /dev/null @@ -1,21 +0,0 @@ -@font-face{font-family:Inter;font-style:normal;font-weight:400;font-display:swap;src:url(_next/static/media/Inter-Regular.f1f0c35b.woff2) format("woff2"),url(_next/static/media/Inter-Regular.f356e84a.woff) format("woff")}@font-face{font-family:Inter;font-style:normal;font-weight:500;font-display:swap;src:url(_next/static/media/Inter-Medium.dc792b50.woff2) format("woff2"),url(_next/static/media/Inter-Medium.ec7dd2d9.woff) format("woff")}@font-face{font-family:Inter;font-style:normal;font-weight:600;font-display:swap;src:url(_next/static/media/Inter-SemiBold.fcb100c7.woff2) format("woff2"),url(_next/static/media/Inter-SemiBold.55027e47.woff) format("woff")}@font-face{font-family:Inter;font-style:normal;font-weight:700;font-display:swap;src:url(_next/static/media/Inter-Bold.579e0f95.woff2) format("woff2"),url(_next/static/media/Inter-Bold.b1234477.woff) format("woff")}@font-face{font-family:Inter var;font-weight:100 900;font-style:normal;font-named-instance:"Regular";font-display:swap;src:url(_next/static/media/Inter-roman.var.b2129c00.woff2) format("woff2 supports variations(gvar)"),url(_next/static/media/Inter-roman.var.b2129c00.woff2) format("woff2-variations"),url(_next/static/media/Inter-roman.var.b2129c00.woff2) format("woff2")}/* -! 
tailwindcss v3.3.2 | MIT License | https://tailwindcss.com -*/*,:after,:before{box-sizing:border-box;border:0 solid #e5e7eb}:after,:before{--tw-content:""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,pre,samp{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dd,dl,figure,h1,h2,h3,h4,h5,h6,hr,p,pre{margin:0}fieldset{margin:0}fieldset,legend{padding:0}menu,ol,ul{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}[role=button],button{cursor:pointer}:disabled{cursor:default}audio,canvas,embed,iframe,img,object,svg,video{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*{scrollbar-color:auto;scrollbar-width:auto}:root{opacity:.88}*,:after,:before{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: 
}::backdrop{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.container{width:100%}@media (min-width:640px){.container{max-width:640px}}@media (min-width:768px){.container{max-width:768px}}@media (min-width:1024px){.container{max-width:1024px}}@media (min-width:1280px){.container{max-width:1280px}}@media (min-width:1536px){.container{max-width:1536px}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border-width:0}.pointer-events-none{pointer-events:none}.invisible{visibility:hidden}.collapse{visibility:collapse}.fixed{position:fixed}.absolute{position:absolute}.relative{position:relative}.inset-0{inset:0}.inset-y-0{top:0;bottom:0}.left-0{left:0}.right-0{right:0}.right-4{right:1rem}.right-\[-8px\]{right:-8px}.top-4{top:1rem}.top-\[-8px\]{top:-8px}.isolate{isolation:isolate}.z-10{z-index:10}.z-50{z-index:50}.m-5{margin:1.25rem}.-mx-1{margin-left:-.25rem;margin-right:-.25rem}.mx-10{margin-left:2.5rem;margin-right:2.5rem}.mx-3{margin-left:.75rem;margin-right:.75rem}.mx-5{margin-left:1.25rem;margin-right:1.25rem}.mx-auto{margin-left:auto;margin-right:auto}.my-3{margin-top:.75rem;margin-bottom:.75rem}.my-5{margin-top:1.25rem;margin-bottom:1.25rem}.my-8{margin-top:2rem;margin-bottom:2rem}.-ml-px{margin-left:-1px}.-mr-1{margin-right:-.25rem}.mb-0{margin-bottom:0}.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-3{margin-bottom:.75rem}.mb-4{margin-bottom:1rem}.mb-\[5px\]{margin-bottom:5px}.ml-1{margin-left:.25rem}.ml-2{margin-left:.5rem}.ml-6{margin-left:1.5rem}.ml-auto{margin-left:auto}.mr-1{margin-right:.25rem}.mr-2{margin-right:.5rem}.mr-\[10px\]{margin-right:10px}.mr-\[6px\]{margin-right:6px}.mt-1{margin-top:.25rem}.mt-2{margin-top:.5rem}.mt-3{margin-top:.75rem}.mt-5{margin-top:1.25rem}.mt-\[12px\]{margin-top:12px}.mt-auto{margin-top:auto}.block{display:block}.inline-block{display:inline-block}.inline{display:inline}.flex{display:flex}.inline-flex{display:inline-flex}.grid{display:grid}.hidden{display:none}.h-11{height:2.75rem}.h-4{height:1rem}.h-5{height:1.25rem}.h-6{height:1.5rem}.h-8{height:2rem}.h-9{height:2.25rem}.h-\[18px\]{height:18px}.h-\[1px\]{height:1px}.h-\[250px\]{height:250px}.h-\[400px\]{height:400px}.h-\[45px\]{height:45px}.h-full{height:100%}.h-px{height:1px}.h-screen{height:100vh}.max-h-60{max-height:15rem}.max-h-\[300px\]{max-height:300px}.max-h-full{max-height:100%}.max-h-screen{max-height:100vh}.min-h-\[2rem\]{min-height:2rem}.min-h-\[300px\]{min-height:300px}.min-h-\[400px\]{min-height:400px}.w-1\/6{width:16.666667%}.w-11{width:2.75rem}.w-11\/12
{width:91.666667%}.w-3\/6{width:50%}.w-4{width:1rem}.w-5{width:1.25rem}.w-56{width:14rem}.w-6{width:1.5rem}.w-8{width:2rem}.w-\[1000px\]{width:1000px}.w-\[18px\]{width:18px}.w-\[200px\]{width:200px}.w-\[230px\]{width:230px}.w-\[300px\]{width:300px}.w-\[30px\]{width:30px}.w-\[400px\]{width:400px}.w-\[600px\]{width:600px}.w-\[79px\]{width:79px}.w-\[800px\]{width:800px}.w-fit{width:-moz-fit-content;width:fit-content}.w-full{width:100%}.min-w-0{min-width:0}.min-w-\[150px\]{min-width:150px}.min-w-\[2rem\]{min-width:2rem}.min-w-max{min-width:-moz-max-content;min-width:max-content}.max-w-fit{max-width:-moz-fit-content;max-width:fit-content}.flex-1{flex:1 1 0%}.shrink-0{flex-shrink:0}.grow{flex-grow:1}.origin-top-right{transform-origin:top right}.translate-x-1{--tw-translate-x:0.25rem}.translate-x-1,.translate-x-6{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.translate-x-6{--tw-translate-x:1.5rem}.rotate-180{--tw-rotate:180deg}.rotate-180,.scale-100{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.scale-100{--tw-scale-x:1;--tw-scale-y:1}.scale-95{--tw-scale-x:.95;--tw-scale-y:.95}.scale-95,.transform{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.cursor-default{cursor:default}.cursor-not-allowed{cursor:not-allowed}.cursor-pointer{cursor:pointer}.cursor-wait{cursor:wait}.select-none{-webkit-user-select:none;-moz-user-select:none;user-select:none}.resize-none{resize:none}.auto-rows-fr{grid-auto-rows:minmax(0,1fr)}.grid-cols-1{grid-template-columns:repeat(1,minmax(0,1fr))}.grid-cols-2{grid-template-columns:repeat(2,minmax(0,1fr))}.grid-cols-3{grid-template-columns:repeat(3,minmax(0,1fr))}.grid-cols-\[auto_1fr\]{grid-template-columns:auto 1fr}.flex-row{flex-direction:row}.flex-row-reverse{flex-direction:row-reverse}.flex-col{flex-direction:column}.flex-col-reverse{flex-direction:column-reverse}.flex-wrap{flex-wrap:wrap}.items-start{align-items:flex-start}.items-center{align-items:center}.justify-center{justify-content:center}.justify-between{justify-content:space-between}.gap-1{gap:.25rem}.gap-2{gap:.5rem}.gap-3{gap:.75rem}.gap-4{gap:1rem}.gap-5{gap:1.25rem}.gap-\[10px\]{gap:10px}.gap-\[12px\]{gap:12px}.gap-\[5px\]{gap:5px}.space-x-3>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(.75rem * var(--tw-space-x-reverse));margin-left:calc(.75rem * calc(1 - var(--tw-space-x-reverse)))}.space-x-4>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(1rem * var(--tw-space-x-reverse));margin-left:calc(1rem * calc(1 - var(--tw-space-x-reverse)))}.space-y-2>:not([hidden])~:not([hidden]){--tw-space-y-reverse:0;margin-top:calc(.5rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(.5rem * var(--tw-space-y-reverse))}.space-y-4>:not([hidden])~:not([hidden]){--tw-space-y-reverse:0;margin-top:calc(1rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(1rem * var(--tw-space-y-reverse))}.divide-y>:not([hidden])~:not([hidden]){--tw-divide-y-reverse:0;border-top-width:calc(1px * calc(1 - var(--tw-divide-y-reverse)));border-bottom-width:calc(1px * 
var(--tw-divide-y-reverse))}.divide-gray-100>:not([hidden])~:not([hidden]){--tw-divide-opacity:1;border-color:rgb(243 244 246/var(--tw-divide-opacity))}.self-end{align-self:flex-end}.overflow-auto{overflow:auto}.overflow-hidden{overflow:hidden}.overflow-y-auto{overflow-y:auto}.overflow-x-hidden{overflow-x:hidden}.truncate{overflow:hidden;text-overflow:ellipsis}.truncate,.whitespace-nowrap{white-space:nowrap}.whitespace-pre-wrap{white-space:pre-wrap}.break-all{word-break:break-all}.rounded{border-radius:.25rem}.rounded-2xl{border-radius:1rem}.rounded-3xl{border-radius:1.5rem}.rounded-\[10px\]{border-radius:10px}.rounded-\[15px\]{border-radius:15px}.rounded-\[20px\]{border-radius:20px}.rounded-\[30px\]{border-radius:30px}.rounded-\[6px\]{border-radius:6px}.rounded-full{border-radius:9999px}.rounded-lg{border-radius:.5rem}.rounded-md{border-radius:.375rem}.rounded-sm{border-radius:.125rem}.rounded-xl{border-radius:.75rem}.rounded-b-lg{border-bottom-right-radius:.5rem;border-bottom-left-radius:.5rem}.rounded-l-md{border-top-left-radius:.375rem;border-bottom-left-radius:.375rem}.rounded-r-md{border-top-right-radius:.375rem;border-bottom-right-radius:.375rem}.border{border-width:1px}.border-0{border-width:0}.border-2{border-width:2px}.border-b{border-bottom-width:1px}.border-t{border-top-width:1px}.border-solid{border-style:solid}.border-dashed{border-style:dashed}.border-\[\#ffffff4d\]{border-color:#ffffff4d}.border-gray-200{--tw-border-opacity:1;border-color:rgb(229 231 235/var(--tw-border-opacity))}.border-gray-300{--tw-border-opacity:1;border-color:rgb(209 213 219/var(--tw-border-opacity))}.border-primary-border{--tw-border-opacity:1;border-color:rgb(var(--primary-border)/var(--tw-border-opacity))}.border-b-slate-100{--tw-border-opacity:1;border-bottom-color:rgb(241 245 249/var(--tw-border-opacity))}.bg-\[\#00000014\]{background-color:#00000014}.bg-\[\#e6e7e8\]{--tw-bg-opacity:1;background-color:rgb(230 231 232/var(--tw-bg-opacity))}.bg-black{--tw-bg-opacity:1;background-color:rgb(0 0 0/var(--tw-bg-opacity))}.bg-black\/30{background-color:rgba(0,0,0,.3)}.bg-black\/50{background-color:rgba(0,0,0,.5)}.bg-blue-600{--tw-bg-opacity:1;background-color:rgb(37 99 235/var(--tw-bg-opacity))}.bg-gray-100{--tw-bg-opacity:1;background-color:rgb(243 244 246/var(--tw-bg-opacity))}.bg-gray-200{--tw-bg-opacity:1;background-color:rgb(229 231 235/var(--tw-bg-opacity))}.bg-primary-background{--tw-bg-opacity:1;background-color:rgb(var(--primary-background)/var(--tw-bg-opacity))}.bg-primary-blue{--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}.bg-primary-border{--tw-bg-opacity:1;background-color:rgb(var(--primary-border)/var(--tw-bg-opacity))}.bg-secondary{--tw-bg-opacity:1;background-color:rgb(var(--color-secondary)/var(--tw-bg-opacity))}.bg-slate-100{--tw-bg-opacity:1;background-color:rgb(241 245 249/var(--tw-bg-opacity))}.bg-transparent{background-color:transparent}.bg-violet-500{--tw-bg-opacity:1;background-color:rgb(139 92 246/var(--tw-bg-opacity))}.bg-white{--tw-bg-opacity:1;background-color:rgb(255 255 
255/var(--tw-bg-opacity))}.bg-opacity-20{--tw-bg-opacity:0.2}.bg-opacity-40{--tw-bg-opacity:0.4}.bg-opacity-70{--tw-bg-opacity:0.7}.bg-opacity-90{--tw-bg-opacity:0.9}.object-contain{-o-object-fit:contain;object-fit:contain}.\!p-0{padding:0!important}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-5{padding:1.25rem}.p-6{padding:1.5rem}.p-\[6px\]{padding:6px}.px-1{padding-left:.25rem;padding-right:.25rem}.px-10{padding-left:2.5rem;padding-right:2.5rem}.px-2{padding-left:.5rem;padding-right:.5rem}.px-2\.5{padding-left:.625rem;padding-right:.625rem}.px-3{padding-left:.75rem;padding-right:.75rem}.px-4{padding-left:1rem;padding-right:1rem}.px-5{padding-left:1.25rem;padding-right:1.25rem}.px-6{padding-left:1.5rem;padding-right:1.5rem}.px-\[14px\]{padding-left:14px;padding-right:14px}.px-\[15px\]{padding-left:15px;padding-right:15px}.py-1{padding-top:.25rem;padding-bottom:.25rem}.py-1\.5{padding-top:.375rem;padding-bottom:.375rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.py-3{padding-top:.75rem;padding-bottom:.75rem}.py-4{padding-top:1rem;padding-bottom:1rem}.py-5{padding-top:1.25rem;padding-bottom:1.25rem}.py-6{padding-top:1.5rem;padding-bottom:1.5rem}.py-\[10px\]{padding-top:10px;padding-bottom:10px}.py-\[5px\]{padding-top:5px;padding-bottom:5px}.py-\[6px\]{padding-top:6px;padding-bottom:6px}.pb-10{padding-bottom:2.5rem}.pb-4{padding-bottom:1rem}.pb-\[10px\]{padding-bottom:10px}.pl-3{padding-left:.75rem}.pr-10{padding-right:2.5rem}.pr-2{padding-right:.5rem}.pr-4{padding-right:1rem}.pr-8{padding-right:2rem}.pr-9{padding-right:2.25rem}.pt-2{padding-top:.5rem}.pt-3{padding-top:.75rem}.text-left{text-align:left}.text-center{text-align:center}.text-right{text-align:right}.\!text-base{font-size:1rem!important;line-height:1.5rem!important}.text-base{font-size:1rem;line-height:1.5rem}.text-lg{font-size:1.125rem;line-height:1.75rem}.text-sm{font-size:.875rem;line-height:1.25rem}.text-xs{font-size:.75rem;line-height:1rem}.font-bold{font-weight:700}.font-medium{font-weight:500}.font-normal{font-weight:400}.font-semibold{font-weight:600}.leading-6{line-height:1.5rem}.leading-none{line-height:1}.leading-tight{line-height:1.25}.tracking-widest{letter-spacing:.1em}.text-\[\#303030\]{--tw-text-opacity:1;color:rgb(48 48 48/var(--tw-text-opacity))}.text-gray-400{--tw-text-opacity:1;color:rgb(156 163 175/var(--tw-text-opacity))}.text-gray-600{--tw-text-opacity:1;color:rgb(75 85 99/var(--tw-text-opacity))}.text-gray-800{--tw-text-opacity:1;color:rgb(31 41 55/var(--tw-text-opacity))}.text-gray-900{--tw-text-opacity:1;color:rgb(17 24 39/var(--tw-text-opacity))}.text-indigo-600{--tw-text-opacity:1;color:rgb(79 70 229/var(--tw-text-opacity))}.text-light-text{--tw-text-opacity:1;color:rgb(var(--light-text)/var(--tw-text-opacity))}.text-primary-text{--tw-text-opacity:1;color:rgb(var(--primary-text)/var(--tw-text-opacity))}.text-red-500{--tw-text-opacity:1;color:rgb(239 68 68/var(--tw-text-opacity))}.text-secondary-text{--tw-text-opacity:1;color:rgb(var(--secondary-text)/var(--tw-text-opacity))}.text-slate-500{--tw-text-opacity:1;color:rgb(100 116 139/var(--tw-text-opacity))}.text-slate-700{--tw-text-opacity:1;color:rgb(51 65 85/var(--tw-text-opacity))}.text-slate-900{--tw-text-opacity:1;color:rgb(15 23 42/var(--tw-text-opacity))}.text-violet-200{--tw-text-opacity:1;color:rgb(221 214 254/var(--tw-text-opacity))}.text-violet-400{--tw-text-opacity:1;color:rgb(167 139 250/var(--tw-text-opacity))}.text-white{--tw-text-opacity:1;color:rgb(255 255 
255/var(--tw-text-opacity))}.underline{text-decoration-line:underline}.opacity-0{opacity:0}.opacity-100{opacity:1}.opacity-30{opacity:.3}.opacity-50{opacity:.5}.opacity-70{opacity:.7}.opacity-80{opacity:.8}.shadow-2xl{--tw-shadow:0 25px 50px -12px rgba(0,0,0,.25);--tw-shadow-colored:0 25px 50px -12px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-\[hsl\(206_22\%_7\%_\/_35\%\)_0px_10px_38px_-10px\2c _hsl\(206_22\%_7\%_\/_20\%\)_0px_10px_20px_-15px\]{--tw-shadow:rgba(14,18,22,.35) 0px 10px 38px -10px,rgba(14,18,22,.2) 0px 10px 20px -15px;--tw-shadow-colored:0px 10px 38px -10px var(--tw-shadow-color),0px 10px 20px -15px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-lg{--tw-shadow:0 10px 15px -3px rgba(0,0,0,.1),0 4px 6px -4px rgba(0,0,0,.1);--tw-shadow-colored:0 10px 15px -3px var(--tw-shadow-color),0 4px 6px -4px var(--tw-shadow-color)}.shadow-lg,.shadow-sm{box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-sm{--tw-shadow:0 1px 2px 0 rgba(0,0,0,.05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color)}.outline-none{outline:2px solid transparent;outline-offset:2px}.ring-1{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 #0000)}.ring-inset{--tw-ring-inset:inset}.ring-black{--tw-ring-opacity:1;--tw-ring-color:rgb(0 0 0/var(--tw-ring-opacity))}.ring-gray-300{--tw-ring-opacity:1;--tw-ring-color:rgb(209 213 219/var(--tw-ring-opacity))}.ring-primary-border{--tw-ring-opacity:1;--tw-ring-color:rgb(var(--primary-border)/var(--tw-ring-opacity))}.ring-opacity-5{--tw-ring-opacity:0.05}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.backdrop-blur-sm{--tw-backdrop-blur:blur(4px);-webkit-backdrop-filter:var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia);backdrop-filter:var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) 
var(--tw-backdrop-sepia)}.transition{transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,-webkit-backdrop-filter;transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter,-webkit-backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.transition-opacity{transition-property:opacity;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.duration-100{transition-duration:.1s}.duration-75{transition-duration:75ms}.ease-in{transition-timing-function:cubic-bezier(.4,0,1,1)}.ease-out{transition-timing-function:cubic-bezier(0,0,.2,1)}.will-change-\[transform\2c opacity\]{will-change:transform,opacity}.scrollbar-thin{scrollbar-color:var(--scrollbar-thumb,initial) var(--scrollbar-track,initial)}.scrollbar-thin::-webkit-scrollbar-track{background-color:var(--scrollbar-track);border-radius:var(--scrollbar-track-radius)}.scrollbar-thin::-webkit-scrollbar-track:hover{background-color:var(--scrollbar-track-hover,var(--scrollbar-track))}.scrollbar-thin::-webkit-scrollbar-track:active{background-color:var(--scrollbar-track-active,var(--scrollbar-track-hover,var(--scrollbar-track)))}.scrollbar-thin::-webkit-scrollbar-thumb{background-color:var(--scrollbar-thumb);border-radius:var(--scrollbar-thumb-radius)}.scrollbar-thin::-webkit-scrollbar-thumb:hover{background-color:var(--scrollbar-thumb-hover,var(--scrollbar-thumb))}.scrollbar-thin::-webkit-scrollbar-thumb:active{background-color:var(--scrollbar-thumb-active,var(--scrollbar-thumb-hover,var(--scrollbar-thumb)))}.scrollbar-thin::-webkit-scrollbar-corner{background-color:var(--scrollbar-corner);border-radius:var(--scrollbar-corner-radius)}.scrollbar-thin::-webkit-scrollbar-corner:hover{background-color:var(--scrollbar-corner-hover,var(--scrollbar-corner))}.scrollbar-thin::-webkit-scrollbar-corner:active{background-color:var(--scrollbar-corner-active,var(--scrollbar-corner-hover,var(--scrollbar-corner)))}.scrollbar-thin{scrollbar-width:thin}.scrollbar-thin::-webkit-scrollbar{display:block;width:8px;height:8px}.scrollbar-none{scrollbar-width:none}.scrollbar-none::-webkit-scrollbar{display:none}body,html{font-family:Inter,"system-ui"}@supports(font-variation-settings:normal){body,html{font-family:Inter var,"system-ui"}}body{font-size:100%}:focus-visible{outline:none}:root.light{color-scheme:light;--color-primary-blue:73 135 252;--color-secondary:242 242 242;--color-primary-purple:103 86 189;--primary-background:255 255 255;--primary-text:48 48 48;--secondary-text:128 128 128;--light-text:190 190 190;--primary-border:237 237 237/*! 
- Theme: GitHub - Description: Light theme as seen on github.com - Author: github.com - Maintainer: @Hirse - Updated: 2021-05-15 - - Outdated base version: https://github.com/primer/github-syntax-light - Current colors taken from GitHub's CSS -*/}:root.light pre code.hljs{display:block;overflow-x:auto;padding:1em}:root.light code.hljs{padding:3px 5px}:root.light .hljs{color:#24292e;background:#fff}:root.light .hljs-doctag,:root.light .hljs-keyword,:root.light .hljs-meta .hljs-keyword,:root.light .hljs-template-tag,:root.light .hljs-template-variable,:root.light .hljs-type,:root.light .hljs-variable.language_{color:#d73a49}:root.light .hljs-title,:root.light .hljs-title.class_,:root.light .hljs-title.class_.inherited__,:root.light .hljs-title.function_{color:#6f42c1}:root.light .hljs-attr,:root.light .hljs-attribute,:root.light .hljs-literal,:root.light .hljs-meta,:root.light .hljs-number,:root.light .hljs-operator,:root.light .hljs-selector-attr,:root.light .hljs-selector-class,:root.light .hljs-selector-id,:root.light .hljs-variable{color:#005cc5}:root.light .hljs-meta .hljs-string,:root.light .hljs-regexp,:root.light .hljs-string{color:#032f62}:root.light .hljs-built_in,:root.light .hljs-symbol{color:#e36209}:root.light .hljs-code,:root.light .hljs-comment,:root.light .hljs-formula{color:#6a737d}:root.light .hljs-name,:root.light .hljs-quote,:root.light .hljs-selector-pseudo,:root.light .hljs-selector-tag{color:#22863a}:root.light .hljs-subst{color:#24292e}:root.light .hljs-section{color:#005cc5;font-weight:700}:root.light .hljs-bullet{color:#735c0f}:root.light .hljs-emphasis{color:#24292e;font-style:italic}:root.light .hljs-strong{color:#24292e;font-weight:700}:root.light .hljs-addition{color:#22863a;background-color:#f0fff4}:root.light .hljs-deletion{color:#b31d28;background-color:#ffeef0}:root.dark{color-scheme:dark;--color-primary-blue:50 104 206;--color-secondary:46 46 46;--color-primary-purple:57 41 141;--primary-background:25 25 25;--primary-text:223 223 223;--secondary-text:127 127 127;--light-text:79 79 79;--primary-border:53 53 53/*! 
- Theme: GitHub Dark - Description: Dark theme as seen on github.com - Author: github.com - Maintainer: @Hirse - Updated: 2021-05-15 - - Outdated base version: https://github.com/primer/github-syntax-dark - Current colors taken from GitHub's CSS -*/}:root.dark pre code.hljs{display:block;overflow-x:auto;padding:1em}:root.dark code.hljs{padding:3px 5px}:root.dark .hljs{color:#c9d1d9;background:#0d1117}:root.dark .hljs-doctag,:root.dark .hljs-keyword,:root.dark .hljs-meta .hljs-keyword,:root.dark .hljs-template-tag,:root.dark .hljs-template-variable,:root.dark .hljs-type,:root.dark .hljs-variable.language_{color:#ff7b72}:root.dark .hljs-title,:root.dark .hljs-title.class_,:root.dark .hljs-title.class_.inherited__,:root.dark .hljs-title.function_{color:#d2a8ff}:root.dark .hljs-attr,:root.dark .hljs-attribute,:root.dark .hljs-literal,:root.dark .hljs-meta,:root.dark .hljs-number,:root.dark .hljs-operator,:root.dark .hljs-selector-attr,:root.dark .hljs-selector-class,:root.dark .hljs-selector-id,:root.dark .hljs-variable{color:#79c0ff}:root.dark .hljs-meta .hljs-string,:root.dark .hljs-regexp,:root.dark .hljs-string{color:#a5d6ff}:root.dark .hljs-built_in,:root.dark .hljs-symbol{color:#ffa657}:root.dark .hljs-code,:root.dark .hljs-comment,:root.dark .hljs-formula{color:#8b949e}:root.dark .hljs-name,:root.dark .hljs-quote,:root.dark .hljs-selector-pseudo,:root.dark .hljs-selector-tag{color:#7ee787}:root.dark .hljs-subst{color:#c9d1d9}:root.dark .hljs-section{color:#1f6feb;font-weight:700}:root.dark .hljs-bullet{color:#f2cc60}:root.dark .hljs-emphasis{color:#c9d1d9;font-style:italic}:root.dark .hljs-strong{color:#c9d1d9;font-weight:700}:root.dark .hljs-addition{color:#aff5b4;background-color:#033a16}:root.dark .hljs-deletion{color:#ffdcd7;background-color:#67060c}.placeholder\:text-gray-400::-moz-placeholder{--tw-text-opacity:1;color:rgb(156 163 175/var(--tw-text-opacity))}.placeholder\:text-gray-400::placeholder{--tw-text-opacity:1;color:rgb(156 163 175/var(--tw-text-opacity))}.placeholder\:text-slate-400::-moz-placeholder{--tw-text-opacity:1;color:rgb(148 163 184/var(--tw-text-opacity))}.placeholder\:text-slate-400::placeholder{--tw-text-opacity:1;color:rgb(148 163 184/var(--tw-text-opacity))}.group:hover .group-hover\:visible{visibility:visible}.group:hover .group-hover\:block{display:block}.aria-selected\:bg-slate-100[aria-selected=true]{--tw-bg-opacity:1;background-color:rgb(241 245 249/var(--tw-bg-opacity))}.data-\[disabled\]\:pointer-events-none[data-disabled]{pointer-events:none}@keyframes slideUpAndFade{0%{opacity:0;transform:translateY(2px)}to{opacity:1;transform:translateY(0)}}.data-\[state\=delayed-open\]\:data-\[side\=bottom\]\:animate-slideUpAndFade[data-side=bottom][data-state=delayed-open]{animation:slideUpAndFade .4s cubic-bezier(.16,1,.3,1)}@keyframes slideRightAndFade{0%{opacity:0;transform:translateX(2px)}to{opacity:1;transform:translateX(0)}}.data-\[state\=delayed-open\]\:data-\[side\=left\]\:animate-slideRightAndFade[data-side=left][data-state=delayed-open]{animation:slideRightAndFade .4s cubic-bezier(.16,1,.3,1)}@keyframes slideLeftAndFade{0%{opacity:0;transform:translateX(2px)}to{opacity:1;transform:translateX(0)}}.data-\[state\=delayed-open\]\:data-\[side\=right\]\:animate-slideLeftAndFade[data-side=right][data-state=delayed-open]{animation:slideLeftAndFade .4s cubic-bezier(.16,1,.3,1)}@keyframes 
slideDownAndFade{0%{opacity:0;transform:translateY(-2px)}to{opacity:1;transform:translateY(0)}}.data-\[state\=delayed-open\]\:data-\[side\=top\]\:animate-slideDownAndFade[data-side=top][data-state=delayed-open]{animation:slideDownAndFade .4s cubic-bezier(.16,1,.3,1)}.data-\[state\=open\]\:bg-slate-100[data-state=open]{--tw-bg-opacity:1;background-color:rgb(241 245 249/var(--tw-bg-opacity))}.data-\[disabled\]\:opacity-50[data-disabled]{opacity:.5}.ui-active\:bg-primary-blue[data-headlessui-state~=active]{--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}.ui-active\:text-white[data-headlessui-state~=active]{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}:where([data-headlessui-state~=active]) .ui-active\:bg-primary-blue{--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}:where([data-headlessui-state~=active]) .ui-active\:text-white{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}.ui-not-active\:text-secondary-text[data-headlessui-state]:not([data-headlessui-state~=active]){--tw-text-opacity:1;color:rgb(var(--secondary-text)/var(--tw-text-opacity))}:where([data-headlessui-state]:not([data-headlessui-state~=active])) .ui-not-active\:text-secondary-text:not([data-headlessui-state]){--tw-text-opacity:1;color:rgb(var(--secondary-text)/var(--tw-text-opacity))}.hover\:border-gray-400:hover{--tw-border-opacity:1;border-color:rgb(156 163 175/var(--tw-border-opacity))}.hover\:bg-gray-50:hover{--tw-bg-opacity:1;background-color:rgb(249 250 251/var(--tw-bg-opacity))}.hover\:bg-opacity-100:hover{--tw-bg-opacity:1}.hover\:text-primary-text:hover{--tw-text-opacity:1;color:rgb(var(--primary-text)/var(--tw-text-opacity))}.hover\:text-violet-100:hover{--tw-text-opacity:1;color:rgb(237 233 254/var(--tw-text-opacity))}.hover\:opacity-100:hover{opacity:1}.hover\:opacity-80:hover{opacity:.8}.focus\:z-10:focus{z-index:10}.focus\:outline-none:focus{outline:2px solid transparent;outline-offset:2px}.focus\:ring-2:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 #0000)}.focus\:ring-inset:focus{--tw-ring-inset:inset}.focus\:ring-indigo-600:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(79 70 229/var(--tw-ring-opacity))}.focus\:ring-slate-400:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(148 163 184/var(--tw-ring-opacity))}.focus\:ring-offset-2:focus{--tw-ring-offset-width:2px}.focus-visible\:ring-2:focus-visible{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 #0000)}.focus-visible\:ring-white:focus-visible{--tw-ring-opacity:1;--tw-ring-color:rgb(255 255 255/var(--tw-ring-opacity))}.focus-visible\:ring-opacity-75:focus-visible{--tw-ring-opacity:0.75}.disabled\:pointer-events-none:disabled{pointer-events:none}.disabled\:cursor-not-allowed:disabled{cursor:not-allowed}.disabled\:opacity-50:disabled{opacity:.5}:is(.dark .dark\:border-gray-500){--tw-border-opacity:1;border-color:rgb(107 114 128/var(--tw-border-opacity))}:is(.dark .dark\:border-b-slate-700){--tw-border-opacity:1;border-bottom-color:rgb(51 65 
85/var(--tw-border-opacity))}:is(.dark .dark\:bg-\[\#444a5354\]){background-color:#444a5354}:is(.dark .dark\:bg-\[\#ffffff26\]){background-color:#ffffff26}:is(.dark .dark\:bg-gray-600){--tw-bg-opacity:1;background-color:rgb(75 85 99/var(--tw-bg-opacity))}:is(.dark .dark\:bg-primary-blue){--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}:is(.dark .dark\:bg-slate-700){--tw-bg-opacity:1;background-color:rgb(51 65 85/var(--tw-bg-opacity))}:is(.dark .dark\:bg-slate-800){--tw-bg-opacity:1;background-color:rgb(30 41 59/var(--tw-bg-opacity))}:is(.dark .dark\:bg-slate-900){--tw-bg-opacity:1;background-color:rgb(15 23 42/var(--tw-bg-opacity))}:is(.dark .dark\:text-gray-100){--tw-text-opacity:1;color:rgb(243 244 246/var(--tw-text-opacity))}:is(.dark .dark\:text-gray-300){--tw-text-opacity:1;color:rgb(209 213 219/var(--tw-text-opacity))}:is(.dark .dark\:text-primary-text){--tw-text-opacity:1;color:rgb(var(--primary-text)/var(--tw-text-opacity))}:is(.dark .dark\:text-slate-400){--tw-text-opacity:1;color:rgb(148 163 184/var(--tw-text-opacity))}:is(.dark .dark\:text-slate-50){--tw-text-opacity:1;color:rgb(248 250 252/var(--tw-text-opacity))}:is(.dark .dark\:aria-selected\:bg-slate-700[aria-selected=true]){--tw-bg-opacity:1;background-color:rgb(51 65 85/var(--tw-bg-opacity))}:is(.dark .dark\:data-\[state\=open\]\:bg-slate-800[data-state=open]){--tw-bg-opacity:1;background-color:rgb(30 41 59/var(--tw-bg-opacity))}:is(.dark .dark\:focus\:ring-slate-400:focus){--tw-ring-opacity:1;--tw-ring-color:rgb(148 163 184/var(--tw-ring-opacity))}:is(.dark .dark\:focus\:ring-offset-slate-900:focus){--tw-ring-offset-color:#0f172a}@media (min-width:640px){.sm\:flex{display:flex}.sm\:max-w-lg{max-width:32rem}.sm\:grid-cols-2{grid-template-columns:repeat(2,minmax(0,1fr))}.sm\:flex-row{flex-direction:row}.sm\:items-center{align-items:center}.sm\:justify-end{justify-content:flex-end}.sm\:space-x-2>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(.5rem * var(--tw-space-x-reverse));margin-left:calc(.5rem * calc(1 - var(--tw-space-x-reverse)))}.sm\:space-x-3>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(.75rem * var(--tw-space-x-reverse));margin-left:calc(.75rem * calc(1 - var(--tw-space-x-reverse)))}.sm\:space-y-0>:not([hidden])~:not([hidden]){--tw-space-y-reverse:0;margin-top:calc(0px * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(0px * var(--tw-space-y-reverse))}.sm\:rounded-lg{border-radius:.5rem}.sm\:text-left{text-align:left}.sm\:text-sm{font-size:.875rem;line-height:1.25rem}.sm\:leading-6{line-height:1.5rem}}.\[\&_\[cmdk-group-heading\]\]\:px-2 [cmdk-group-heading]{padding-left:.5rem;padding-right:.5rem}.\[\&_\[cmdk-group-heading\]\]\:pb-1\.5 [cmdk-group-heading]{padding-bottom:.375rem}.\[\&_\[cmdk-group-heading\]\]\:text-sm [cmdk-group-heading]{font-size:.875rem;line-height:1.25rem}.\[\&_\[cmdk-group-heading\]\]\:font-medium [cmdk-group-heading]{font-weight:500}.\[\&_\[cmdk-group-heading\]\]\:font-semibold [cmdk-group-heading]{font-weight:600}.\[\&_\[cmdk-group-heading\]\]\:text-slate-500 [cmdk-group-heading]{--tw-text-opacity:1;color:rgb(100 116 139/var(--tw-text-opacity))}.\[\&_\[cmdk-group-heading\]\]\:text-slate-900 [cmdk-group-heading]{--tw-text-opacity:1;color:rgb(15 23 42/var(--tw-text-opacity))}:is(.dark .\[\&_\[cmdk-group-heading\]\]\:dark\:text-slate-300) [cmdk-group-heading]{--tw-text-opacity:1;color:rgb(203 213 225/var(--tw-text-opacity))}.\[\&_\[cmdk-group\]\]\:px-2 
[cmdk-group]{padding-left:.5rem;padding-right:.5rem}.\[\&_\[cmdk-input-wrapper\]_svg\]\:h-5 [cmdk-input-wrapper] svg{height:1.25rem}.\[\&_\[cmdk-input-wrapper\]_svg\]\:w-5 [cmdk-input-wrapper] svg{width:1.25rem}.\[\&_\[cmdk-input\]\]\:h-12 [cmdk-input]{height:3rem}.\[\&_\[cmdk-item\]\]\:px-2 [cmdk-item]{padding-left:.5rem;padding-right:.5rem}.\[\&_\[cmdk-item\]\]\:py-3 [cmdk-item]{padding-top:.75rem;padding-bottom:.75rem}.\[\&_\[cmdk-item\]_svg\]\:h-5 [cmdk-item] svg{height:1.25rem}.\[\&_\[cmdk-item\]_svg\]\:w-5 [cmdk-item] svg{width:1.25rem}.\[\&_\[dialog-overlay\]\]\:bg-red-100 [dialog-overlay]{--tw-bg-opacity:1;background-color:rgb(254 226 226/var(--tw-bg-opacity))} \ No newline at end of file diff --git a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/model/base/correlation.py b/spaces/taesiri/ConvolutionalHoughMatchingNetworks/model/base/correlation.py deleted file mode 100644 index 024fc9eb717f2564562dcc0e776eec1ed7d6667d..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/model/base/correlation.py +++ /dev/null @@ -1,68 +0,0 @@ -r""" Provides functions that creates/manipulates correlation matrices """ - -import math - -from torch.nn.functional import interpolate as resize -import torch - -from .geometry import Geometry - - -class Correlation: - - @classmethod - def mutual_nn_filter(cls, correlation_matrix, eps=1e-30): - r""" Mutual nearest neighbor filtering (Rocco et al. NeurIPS'18 )""" - corr_src_max = torch.max(correlation_matrix, dim=2, keepdim=True)[0] - corr_trg_max = torch.max(correlation_matrix, dim=1, keepdim=True)[0] - corr_src_max[corr_src_max == 0] += eps - corr_trg_max[corr_trg_max == 0] += eps - - corr_src = correlation_matrix / corr_src_max - corr_trg = correlation_matrix / corr_trg_max - - return correlation_matrix * (corr_src * corr_trg) - - @classmethod - def build_correlation6d(self, src_feat, trg_feat, scales, conv2ds): - r""" Build 6-dimensional correlation tensor """ - - bsz, _, side, side = src_feat.size() - - # Construct feature pairs with multiple scales - _src_feats = [] - _trg_feats = [] - for scale, conv in zip(scales, conv2ds): - s = (round(side * math.sqrt(scale)),) * 2 - _src_feat = conv(resize(src_feat, s, mode='bilinear', align_corners=True)) - _trg_feat = conv(resize(trg_feat, s, mode='bilinear', align_corners=True)) - _src_feats.append(_src_feat) - _trg_feats.append(_trg_feat) - - # Build multiple 4-dimensional correlation tensor - corr6d = [] - for src_feat in _src_feats: - ch = src_feat.size(1) - - src_side = src_feat.size(-1) - src_feat = src_feat.view(bsz, ch, -1).transpose(1, 2) - src_norm = src_feat.norm(p=2, dim=2, keepdim=True) - - for trg_feat in _trg_feats: - trg_side = trg_feat.size(-1) - trg_feat = trg_feat.view(bsz, ch, -1) - trg_norm = trg_feat.norm(p=2, dim=1, keepdim=True) - - correlation = torch.bmm(src_feat, trg_feat) / torch.bmm(src_norm, trg_norm) - correlation = correlation.view(bsz, src_side, src_side, trg_side, trg_side).contiguous() - corr6d.append(correlation) - - # Resize the spatial sizes of the 4D tensors to the same size - for idx, correlation in enumerate(corr6d): - corr6d[idx] = Geometry.interpolate4d(correlation, [side, side]) - - # Build 6-dimensional correlation tensor - corr6d = torch.stack(corr6d).view(len(scales), len(scales), - bsz, side, side, side, side).permute(2, 0, 1, 3, 4, 5, 6) - return corr6d.clamp(min=0) - diff --git a/spaces/takanabe/space-demo-andite-anything-v4.0/app.py b/spaces/takanabe/space-demo-andite-anything-v4.0/app.py deleted file mode 100644 
index 47a2051db6dadeea03edf70d62694fd3e5e88ba7..0000000000000000000000000000000000000000 --- a/spaces/takanabe/space-demo-andite-anything-v4.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/andite/anything-v4.0").launch() \ No newline at end of file diff --git a/spaces/tennant/MUG_caption/model.py b/spaces/tennant/MUG_caption/model.py deleted file mode 100644 index e893ba234dfbae294e6c5e0c0a11d399b0b02309..0000000000000000000000000000000000000000 --- a/spaces/tennant/MUG_caption/model.py +++ /dev/null @@ -1,840 +0,0 @@ -# -------------------------------------------------------- -# References: -# timm: https://github.com/rwightman/pytorch-image-models/tree/master/timm -# DeiT: https://github.com/facebookresearch/deit -# -------------------------------------------------------- - -from functools import partial - -import torch -from torch._C import Value -import torch.nn as nn -import numpy as np - -from timm.models.vision_transformer import PatchEmbed, Block -from transformers import EncoderDecoderModel, BertTokenizer, AutoTokenizer - - -from torch import einsum, nn -import torch.nn.functional as F -from einops import rearrange, repeat - -import torch -import torch.nn as nn -import torch.nn.functional as F - -class FocalLoss(nn.CrossEntropyLoss): - ''' Focal loss for classification tasks on imbalanced datasets ''' - - def __init__(self, gamma=1.0, alpha=None, ignore_index=-100, reduction='none'): - super().__init__(weight=alpha, ignore_index=ignore_index, reduction='none') - self.reduction = reduction - self.gamma = gamma - - def forward(self, input_, target): - cross_entropy = super().forward(input_, target) - # Temporarily mask out ignore index to '0' for valid gather-indices input. - # This won't contribute final loss as the cross_entropy contribution - # for these would be zero. 
- target = target * (target != self.ignore_index).long() - input_prob = torch.gather(F.softmax(input_, 1), 1, target.unsqueeze(1)).squeeze(1) - loss = torch.pow(1 - input_prob, self.gamma) * cross_entropy - return torch.mean(loss) if self.reduction == 'mean' \ - else torch.sum(loss) if self.reduction == 'sum' \ - else loss - - -# helper functions - -import math -from functools import reduce - -def prob_mask_like(t, prob): - return torch.zeros_like(t).float().uniform_(0, 1) < prob - -def mask_with_tokens(t, token_ids): - init_no_mask = torch.full_like(t, False, dtype=torch.bool) - mask = reduce(lambda acc, el: acc | (t == el), token_ids, init_no_mask) - return mask - -def get_mask_subset_with_prob(mask, prob): - batch, seq_len, device = *mask.shape, mask.device - max_masked = math.ceil(prob * seq_len) - - num_tokens = mask.sum(dim=-1, keepdim=True) - mask_excess = (mask.cumsum(dim=-1) > (num_tokens * prob).ceil()) - mask_excess = mask_excess[:, :max_masked] - - rand = torch.rand((batch, seq_len), device=device).masked_fill(~mask, -1e9) - _, sampled_indices = rand.topk(max_masked, dim=-1) - sampled_indices = (sampled_indices + 1).masked_fill_(mask_excess, 0) - - new_mask = torch.zeros((batch, seq_len + 1), device=device) - new_mask.scatter_(-1, sampled_indices, 1) - return new_mask[:, 1:].bool() - - -def exists(val): - return val is not None - -def default(val, d): - return val if exists(val) else d - -# normalization -# they use layernorm without bias, something that pytorch does not offer - - -class LayerNorm(nn.Module): - def __init__(self, dim): - super().__init__() - self.gamma = nn.Parameter(torch.ones(dim)) - self.register_buffer("beta", torch.zeros(dim)) - - def forward(self, x): - return F.layer_norm(x, x.shape[-1:], self.gamma, self.beta) - -# residual -class Residual(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - - def forward(self, x, *args, **kwargs): - return self.fn(x, *args, **kwargs) + x - -# rotary positional embedding -# https://arxiv.org/abs/2104.09864 -class RotaryEmbedding(nn.Module): - def __init__(self, dim): - super().__init__() - inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer("inv_freq", inv_freq) - - def forward(self, max_seq_len, *, device): - seq = torch.arange(max_seq_len, device=device, dtype=self.inv_freq.dtype) - freqs = einsum("i , j -> i j", seq, self.inv_freq) - return torch.cat((freqs, freqs), dim=-1) - - -def rotate_half(x): - x = rearrange(x, "... (j d) -> ... 
j d", j=2) - x1, x2 = x.unbind(dim=-2) - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(pos, t): - return (t * pos.cos()) + (rotate_half(t) * pos.sin()) - - -# classic Noam Shazeer paper, except here they use SwiGLU instead of the more popular GELU for gating the feedforward -# https://arxiv.org/abs/2002.05202 -class SwiGLU(nn.Module): - def forward(self, x): - x, gate = x.chunk(2, dim=-1) - return F.silu(gate) * x - - -# parallel attention and feedforward with residual -# discovered by Wang et al + EleutherAI from GPT-J fame -class ParallelTransformerBlock(nn.Module): - def __init__(self, dim, dim_head=64, heads=8, ff_mult=4, attn_drop_rate=0.0): - super().__init__() - self.norm = LayerNorm(dim) - - attn_inner_dim = dim_head * heads - ff_inner_dim = dim * ff_mult - self.fused_dims = (attn_inner_dim, dim_head, dim_head, (ff_inner_dim * 2)) - - self.heads = heads - self.scale = dim_head**-0.5 - self.rotary_emb = RotaryEmbedding(dim_head) - - self.fused_attn_ff_proj = nn.Linear(dim, sum(self.fused_dims), bias=False) - self.attn_out = nn.Linear(attn_inner_dim, dim, bias=False) - - self.ff_out = nn.Sequential( - SwiGLU(), - nn.Linear(ff_inner_dim, dim, bias=False) - ) - - self.attn_drop_rate = attn_drop_rate - - # for caching causal mask and rotary embeddings - - self.register_buffer("mask", None, persistent=False) - self.register_buffer("pos_emb", None, persistent=False) - - def get_mask(self, n, device): - if self.mask is not None and self.mask.shape[-1] >= n: - return self.mask[:n, :n] - - mask = torch.ones((n, n), device=device, dtype=torch.bool).triu(1) - self.register_buffer("mask", mask, persistent=False) - return mask - - def get_rotary_embedding(self, n, device): - if self.pos_emb is not None and self.pos_emb.shape[-2] >= n: - return self.pos_emb[:n] - - pos_emb = self.rotary_emb(n, device=device) - self.register_buffer("pos_emb", pos_emb, persistent=False) - return pos_emb - - def forward(self, x, attn_mask=None): - """ - Performs self attention and feedforward - einstein notation - b - batch - h - heads - n, i, j - sequence length (base sequence length, source, target) - d - feature dimension - """ - - n, device, h = x.shape[1], x.device, self.heads - # pre layernorm - x = self.norm(x) - # attention queries, keys, values, and feedforward inner - q, k, v, ff = self.fused_attn_ff_proj(x).split(self.fused_dims, dim=-1) - - # split heads - # they use multi-query single-key-value attention, yet another Noam Shazeer paper - # they found no performance loss past a certain scale, and more efficient decoding obviously - # https://arxiv.org/abs/1911.02150 - q = rearrange(q, "b n (h d) -> b h n d", h=h) - # rotary embeddings - positions = self.get_rotary_embedding(n, device) - q, k = map(lambda t: apply_rotary_pos_emb(positions, t), (q, k)) - # scale - q = q * self.scale - # similarity - sim = einsum("b h i d, b j d -> b h i j", q, k) - # causal mask - causal_mask = self.get_mask(n, device) - sim = sim.masked_fill(causal_mask, -torch.finfo(sim.dtype).max) - - # extra attention mask - for masking out attention from text CLS token to padding - if exists(attn_mask): - attn_mask = rearrange(attn_mask, 'b i j -> b 1 i j') - sim = sim.masked_fill(~attn_mask, -torch.finfo(sim.dtype).max) - - if self.attn_drop_rate != 0.: - # import ipdb; ipdb.set_trace() - drop_ind = sim != -torch.finfo(sim.dtype).max - dropout_mask = torch.cuda.FloatTensor(*sim[drop_ind].shape).uniform_() > self.attn_drop_rate - sim[drop_ind] = sim[drop_ind].masked_fill(~dropout_mask, -torch.finfo(sim.dtype).max) 
- - # attention - sim = sim - sim.amax(dim=-1, keepdim=True).detach() - attn = sim.softmax(dim=-1) - # aggregate values - out = einsum("b h i j, b j d -> b h i d", attn, v) - # merge heads - out = rearrange(out, "b h n d -> b n (h d)") - return self.attn_out(out) + self.ff_out(ff) - -# cross attention - using multi-query + one-headed key / values as in PaLM w/ optional parallel feedforward -class CrossAttention(nn.Module): - def __init__( - self, - dim, - *, - context_dim=None, - dim_head=64, - heads=8, - parallel_ff=False, - ff_mult=4, - norm_context=False, - dropout=0.0, - ): - super().__init__() - self.heads = heads - self.scale = dim_head ** -0.5 - inner_dim = heads * dim_head - context_dim = default(context_dim, dim) - - self.norm = LayerNorm(dim) - self.context_norm = LayerNorm(context_dim) if norm_context else nn.Identity() - - self.to_q = nn.Linear(dim, inner_dim, bias=False) - self.to_kv = nn.Linear(context_dim, dim_head * 2, bias=False) - self.to_out = nn.Linear(inner_dim, dim, bias=False) - - self.dropout = dropout - - # whether to have parallel feedforward - ff_inner_dim = ff_mult * dim - - self.ff = nn.Sequential( - nn.Linear(dim, ff_inner_dim * 2, bias=False), - SwiGLU(), - nn.Linear(ff_inner_dim, dim, bias=False) - ) if parallel_ff else None - - def forward(self, x, context): - """ - Use text and query, and image as kv - einstein notation - b - batch - h - heads - n, i, j - sequence length (base sequence length, source, target) - d - feature dimension - """ - - # pre-layernorm, for queries and context - x = self.norm(x) - context = self.context_norm(context) - # get queries - q = self.to_q(x) - q = rearrange(q, 'b n (h d) -> b h n d', h = self.heads) - # scale - q = q * self.scale - # get key / values - k, v = self.to_kv(context).chunk(2, dim=-1) - # query / key similarity - sim = einsum('b h i d, b j d -> b h i j', q, k) - - # dropout - if self.training: - dropout_mask = torch.cuda.FloatTensor(*sim.shape).uniform_() > self.dropout - sim = sim.masked_fill(~dropout_mask, -torch.finfo(sim.dtype).max) - - # attention - sim = sim - sim.amax(dim=-1, keepdim=True) - attn = sim.softmax(dim=-1) - # aggregate - out = einsum('b h i j, b j d -> b h i d', attn, v) - # merge and combine heads - out = rearrange(out, 'b h n d -> b n (h d)') - out = self.to_out(out) - # add parallel feedforward (for multimodal layers) - if exists(self.ff): - out = out + self.ff(x) - return out - - - -def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False): - """ - grid_size: int of the grid height and width - return: - pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token) - """ - grid_h = np.arange(grid_size, dtype=np.float32) - grid_w = np.arange(grid_size, dtype=np.float32) - grid = np.meshgrid(grid_w, grid_h) # here w goes first - grid = np.stack(grid, axis=0) - - grid = grid.reshape([2, 1, grid_size, grid_size]) - pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid) - if cls_token: - pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0) - return pos_embed - -def get_2d_sincos_pos_embed_from_grid(embed_dim, grid): - assert embed_dim % 2 == 0 - - # use half of dimensions to encode grid_h - emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2) - emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2) - - emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D) - return emb - -def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): - """ - embed_dim: output dimension for 
each position - pos: a list of positions to be encoded: size (M,) - out: (M, D) - """ - assert embed_dim % 2 == 0 - omega = np.arange(embed_dim // 2, dtype=np.float32) - omega /= embed_dim / 2. - omega = 1. / 10000**omega # (D/2,) - - pos = pos.reshape(-1) # (M,) - out = np.einsum('m,d->md', pos, omega) # (M, D/2), outer product - - emb_sin = np.sin(out) # (M, D/2) - emb_cos = np.cos(out) # (M, D/2) - - emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D) - return emb - -class MaskedAutoencoderViT(nn.Module): - """ Masked Autoencoder with VisionTransformer backbone - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, - embed_dim=1024, depth=24, num_heads=16, - decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16, - mlp_ratio=4., norm_layer=nn.LayerNorm, norm_pix_loss=True, - unimodal_depth=2, multimodal_depth=8, dim_head=64,heads=8, - ff_mult=4, extract_multi_level=False, use_focal_loss=False, focal_gamma=1.0, - less_u=False, use_weak_negative=False, use_label_smooth=False, ls_coef=0.1, - use_maximum_entropy=False, ce_additional=False, use_word_weights=False, use_token_pos=False, - use_expect_k=False, use_top_k=False, mae_decoder_caption=False, decoder_slot_depth=2, disable_decoder_vis_token_grad=False, - cross_attn_dropout=0.0, predict_next_k_words=False, next_k=3, masked_text=False, masked_text_ratio=0.25, text_length=70, - projector_layer=0, uni_dim=1024, uni_dim_head=64, uni_heads=8, uni_ff_mult=4, text_drop_attn=0.): - super().__init__() - - # -------------------------------------------------------------------------- - # MAE encoder specifics - self.patch_embed = PatchEmbed(img_size, patch_size, in_chans, embed_dim) - num_patches = self.patch_embed.num_patches - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim), requires_grad=False) # fixed sin-cos embedding - - self.blocks = nn.ModuleList([ - Block(embed_dim, num_heads, mlp_ratio, qkv_bias=True, norm_layer=norm_layer) - for i in range(depth)]) - self.norm = norm_layer(embed_dim) - # -------------------------------------------------------------------------- - - # -------------------------------------------------------------------------- - # MAE decoder specifics - self.decoder_embed = nn.Linear(embed_dim, decoder_embed_dim, bias=True) - - self.mask_token = nn.Parameter(torch.zeros(1, 1, decoder_embed_dim)) - - self.decoder_pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, decoder_embed_dim), requires_grad=False) # fixed sin-cos embedding - - self.mae_decoder_depth = decoder_depth - self.mae_decoder_caption = mae_decoder_caption - self.decoder_blocks = nn.ModuleList([ - Block(decoder_embed_dim, decoder_num_heads, mlp_ratio, qkv_bias=True, norm_layer=norm_layer) - for i in range(decoder_depth)]) - - if self.mae_decoder_caption: - - self.decoder_slot_layers = nn.ModuleList([]) - for _ in range(decoder_slot_depth): - self.decoder_slot_layers.append( - Residual(CrossAttention(dim=decoder_embed_dim, dim_head=dim_head, heads=heads, parallel_ff=True, ff_mult=ff_mult,)), - # Residual(CrossAttention(dim=decoder_embed_dim, dim_head=dim_head, heads=heads, parallel_ff=True, ff_mult=ff_mult,)) - ) - self.decoder_caption_proj = nn.Linear(decoder_embed_dim, embed_dim) - self.disable_decoder_vis_token_grad = disable_decoder_vis_token_grad - - self.decoder_norm = norm_layer(decoder_embed_dim) - self.decoder_pred = nn.Linear(decoder_embed_dim, patch_size**2 * in_chans, bias=True) # encoder to decoder - # 
-------------------------------------------------------------------------- - - self.norm_pix_loss = norm_pix_loss - - # -------------------------------------------------------------------------- - # captioner specifics - # unimodal layer is for text tokens. - # multimodal layer is for text to query from image. - self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", ) - - # token embeddings - # NOTE: +1 for mask token used by MLM objective - # self.token_emb = nn.Embedding(len(self.tokenizer.vocab) + 1, uni_dim) - - self.token_emb = nn.Embedding(len(self.tokenizer.vocab), uni_dim) - self.text_cls_token = nn.Parameter(torch.randn(uni_dim)) - - self.embed_dim = embed_dim - self.uni_dim = uni_dim - - #import ipdb; ipdb.set_trace() - # unimodal layers - # TODO: search on the four parameters - # uni_dim=1024, uni_dim_head=64, uni_heads=8, uni_ff_mult=4 - self.text_drop_attn = text_drop_attn - self.unimodal_layers = nn.ModuleList([]) - for _ in range(unimodal_depth): - self.unimodal_layers.append( - Residual(ParallelTransformerBlock(dim=uni_dim, dim_head=uni_dim_head, - heads=uni_heads, ff_mult=uni_ff_mult, attn_drop_rate=self.text_drop_attn)), - ) - - self.need_uni_2_mul_proj = False - if uni_dim != embed_dim: - self.need_uni_2_mul_proj = True - self.uni_2_mul_proj = nn.Linear(uni_dim, embed_dim) - - # multimodal layers - self.multimodal_layers = nn.ModuleList([]) - self.less_u = less_u - if less_u: - for _ in range(multimodal_depth): - self.multimodal_layers.append(nn.ModuleList([ - Residual(CrossAttention(dim=embed_dim, dim_head=dim_head, heads=heads, parallel_ff=True, ff_mult=ff_mult, dropout=cross_attn_dropout)), - Residual(CrossAttention(dim=embed_dim, dim_head=dim_head, heads=heads, parallel_ff=True, ff_mult=ff_mult, dropout=cross_attn_dropout)) - ])) - else: - for _ in range(multimodal_depth): - self.multimodal_layers.append(nn.ModuleList([ - Residual(ParallelTransformerBlock(dim=embed_dim, dim_head=dim_head, heads=heads, ff_mult=ff_mult)), - Residual(CrossAttention(dim=embed_dim, dim_head=dim_head, heads=heads, parallel_ff=True, ff_mult=ff_mult, dropout=cross_attn_dropout)) - ])) - - # to logits: for softmax caption loss - self.to_logits = nn.Sequential( - LayerNorm(embed_dim), - nn.Linear(embed_dim, len(self.tokenizer.vocab), bias=False) - ) - - self.ce_additional = ce_additional - if ce_additional: - # to logits: for other losses - self.to_logits_1 = nn.Sequential( - LayerNorm(embed_dim), - nn.Linear(embed_dim, len(self.tokenizer.vocab), bias=False) - ) - - nn.init.normal_(self.token_emb.weight, std=0.02) - - self.pad_id = 0 - self.cls_id = 101 - self.sep_id = 102 - - self.logsoftmax = nn.LogSoftmax(dim=1) - - self.extract_multi_level = extract_multi_level - if self.extract_multi_level: - self.projectors = nn.ModuleList([nn.Sequential( - nn.Linear(embed_dim, embed_dim // 2), - nn.GELU(), - nn.Linear(embed_dim // 2, embed_dim), - norm_layer(embed_dim) - ) for _ in [2, 5, 8,]]) - # -------------------------------------------------------------------------- - - self.use_focal_loss = use_focal_loss - - self.use_weak_negative = use_weak_negative - self.use_label_smooth = use_label_smooth - self.ls_coef = ls_coef - self.use_entropy = use_maximum_entropy - self.use_word_weights = use_word_weights - self.use_token_pos = use_token_pos - - self.predict_next_k_words = predict_next_k_words - self.next_k = next_k - self.pad = torch.nn.ReplicationPad1d((0, self.next_k-1)) - - self.use_expect_k = use_expect_k - self.use_top_k = use_top_k - - if self.use_word_weights or 
self.use_token_pos: - self.focal_loss = FocalLoss(ignore_index=self.pad_id, gamma=focal_gamma, reduction='none') - else: - self.focal_loss = FocalLoss(ignore_index=self.pad_id, gamma=focal_gamma, reduction='mean') - - self.masked_text = masked_text - self.masked_text_ratio = masked_text_ratio - # self.text_mask_token = nn.Parameter(torch.randn(embed_dim)) - self.mask_token_id = len(self.tokenizer.vocab) - - # self.text_position_embed = nn.Parameter(torch.zeros(1, text_length, embed_dim), requires_grad=False) - self.text_length = text_length - - self.latent_projector_layer = projector_layer - if self.latent_projector_layer != 0: - self.latent_projector = [ - nn.Linear(embed_dim, embed_dim), - nn.ReLU() - ] * (self.latent_projector_layer - 1) - self.latent_projector.append(nn.Linear(embed_dim, embed_dim)) - - self.latent_projector = nn.Sequential(*self.latent_projector) - - - self.initialize_weights() - - - def initialize_weights(self): - # initialization - # initialize (and freeze) pos_embed by sin-cos embedding - pos_embed = get_2d_sincos_pos_embed(self.pos_embed.shape[-1], int(self.patch_embed.num_patches**.5), cls_token=True) - self.pos_embed.data.copy_(torch.from_numpy(pos_embed).float().unsqueeze(0)) - - decoder_pos_embed = get_2d_sincos_pos_embed(self.decoder_pos_embed.shape[-1], int(self.patch_embed.num_patches**.5), cls_token=True) - self.decoder_pos_embed.data.copy_(torch.from_numpy(decoder_pos_embed).float().unsqueeze(0)) - - # text_pos_embed = get_1d_sincos_pos_embed_from_grid(self.embed_dim, ) - # torch.nn.init.xavier_normal_(self.text_position_embed) # learnable text position embedding - - # initialize patch_embed like nn.Linear (instead of nn.Conv2d) - w = self.patch_embed.proj.weight.data - torch.nn.init.xavier_uniform_(w.view([w.shape[0], -1])) - - # timm's trunc_normal_(std=.02) is effectively normal_(std=0.02) as cutoff is too big (2.) - torch.nn.init.normal_(self.cls_token, std=.02) - torch.nn.init.normal_(self.mask_token, std=.02) - # torch.nn.init.normal_(self.text_mask_token, std=.02) - - # initialize nn.Linear and nn.LayerNorm - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - # we use xavier_uniform following official JAX ViT: - torch.nn.init.xavier_uniform_(m.weight) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def patchify(self, imgs): - """ - imgs: (N, 3, H, W) - x: (N, L, patch_size**2 *3) - """ - p = self.patch_embed.patch_size[0] - assert imgs.shape[2] == imgs.shape[3] and imgs.shape[2] % p == 0 - - h = w = imgs.shape[2] // p - x = imgs.reshape(shape=(imgs.shape[0], 3, h, p, w, p)) - x = torch.einsum('nchpwq->nhwpqc', x) - x = x.reshape(shape=(imgs.shape[0], h * w, p**2 * 3)) - return x - - def unpatchify(self, x): - """ - x: (N, L, patch_size**2 *3) - imgs: (N, 3, H, W) - """ - p = self.patch_embed.patch_size[0] - h = w = int(x.shape[1]**.5) - assert h * w == x.shape[1] - - x = x.reshape(shape=(x.shape[0], h, w, p, p, 3)) - x = torch.einsum('nhwpqc->nchpwq', x) - imgs = x.reshape(shape=(x.shape[0], 3, h * p, h * p)) - return imgs - - def random_masking(self, x, mask_ratio): - """ - Perform per-sample random masking by per-sample shuffling. - Per-sample shuffling is done by argsort random noise. 
- x: [N, L, D], sequence - """ - N, L, D = x.shape # batch, length, dim - len_keep = int(L * (1 - mask_ratio)) - - noise = torch.rand(N, L, device=x.device) # noise in [0, 1] - - # sort noise for each sample - ids_shuffle = torch.argsort(noise, dim=1) # ascend: small is keep, large is remove - ids_restore = torch.argsort(ids_shuffle, dim=1) - - # keep the first subset - ids_keep = ids_shuffle[:, :len_keep] - x_masked = torch.gather(x, dim=1, index=ids_keep.unsqueeze(-1).repeat(1, 1, D)) - - # generate the binary mask: 0 is keep, 1 is remove - mask = torch.ones([N, L], device=x.device) - mask[:, :len_keep] = 0 - # unshuffle to get the binary mask - mask = torch.gather(mask, dim=1, index=ids_restore) - - return x_masked, mask, ids_restore, ids_keep - - def forward_encoder(self, x, mask_ratio): - # embed patches - x = self.patch_embed(x) - - # add pos embed w/o cls token - x = x + self.pos_embed[:, 1:, :] - - # masking: length -> length * mask_ratio - x, mask, ids_restore, ids_keep = self.random_masking(x, mask_ratio) - - # append cls token - cls_token = self.cls_token + self.pos_embed[:, :1, :] - cls_tokens = cls_token.expand(x.shape[0], -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - - if self.extract_multi_level: - multi_level_feats = [] - # apply Transformer blocks - for blk_idx, blk in enumerate(self.blocks): - x = blk(x) - if blk_idx in [2, 5, 8]: - multi_level_feats.append(self.projectors[[2,5,8].index(blk_idx)](x)) - x = self.norm(x) - multi_level_feats.append(x) - - return multi_level_feats, mask, ids_restore - - - # apply Transformer blocks - for blk_idx, blk in enumerate(self.blocks): - x = blk(x) - x = self.norm(x) - - return x, mask, ids_restore, ids_keep - - def forward_decoder(self, x, ids_restore): - # embed tokens - x = self.decoder_embed(x) - # non_mask_token = x - - # append mask tokens to sequence - mask_tokens = self.mask_token.repeat(x.shape[0], ids_restore.shape[1] + 1 - x.shape[1], 1) - x_ = torch.cat([x[:, 1:, :], mask_tokens], dim=1) # no cls token - x_ = torch.gather(x_, dim=1, index=ids_restore.unsqueeze(-1).repeat(1, 1, x.shape[2])) # unshuffle - x = torch.cat([x[:, :1, :], x_], dim=1) # append cls token - - # add pos embed - x = x + self.decoder_pos_embed - - # apply Transformer blocks - decoder_feat = [] - for idx, blk in enumerate(self.decoder_blocks): - x = blk(x) - if idx == self.mae_decoder_depth // 2: - decoder_feat.append(x) - - x = self.decoder_norm(x) - - # use the output from decoder to do captioning - - # predictor projection - x = self.decoder_pred(x) - - # remove cls token - x = x[:, 1:, :] - - return x, decoder_feat - - def forward_loss(self, imgs, pred, mask): - """ - imgs: [N, 3, H, W] - pred: [N, L, p*p*3] - mask: [N, L], 0 is keep, 1 is remove, - """ - target = self.patchify(imgs) - if self.norm_pix_loss: - mean = target.mean(dim=-1, keepdim=True) - var = target.var(dim=-1, keepdim=True) - target = (target - mean) / (var + 1.e-6)**.5 - - loss = (pred - target) ** 2 - loss = loss.mean(dim=-1) # [N, L], mean loss per patch - - loss = (loss * mask).sum() / mask.sum() # mean loss on removed patches - return loss - - def embed_text(self, text): - batch, device = text.shape[0], text.device - - seq = text.shape[1] - - text_tokens = self.token_emb(text) - - # append text cls tokens - text_cls_tokens = repeat(self.text_cls_token, 'd -> b 1 d', b=batch) - text_tokens = torch.cat((text_tokens, text_cls_tokens), dim=-2) - - # create specific mask for text cls token at the end - # to prevent it from attending to padding - cls_mask = rearrange(text != 
self.pad_id, 'b j -> b 1 j') - attn_mask = F.pad(cls_mask, (0, 1, seq, 0), value=True) - - # go through unimodal layers - for attn_ff in self.unimodal_layers: - text_tokens = attn_ff(text_tokens, attn_mask=attn_mask) - - if self.need_uni_2_mul_proj: - text_tokens = self.uni_2_mul_proj(text_tokens) - - # get text cls token - text_tokens, text_cls_tokens = text_tokens[:, :-1], text_tokens[:, -1] - return text_tokens - - - - def forward(self, imgs, caption_ids=None, attention_mask=None, mask_ratio=0.75, - freeze_bert=False, teacher_forcing=False, caption_only=False, - encoder_only=False, word_weights=None, syn_count=None): - latent, mask, ids_restore, ids_keep = self.forward_encoder(imgs, mask_ratio) - - if not caption_only: - pred, decoder_feat = self.forward_decoder(latent, ids_restore) # [N, L, p*p*3] - mae_loss = self.forward_loss(imgs, pred, mask) - else: - mae_loss = 0. - - if self.latent_projector_layer != 0: - latent = self.latent_projector(latent) - - # latent: visual info: N, L, C - # caption_ids: N, Len - text, labels = caption_ids[:, :-1], caption_ids[:, 1:] - - seq = text.shape[1] - text_tokens = self.embed_text(text) # N, Len, C - - # create specific mask for text cls token at the end - # to prevent it from attending to padding - cls_mask = rearrange(text != self.pad_id, 'b j -> b 1 j') - attn_mask = F.pad(cls_mask, (0, 1, seq, 0), value=True) - unimodal_text_tokens = text_tokens - if not self.less_u: - for attn_ff, cross_attn in self.multimodal_layers: - text_tokens = attn_ff(text_tokens, attn_mask=attn_mask[:, :-1, :-1]) - text_tokens = cross_attn(text_tokens, latent) - else: - # dim, num_head, - for cross_attn1, cross_attn2 in self.multimodal_layers: - text_tokens = cross_attn1(text_tokens, latent) - text_tokens = cross_attn2(text_tokens, latent) - - logits = self.to_logits(text_tokens) # N, Len, NVocab - logits = logits.reshape(-1, len(self.tokenizer.vocab)) - labels = labels.reshape(-1) - - caption_loss = F.cross_entropy(logits, labels, ignore_index=self.pad_id,) - - - return mae_loss, caption_loss, None - - - -def mae_vit_small_patch16_dec512d8b(**kwargs): - model = MaskedAutoencoderViT( - patch_size=16, embed_dim=384, depth=12, num_heads=6, - decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16, - mlp_ratio=4, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - - -def mae_vit_base_patch16_dec512d8b(**kwargs): - model = MaskedAutoencoderViT( - patch_size=16, embed_dim=768, depth=12, num_heads=12, - decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16, - mlp_ratio=4, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - -def mae_vit_large_patch16_dec512d8b(**kwargs): - model = MaskedAutoencoderViT( - patch_size=16, embed_dim=1024, depth=24, num_heads=16, - decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16, - mlp_ratio=4, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -def mae_vit_huge_patch14_dec512d8b(**kwargs): - model = MaskedAutoencoderViT( - patch_size=14, embed_dim=1280, depth=32, num_heads=16, - decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16, - mlp_ratio=4, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -# set recommended archs -mae_vit_small_patch16 = mae_vit_small_patch16_dec512d8b -mae_vit_base_patch16 = mae_vit_base_patch16_dec512d8b # decoder: 512 dim, 8 blocks -mae_vit_large_patch16 = mae_vit_large_patch16_dec512d8b # decoder: 512 dim, 8 blocks -mae_vit_huge_patch14 = mae_vit_huge_patch14_dec512d8b # decoder: 512 dim, 8 blocks - - - 
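Editor's sketch, not part of the deleted file above: a minimal, hypothetical example of how the captioning MAE defined in this file might be instantiated and run. The dummy shapes, the tokenizer call, and the CUDA placement are assumptions inferred from the constructor and forward() signatures (the CrossAttention dropout above allocates torch.cuda tensors, so a training-mode forward pass expects CUDA tensors).

import torch

if __name__ == "__main__":
    # Build the base model and move it to the GPU (the attention dropout code above
    # uses torch.cuda.FloatTensor, so a CPU-only training-mode pass would fail).
    model = mae_vit_base_patch16().cuda()

    imgs = torch.randn(2, 3, 224, 224).cuda()  # dummy batch of 224x224 RGB images

    # Tokenize two dummy captions with the model's own BERT tokenizer, padded and
    # truncated to the configured text length (70 by default).
    caption_ids = model.tokenizer(
        ["a cat sitting on a mat", "a dog running in a park"],
        padding="max_length",
        truncation=True,
        max_length=model.text_length,
        return_tensors="pt",
    )["input_ids"].cuda()

    # The forward pass returns the pixel-reconstruction (MAE) loss and the
    # autoregressive captioning loss; the third return value is unused here.
    mae_loss, caption_loss, _ = model(imgs, caption_ids=caption_ids, mask_ratio=0.75)
    (mae_loss + caption_loss).backward()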
- - diff --git a/spaces/terfces0erbo/CollegeProjectV2/5 Ci Sinif Azerbaycan Dili Suallar Summativ Qiymetlendirme UPD.md b/spaces/terfces0erbo/CollegeProjectV2/5 Ci Sinif Azerbaycan Dili Suallar Summativ Qiymetlendirme UPD.md deleted file mode 100644 index 811744211821bf7da7b28b925631a29fdaa724b8..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/5 Ci Sinif Azerbaycan Dili Suallar Summativ Qiymetlendirme UPD.md +++ /dev/null @@ -1,6 +0,0 @@ -
          -

Grade 5 test collection. Grade 5 topics: sets, multiplication and its properties, equations, divisibility rules, fractions, and so on. Azerbaijani language tests for grade 5. DİM's grade 5 in-class test in Mathematics. Mathematics, grade 1, small summative assessment 3. Mathematics Tests, Grade 7, Answers. Title: Grade 9 Mathematics Exam Tests. How many legs are there in total? History questions by grade. The main line of work of our publishing house is preparing teaching aids, test banks and monthly mock exams for university applicants and for grades 9, 10 and 11. Correct answers to the mock exams. Mathematics for students preparing for the grade 9 school-leaving exams, and resources prepared by a mathematics teacher for teaching the subject.

          -

          5 ci sinif azerbaycan dili suallar summativ qiymetlendirme


          DOWNLOAD --->>> https://bytlly.com/2uGj2a



          -

KSQ (small summative assessment), grade 6 Mathematics. Don't forget to subscribe to the channel and like the video lessons if you enjoyed them. Grade 6 BSQ 1. Grade 6 first-semester major summative assessment: surname, name, class. 1. Circle the names of the computer's input devices: monitor, keyboard, processor, system board, RAM, microphone, scanner, printer, hard disk, flash drive. 2. Grade 6 Mathematics KSQ 1, KSQ 2, KSQ 3, KSQ 4, KSQ 5, KSQ 6, KSQ 7, KSQ 8 and BSQ 1. A Google account is required to download these tests.

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Dameware Nt Utilities 8.0.1.151 Crack [REPACK] Cocaine.md b/spaces/terfces0erbo/CollegeProjectV2/Dameware Nt Utilities 8.0.1.151 Crack [REPACK] Cocaine.md deleted file mode 100644 index bd9269f6c443d8ade8f71281fdbd3042abc4aec3..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Dameware Nt Utilities 8.0.1.151 Crack [REPACK] Cocaine.md +++ /dev/null @@ -1,9 +0,0 @@ -

          dameware nt utilities 8.0.1.151 crack cocaine


          Download Filehttps://bytlly.com/2uGjKc



          - -Bianca Rivera added dameware nt 8.0.1.151 crack cocaine utilities to the list of important but less urgent. Board Eisenhower Matrix Task board · utilities dameware nt . I am writing to you just now and I have two very important and urgent tasks that I must complete today. -As a bonus to this task, I have to rent a car in the city today. -And finally, I have a very urgent task that I have to complete today, which means I have two free hours that day. -And finally, I have an urgent task that I need to get done today, which means I have two free hours that day. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/Free Download 007 Facebook Hack V1.0 With PORTABLE Full Cracked.md b/spaces/terfces0erbo/CollegeProjectV2/Free Download 007 Facebook Hack V1.0 With PORTABLE Full Cracked.md deleted file mode 100644 index d61c8084ce3b05f645dca2ad6b41ac64306e5b22..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Free Download 007 Facebook Hack V1.0 With PORTABLE Full Cracked.md +++ /dev/null @@ -1,6 +0,0 @@ -

          free download 007 facebook hack v1.0 with full cracked


          DOWNLOAD · https://bytlly.com/2uGiYp



          -
          - d5da3c52bf
          -
          -
          -

          diff --git a/spaces/test12356/SUI-svc-3.0/preprocess_flist_config.py b/spaces/test12356/SUI-svc-3.0/preprocess_flist_config.py deleted file mode 100644 index 2fc8571b4162d769296e4cdcfb67fa68d95164cf..0000000000000000000000000000000000000000 --- a/spaces/test12356/SUI-svc-3.0/preprocess_flist_config.py +++ /dev/null @@ -1,117 +0,0 @@ -import os -import argparse -from tqdm import tqdm -from random import shuffle -import json -config_template = { - "train": { - "log_interval": 200, - "eval_interval": 1000, - "seed": 1234, - "epochs": 10000, - "learning_rate": 2e-4, - "betas": [0.8, 0.99], - "eps": 1e-9, - "batch_size": 12, - "fp16_run": False, - "lr_decay": 0.999875, - "segment_size": 17920, - "init_lr_ratio": 1, - "warmup_epochs": 0, - "c_mel": 45, - "c_kl": 1.0, - "use_sr": True, - "max_speclen": 384, - "port": "8001" - }, - "data": { - "training_files":"filelists/train.txt", - "validation_files":"filelists/val.txt", - "max_wav_value": 32768.0, - "sampling_rate": 48000, - "filter_length": 1280, - "hop_length": 320, - "win_length": 1280, - "n_mel_channels": 80, - "mel_fmin": 0.0, - "mel_fmax": None - }, - "model": { - "inter_channels": 192, - "hidden_channels": 192, - "filter_channels": 768, - "n_heads": 2, - "n_layers": 6, - "kernel_size": 3, - "p_dropout": 0.1, - "resblock": "1", - "resblock_kernel_sizes": [3,7,11], - "resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]], - "upsample_rates": [10,8,2,2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16,16,4,4], - "n_layers_q": 3, - "use_spectral_norm": False, - "gin_channels": 256, - "ssl_dim": 256, - "n_speakers": 0, - }, - "spk":{ - "nen": 0, - "paimon": 1, - "yunhao": 2 - } -} - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/48k", help="path to source dir") - args = parser.parse_args() - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = {} - spk_id = 0 - for speaker in tqdm(os.listdir(args.source_dir)): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = [os.path.join(args.source_dir, speaker, i)for i in os.listdir(os.path.join(args.source_dir, speaker))] - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-10] - val += wavs[:2] - test += wavs[-10:] - n_speakers = len(spk_dict.keys())*2 - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - config_template["model"]["n_speakers"] = n_speakers - config_template["spk"] = spk_dict - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(config_template, f, indent=2) diff --git a/spaces/thejagstudio/procom/main/migrations/0001_initial.py b/spaces/thejagstudio/procom/main/migrations/0001_initial.py deleted file mode 100644 index 
85193d1967c8e4c90a84e185d3bc32f18fbed444..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/main/migrations/0001_initial.py +++ /dev/null @@ -1,29 +0,0 @@ -# Generated by Django 4.1.5 on 2023-01-28 04:46 - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - initial = True - - dependencies = [] - - operations = [ - migrations.CreateModel( - name="Food", - fields=[ - ( - "id", - models.BigAutoField( - auto_created=True, - primary_key=True, - serialize=False, - verbose_name="ID", - ), - ), - ("name", models.CharField(max_length=200)), - ("description", models.CharField(max_length=500)), - ], - ), - ] diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Artcut 2006 Software Free [BETTER] Download.md b/spaces/tialenAdioni/chat-gpt-api/logs/Artcut 2006 Software Free [BETTER] Download.md deleted file mode 100644 index cdca0e86a162d412650e55dcd87c9503a5fa7b19..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Artcut 2006 Software Free [BETTER] Download.md +++ /dev/null @@ -1,46 +0,0 @@ -
          -

          How to Download and Install Artcut 2006 Software for Free

          -

Artcut 2006 is a software package for basic signs and vinyl graphics design. It allows you to create, edit and cut graphics using a variety of tools and features. Artcut 2006 is compatible with Windows 98/ME/2000/XP and requires only 32 MB of RAM and 1 MB of hard disk space. In this article, we will show you how to download and install Artcut 2006 software for free.

          -

          Step 1: Download Artcut 2006 Software

          -

The first step is to download Artcut 2006 software from a reliable source. You can find many websites that offer Artcut 2006 software for free download, but some of them may contain viruses or malware that can harm your computer. Therefore, we recommend that you use the following link[^1^] to download Artcut 2006 software safely and securely. This link will take you to the Software Informer website, where you can find detailed information about Artcut 2006 software, such as its features, screenshots, reviews and comments.

          -

          artcut 2006 software free download


          Download Zip ····· https://urlcod.com/2uK5jY



          -

          Step 2: Install Artcut 2006 Software

          -

          The second step is to install Artcut 2006 software on your computer. To do this, you need to follow these simple steps:

          -
            -
          • Open the downloaded file (Artcut6.exe) and run it as administrator.
          • -
          • Follow the instructions on the screen to complete the installation process.
          • -
          • When the installation is finished, you will see a shortcut icon of Artcut 2006 on your desktop.
          • -
          • Double-click on the icon to launch Artcut 2006 software.
          • -
          -

          Step 3: Enjoy Artcut 2006 Software

          -

          The third step is to enjoy Artcut 2006 software and create your own signs and vinyl graphics. You can use Artcut 2006 software to:

          -
            -
          • Access a variety of tools and features, such as outlines, distortions, node editing, geometric shapes, grouping/ungrouping, welding, text editing and more.
          • -
          • Use a modest clip art and logo collection or import your own images from .plt files.
          • -
          • Cut your graphics using a compatible cutting plotter or printer.
          • -
          -

          We hope this article was helpful for you to download and install Artcut 2006 software for free. If you have any questions or problems, please feel free to leave a comment below or contact us via email. Thank you for reading!

          - -

          Why Choose Artcut 2006 Software?

          -

          Artcut 2006 software is a great choice for anyone who wants to create professional-looking signs and vinyl graphics without spending too much money or time. Artcut 2006 software has many benefits, such as:

          -
            -
          • It is easy to use and learn. You don't need any special skills or training to use Artcut 2006 software. It has a user-friendly interface and clear instructions that guide you through every step of the design and cutting process.
          • -
          • It is compatible with most cutting plotters and printers. You can use Artcut 2006 software with any cutting plotter or printer that supports HPGL language, such as Redsail, Roland, Graphtec, Mimaki, Summa and more.
          • -
          • It is versatile and flexible. You can use Artcut 2006 software to create signs and vinyl graphics for various purposes and occasions, such as logos, banners, stickers, decals, labels, car wraps, window graphics and more.
          • -
          • It is affordable and reliable. You can download and install Artcut 2006 software for free from a trusted source[^1^]. You don't need to pay any subscription fees or hidden charges to use Artcut 2006 software. You can also enjoy free updates and technical support from the developer.
          • -
          -

          How to Use Artcut 2006 Software?

          -

          Using Artcut 2006 software is simple and fun. You can follow these basic steps to create your own signs and vinyl graphics:

          -

          -
            -
          1. Launch Artcut 2006 software and select a new document or open an existing one.
          2. -
          3. Draw or import your graphic on the work area. You can use the tools and features on the toolbar and menu to edit your graphic as you like.
          4. -
          5. Adjust the size and position of your graphic according to your cutting plotter or printer settings.
          6. -
          7. Select the cut option and choose the cutting mode, speed, pressure and blade offset.
          8. -
          9. Connect your cutting plotter or printer to your computer via USB cable or serial port.
          10. -
          11. Send your graphic to your cutting plotter or printer and wait for it to finish cutting.
          12. -
          13. Weed out the excess vinyl and apply your graphic to the desired surface.
          14. -
          -

          You can also watch some helpful videos on YouTube[^2^] [^4^] [^5^] that show you how to install and use Artcut 2006 software in more detail.

          e93f5a0c3f
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Drive Ahead v1.84 MOD APK [Latest] and Smash Your Opponents with Various Vehicles.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Drive Ahead v1.84 MOD APK [Latest] and Smash Your Opponents with Various Vehicles.md deleted file mode 100644 index d4859731945c7478165a4bf7f70fa9616b1c0c37..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Drive Ahead v1.84 MOD APK [Latest] and Smash Your Opponents with Various Vehicles.md +++ /dev/null @@ -1,112 +0,0 @@ - -

          Drive Ahead v1.84 MOD APK [Latest]: A Fun and Exciting Car Fighting Game

          -

          Introduction

          -

          If you are looking for a game that combines racing, action, and humor, then you should try Drive Ahead. This is a game where you have to smash your opponent's head with your vehicle in various arenas. Sounds simple, right? Well, not quite. You also have to deal with different obstacles, traps, and hazards that can make your life difficult. And if that's not enough, you also have to choose from a wide range of vehicles, from cars and trucks to bikes and tanks, each with their own strengths and weaknesses.

          -

          Drive Ahead v1.84 MOD APK [Latest]


          Download File - https://urlcod.com/2uK9eI



          -

          Drive Ahead is a game that will keep you entertained for hours with its addictive gameplay, pixel graphics, and hilarious sound effects. You can play solo or with your friends in local or online multiplayer modes. You can also customize your vehicles and arenas with different themes and items. And if you want to challenge yourself even more, you can try the missions and events that will test your skills and luck.

          -

          What is Drive Ahead?

          -

          Drive Ahead is a car fighting game developed by Dodreams Ltd. It was released in 2015 for Android and iOS devices. The game has been downloaded over 100 million times and has received positive reviews from users and critics alike.

          -

          The game's main mode is called Duel, where you have to face another player or a computer-controlled opponent in a random arena. The goal is to hit your opponent's head with your vehicle before they hit yours. The first one to score five points wins the match. You can also play in King of the Hill mode, where you have to stay on top of a hill while avoiding falling off or getting hit by your opponent.

          -

          Download Drive Ahead v1.84 MOD APK for free
          -How to install Drive Ahead v1.84 MOD APK on Android
          -Drive Ahead v1.84 MOD APK unlimited coins and tickets
          -Drive Ahead v1.84 MOD APK latest version update
          -Drive Ahead v1.84 MOD APK gameplay and features
          -Drive Ahead v1.84 MOD APK online multiplayer mode
          -Drive Ahead v1.84 MOD APK unlocked all cars and arenas
          -Drive Ahead v1.84 MOD APK no root required
          -Drive Ahead v1.84 MOD APK best car games for Android
          -Drive Ahead v1.84 MOD APK hack and cheats
          -Drive Ahead v1.84 MOD APK review and rating
          -Drive Ahead v1.84 MOD APK direct download link
          -Drive Ahead v1.84 MOD APK offline mode available
          -Drive Ahead v1.84 MOD APK new missions and challenges
          -Drive Ahead v1.84 MOD APK compatible with Android 4.4 and up
          -Drive Ahead v1.84 MOD APK modded by Rexdl
          -Drive Ahead v1.84 MOD APK fun and addictive racing game
          -Drive Ahead v1.84 MOD APK tips and tricks to win
          -Drive Ahead v1.84 MOD APK customise your cars and helmets
          -Drive Ahead v1.84 MOD APK support for game controllers
          -Drive Ahead v1.84 MOD APK bug fixes and improvements
          -Drive Ahead v1.84 MOD APK safe and secure download
          -Drive Ahead v1.84 MOD APK fast and easy installation
          -Drive Ahead v1.84 MOD APK full HD graphics and sound effects
          -Drive Ahead v1.84 MOD APK enjoy the game without ads
          -Drive Ahead v1.84 MOD APK share your replays with friends
          -Drive Ahead v1.84 MOD APK challenge your friends in local multiplayer
          -Drive Ahead v1.84 MOD APK earn rewards and achievements
          -Drive Ahead v1.84 MOD APK explore different arenas and themes
          -Drive Ahead v1.84 MOD APK crash and smash your opponents' cars
          -Drive Ahead v1.84 MOD APK test your skills in different game modes
          -Drive Ahead v1.84 MOD APK download size and requirements
          -Drive Ahead v1.84 MOD APK how to update to the latest version
          -Drive Ahead v1.84 MOD APK alternative download sources
          -Drive Ahead v1.84 MOD APK frequently asked questions and answers
          -Drive Ahead v1.84 MOD APK user feedback and comments
          -Drive Ahead v1.84 MOD APK video tutorial and walkthrough
          -Drive Ahead v1.84 MOD APK compare with other car games for Android
          -Drive Ahead v1.84 MOD APK what's new in the latest version
          -Drive Ahead v1.84 MOD APK original vs modded version comparison
          -Drive Ahead v1.84 MOD APK pros and cons of using the modded version
          -Drive Ahead v1.84 MOD APK how to uninstall the modded version
          -Drive Ahead v1.84 MOD APK legal and ethical issues of using the modded version
          -Drive Ahead v1.84 MOD APK contact the developer for support and feedback
          -Drive Ahead v1.84 MOD APK join the community of fans and players
          -Drive Ahead v1.84 MOD APK follow the official social media accounts of the game
          -Drive Ahead v1.84 MOD APK recommend the game to your friends and family

          -

          The game also has other modes such as Rift Riders, where you have to collect rift orbs and avoid enemies; Soccer, where you have to score goals with your vehicle; Boss Fights, where you have to defeat powerful bosses; and Daily Challenges, where you have to complete specific tasks.

          -

          What are the features of Drive Ahead?

          -

          Drive Ahead has many features that make it a fun and exciting game to play. Some of them are:

          -
            -
          • Over 300 vehicles to choose from, including cars, trucks, bikes, tanks, mechas, UFOs, dinosaurs, and more.
          • -
          • Over 100 arenas to fight in, each with different themes, layouts, obstacles, and hazards.
          • -
          • Local and online multiplayer modes, where you can play with up to four players on the same device or online.
          • -
          • Customization options, where you can change the appearance of your vehicles and arenas with different skins and items.
          • -
          • Missions and events, where you can earn rewards by completing various objectives.
          • -
          • Achievements and leaderboards, where you can track your progress and compete with other players.
          • -
          -

          How to download and install Drive Ahead v1.84 MOD APK [Latest]?

          -

          If you want to enjoy Drive Ahead with more features and benefits, then you should download and install the modded version of the game. This version will give you unlimited money, all vehicles unlocked, and an ad-free experience. Here's how you can do it:

          -

          Requirements

          -
            -
          • An Android device running on version 4.4 or higher.
          • -
          • A stable internet connection.
          • -
          • A file manager app.
          • -
          • Enough storage space on your device.
          • -
          -

          Steps

          -
            -
          1. Download the Drive Ahead v1.84 MOD APK file from a trusted source. You can use the link below:
          2. -
          -

          https://apkdone.com/drive-ahead/

          -
            -
          1. Once the download is complete, locate the file on your device using the file manager app.
          2. -
          3. Tap on the file and select Install. You may need to enable Unknown Sources in your device settings if this is your first time installing an APK file.
          4. -
          5. Wait for the installation process to finish.
          6. -
          7. Launch the game and enjoy!
          8. -
          -

          What are the benefits of Drive Ahead v1.84 MOD APK [Latest]?

          -

          The modded version of Drive Ahead will give you several benefits that will enhance your gaming experience. Some of them are:

          -

          Unlimited money

          -

          With unlimited money, you can buy any vehicle or arena that you want without worrying about the cost. You can also upgrade your vehicles with different parts and accessories to make them more powerful and stylish.

          -

          All vehicles unlocked

          -

          With all vehicles unlocked, you can access any vehicle that you want without having to unlock them by playing or paying. You can try out different vehicles and see which ones suit your playstyle and preference.

          -

          Ad-free experience

          -

          With an ad-free experience, you can play without any interruptions or distractions from annoying ads. You can also save your data usage and battery life by avoiding ads.

          -

          Tips and tricks for playing Drive Ahead

          -

          If you want to improve your skills and performance in Drive Ahead, then you should follow these tips and tricks:

          -

          Choose the right vehicle for each arena

          -

          Different vehicles have different advantages and disadvantages in different arenas. For example, some vehicles are faster but less stable than others; some vehicles are heavier but more durable than others; some vehicles have special abilities or weapons that can help or hinder them in certain situations. Therefore, you should choose a vehicle that matches the arena's theme, layout, obstacles, and hazards.

          -

          Use the terrain to your advantage

          -

          The terrain can be your friend or foe depending on how you use it. For example, some terrains can help you gain speed or height; some terrains can slow you down or make you lose balance; some terrains can damage or destroy your vehicle or your opponent's vehicle; some terrains can hide or expose you or your opponent. Therefore, you should use the terrain to your advantage by avoiding its pitfalls and exploiting its opportunities.

          -

          Collect helmets and coins to unlock more content

          -

          Helmets and coins are the main currencies in Drive Ahead. You can use them to unlock more vehicles and arenas in the shop menu. You can also use them to spin the roulette wheel after each match for a chance to win more rewards. You can collect helmets and coins by playing matches, completing missions and events, watching ads, or buying them with real money.

          -

          Conclusion

          -

          Summary of the main points

          -

          In conclusion, Drive Ahead is a fun and exciting car fighting game that will keep you entertained for hours with its addictive gameplay, pixel graphics, and hilarious sound effects. You can choose from over 300 vehicles and over 100 arenas to fight in solo or multiplayer modes. You can also customize your vehicles and arenas with different skins and items. And if you want more features and benefits, you can download and install Drive Ahead v1.84 MOD APK [Latest], which will give you unlimited money, all vehicles unlocked, and an ad-free experience.

          -

          Call to action

          -

          If you are ready to smash some heads with your vehicle in Drive Ahead v1.84 MOD APK [Latest], then don't wait any longer! Download it now from this link:

          -

          https://apkdone.com/drive-ahead/

          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Enjoy the Beauty of the Underwater World with Marine Aquarium 3.5.9 Rus.md b/spaces/tialenAdioni/chat-gpt-api/logs/Enjoy the Beauty of the Underwater World with Marine Aquarium 3.5.9 Rus.md deleted file mode 100644 index 3ca3589dde755a89155503d58a21d1bea8ea0b4e..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Enjoy the Beauty of the Underwater World with Marine Aquarium 3.5.9 Rus.md +++ /dev/null @@ -1,20 +0,0 @@ - -

          How to Enjoy the Beauty of Marine Aquarium 3.5.9 Rus on Your Android Device

          -

          If you love tropical fish and want to experience the most realistic and stunning aquarium simulation on your Android device, you should try Marine Aquarium 3.5.9 Rus. This app is the latest version of the world's top-selling tropical fish aquarium, developed by Prolific Publishing, Inc.

          -

          Marine Aquarium 3.5.9 Rus


          Download Zip ✫✫✫ https://urlcod.com/2uK77f



          -

          Marine Aquarium 3.5.9 Rus features over two dozen of the most popular tropical fish, masterfully crafted and animated with artificial intelligence. You can also admire a Zebra Moray Eel and an adorable starfish, as well as a hand-sculpted crystal display that shows an analog clock, a digital clock, or a calendar.

          -

          The app uses genuine hyper-realistic 3D rendering to create a photo-realistic aquarium that you can enjoy on your Android device. You can also customize the settings, such as the panning speed, the bubble sound, and the fish selection.

          -

          One of the coolest features of Marine Aquarium 3.5.9 Rus is that you can set it as your Android's Daydream, which means that it will automatically start when your device is charging. You can also tap and hold to reveal the wire frame structure and see how the magic is created.

          -

          Marine Aquarium 3.5.9 Rus is available for download on Google Play Store for $2.99, which is a fair price for such a high-quality app. You can also try the free version, which offers 100% of the features with no ads.

          -

          If you are looking for a relaxing and mesmerizing app that will bring the beauty of marine life to your Android device, you should definitely check out Marine Aquarium 3.5.9 Rus. It is one of the best aquarium simulations ever made, and it will make you feel like you have a piece of the ocean in your pocket.

          - -

But Marine Aquarium 3.5.9 Rus is not only a beautiful and entertaining app; it also has some health and wellbeing benefits for its users. According to various studies, watching aquariums and fish tanks can have positive effects on people's physical and mental health, such as:

          -
            -
          • Reducing stress and anxiety: Aquariums can create a calming and relaxing atmosphere that can lower people's stress levels and anxiety. A study by researchers from the University of Plymouth, the University of Exeter and the National Marine Aquarium found that viewing aquarium displays led to noticeable reductions in blood pressure and heart rate, as well as improved mood [^2^].
          • -
          • Improving sleep quality: Aquariums can also help people fall asleep faster and sleep better, as they provide a soothing and natural sound that can mask unwanted noises. A study by researchers from the University of Pennsylvania found that the presence of an aquarium reduced the participants' anxiety levels by 12% before undergoing dental surgery [^4^].
          • -
          • Enhancing cognitive function: Aquariums can also stimulate people's curiosity and interest in learning more about marine life and the environment. A study by researchers from the University of Exeter found that higher numbers of fish in aquariums helped to hold people's attention for longer and improve their memory [^2^]. The marine aquarium trade has also benefited science by providing easy access to marine species for research [^5^].
          • -
          • Decreasing hyperactivity in children: Aquariums can also have a positive impact on children's behaviour and development, especially those with attention deficit hyperactivity disorder (ADHD). The presence of an aquarium can reduce hyperactivity in children as well as improving their sleeping patterns and overall curiosity and appreciation for the natural world .
          • -
          -

          As you can see, Marine Aquarium 3.5.9 Rus is not only a fun and amazing app, but also a beneficial one for your health and wellbeing. If you want to enjoy the beauty of marine life on your Android device, you should download Marine Aquarium 3.5.9 Rus today and experience it for yourself.

          e753bf7129
          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us Now and Enjoy Cross-Platform Play with Millions of Players.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us Now and Enjoy Cross-Platform Play with Millions of Players.md deleted file mode 100644 index 6339171d24f8868aedf097188486e96f3f2f84c7..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us Now and Enjoy Cross-Platform Play with Millions of Players.md +++ /dev/null @@ -1,135 +0,0 @@ - -

          Download Among Us Now and Join the Fun

          -

          If you are looking for a fun and exciting game to play with your friends or online strangers, you should download Among Us now. Among Us is a multiplayer game of teamwork and betrayal, where you have to work together with your crewmates to prepare your spaceship for departure, while avoiding being killed by one or more impostors who are secretly among you.

          -

          Among Us is a game that will keep you on your toes, as you never know who you can trust and who is out to get you. You will have to use your skills of deduction, deception, and communication to survive and win. Whether you are a crewmate or an impostor, you will have a blast playing this game.

          -

          download among us now


          DOWNLOADhttps://bltlly.com/2uOq2S



          -

          In this article, we will tell you everything you need to know about Among Us, including how to play it, how to download it, how to customize your character, how to find a game online or host your own, and how to communicate with other players. By the end of this article, you will be ready to join the millions of players who are enjoying this game every day.

          -

          How to Play Among Us

          -

          The Basics: Crewmates and Impostors

          -

          Among Us can be played by 4 to 15 players online or over local WiFi. Each player is assigned a role of either a crewmate or an impostor at the start of each round. The crewmates have to complete tasks around the map to fill up a group task bar, while the impostors have to kill crewmates, sabotage the ship, and blend in with the others.

          -

          The crewmates can win by completing all their tasks or by discovering and voting out all the impostors. The impostors can win by killing enough crewmates, by causing a major sabotage that is not fixed in time, or by convincing the crewmates to vote out one of their own.

          -

          The Maps: The Skeld, MIRA HQ, Polus, and the Airship

          -

          Among Us has four different maps that you can play in, each with its own layout, tasks, vents, and sabotages. The maps are:

          -
            -
          • The Skeld: The original map of the game, set on a spaceship with 14 rooms connected by corridors. Some of the tasks include fixing wires, swiping cards, diverting power, and emptying garbage. Some of the sabotages include reactor meltdown, oxygen depletion, communications disruption, and door locking.
          • -
          • MIRA HQ: A high-tech space station with 12 rooms connected by an elevator and a vent system. Some of the tasks include scanning boarding pass, sorting samples, watering plants, and clearing asteroids. Some of the sabotages include reactor meltdown, oxygen depletion, communications disruption, and door locking.
          • -
          • Polus: A snowy planet base with 15 rooms connected by tunnels and an outdoor area. Some of the tasks include scanning boarding pass, filling canisters, repairing drill, and rebooting WiFi. Some of the sabotages include seismic stabilizers, lights outage, communications disruption, and door locking.
          • -
          • The Airship: The newest and largest map of the game, set on a flying airship with 18 rooms connected by ladders, platforms, and vents. Some of the tasks include polishing ruby, emptying trash, developing photos, and dressing mannequins. Some of the sabotages include reactor meltdown, lights outage, communications disruption, and door locking.
          • -
          -

          You can choose which map you want to play in when you create or join a game. Each map has its own challenges and strategies, so you will never get bored of playing Among Us.

          -

          The Modes: Classic or Hide n Seek

          -

          Among Us has two main modes that you can play: classic or hide n seek. The classic mode is the standard mode of the game, where the crewmates have to complete tasks and find the impostors, while the impostors have to kill and deceive the crewmates. The classic mode can be customized with different settings, such as the number of impostors, the kill cooldown, the vision range, the voting time, and more.

          -

          How to download among us for free on PC
          -Download among us on Google Play Store
          -Among us online game no download required
          -Download among us on Steam with discount
          -Among us download for Windows 10 x64bit
          -Play among us online for free in your browser
          -Download among us mod menu apk
          -Among us download size and requirements
          -Download among us on iOS App Store
          -Among us free download for Mac OS
          -Download among us latest version with Airship map
          -Among us download for Chromebook
          -Download among us hack version for Android
          -Among us download link for PC
          -Download among us on Nintendo Switch
          -Among us online unblocked no download
          -Download among us custom skins and hats
          -Among us download for Linux Ubuntu
          -Download among us on Xbox One
          -Among us free download for laptop
          -Download among us on Amazon Fire tablet
          -Among us online multiplayer without download
          -Download among us voice chat mod
          -Among us download for PS4
          -Among us free download with all pets
          -Download among us on Bluestacks emulator
          -Among us online play now no download
          -Download among us on Epic Games Store
          -Among us download for Windows 7 x32bit
          -Download among us on Samsung Galaxy phone
          -Among us online free no download needed
          -Download among us with Henry Stickmin collection
          -Among us download for Macbook Air
          -Download among us on Microsoft Store
          -Among us free download for Android phone
          -Download among us with Discord integration
          -Among us download for Windows 8.1 x64bit
          -Download among us on Huawei phone
          -Among us online game free no download
          -Download among us with Hide n Seek mode
          -Among us download for iPad Pro
          -Download among us on LDPlayer emulator
          -Among us online play now free no download
          -Download among us with Twitch integration
          -Among us download for iPhone 12
          -Download among us on NoxPlayer emulator
          -Among us online game play now no download
          -Download among us with Proximity chat mod
          -Among us download for Kindle Fire

          -

          The hide n seek mode is a variation of the game, where the impostor reveals themselves at the start of the round and has to chase and kill all the crewmates before they finish their tasks. The hide n seek mode has different rules, such as no reporting bodies, no emergency meetings, no sabotages, low impostor vision, high crewmate vision, and more. The hide n seek mode is not an official mode of the game, but a fan-made one that can be played with friends or online players who agree to follow the rules.

          -

          How to Download Among Us

          -

          For PC: Steam or Innersloth Website

          -

          If you want to play Among Us on your PC, you have two options: you can either buy it on Steam for $4.99 or download it for free from the Innersloth website. The Steam version has some advantages, such as automatic updates, achievements, stats, and skins. The Innersloth website version is free, but it requires manual updates and does not have some features that the Steam version has.

          -

          To buy Among Us on Steam, you need to have a Steam account and a compatible device. You can visit the [Among Us Steam page] and click on "Add to Cart" to purchase the game. You can also buy some DLCs that include extra skins, hats, pets, and maps. To download Among Us from the Innersloth website, you need to have a Windows PC and an internet connection. You can visit the [Among Us Innersloth page] and click on "Download" to get the game. You can also check for updates and patches on the same page.

          -

          For Mobile: Google Play or App Store

          -

          If you want to play Among Us on your mobile device, you can download it for free from Google Play or App Store. The mobile version is compatible with Android and iOS devices and has all the features that the PC version has. However, some skins, hats, pets, and maps are not free on mobile and require in-app purchases.

          -

          To download Among Us on Google Play, you need to have an Android device and an internet connection. You can visit the [Among Us Google Play page] and click on "Install" to get the game. You can also buy some DLCs that include extra skins, hats, pets, and maps. To download Among Us on App Store, you need to have an iOS device and an internet connection. You can visit the [Among Us App Store page] and click on "Get" to download the game. You can also buy some DLCs that include extra skins, hats, pets, and maps.

          -

          For Console: Nintendo Switch or Xbox

          -

          If you want to play Among Us on your console, you can buy it on Nintendo Switch or Xbox. The console version is compatible with Nintendo Switch and Xbox One devices and has all the features that the PC and mobile versions have. However, some skins, hats, pets, and maps are not free on console and require in-game purchases.

          -

          To buy Among Us on Nintendo Switch, you need to have a Nintendo Switch device and an internet connection. You can visit the [Among Us Nintendo eShop page] and click on "Proceed to Purchase" to buy the game for $5.00. You can also buy some DLCs that include extra skins, hats, pets, and maps. To buy Among Us on Xbox, you need to have an Xbox One device and an internet connection. You can visit the [Among Us Xbox Store page] and click on "Buy" to buy the game for $4.99. You can also buy some DLCs that include extra skins, hats, pets, and maps.

          -

          How to Customize Your Character

          -

          Choose Your Color, Hat, Visor, Skin, and Pet

          -

          One of the fun aspects of Among Us is that you can customize your character to suit your personality and style. You can choose from 19 different colors, such as red, blue, green, yellow, pink, purple, orange, black, white, and more. You can also choose from over 100 different hats, such as beanies, caps, crowns, ears, horns, masks, glasses, flowers, fruits, and more. You can also choose from 16 different visors that change the shape of your eyes, such as round, oval, square, starry, angry, sad, happy, and more. You can also choose from 12 different skins that change the appearance of your body, such as astronaut, captain, doctor, mechanic, police, and more. You can also choose from 11 different pets that follow you around, such as dogs, cats, birds, robots, aliens, and more.

          -

          You can customize your character before or during a game by clicking on the laptop icon in the lobby or the customize button in the game menu. You can also buy some skins, hats, pets, and maps with real money or in-game currency.

          -

          Use Different Outfits and Accessories

          -

          Another way to customize your character is to use different outfits and accessories that are available in some maps or DLCs. For example, in the Airship map, you can find different outfits and accessories in the Vault room, such as a banana suit, a cheese hat, a cowboy hat, a fedora, a flower pot, a knight helmet, a pirate hat, a plunger hat, a sombrero, and more. You can also use some outfits and accessories that are exclusive to some DLCs, such as a brain slug hat, a hamster pet, a mini crewmate pet, a wall guard outfit, and more.

          -

          You can use different outfits and accessories by clicking on them in the map or the DLC menu. You can also buy some outfits and accessories with real money or in-game currency.

          -

          Change Your Name and Settings

          -

          The final way to customize your character is to change your name and settings. You can choose any name you want for your character, as long as it is not offensive or inappropriate. You can also change your settings to suit your preferences, such as your language, your sound effects, your music volume, your chat type, your joystick size, your confirm ejects option, and more.

          -

          You can change your name and settings by clicking on the name box or the settings button in the main menu or the game menu. You can also change your name and settings during a game by clicking on the chat button or the settings button.

          -

          How to Find a Game Online or Host Your Own

          -

          Join a Public Lobby or Create a Private Code

          -

          If you want to play Among Us online with other players, you have two options: you can either join a public lobby or create a private code. A public lobby is a game that anyone can join by clicking on the "Find Game" button in the online menu. You can choose which map you want to play in and how many impostors you want to have. You can also filter the games by region and language. A private code is a game that only people who know the code can join by clicking on the "Enter Code" button in the online menu. You can create a private code by clicking on the "Host" button in the online menu and choosing which map you want to play in and how many impostors you want to have.

          -

          You can join a public lobby or create a private code by clicking on the "Online" button in the main menu. You can also join a public lobby or create a private code during a game by clicking on the "Online" button in the game menu.

          -

          Adjust the Game Options and Rules

          -

          If you want to customize your game experience, you can adjust the game options and rules before or during a game. The game options include things like the number of impostors, the kill cooldown, the emergency meetings, the voting time, the player speed, the crewmate vision, the impostor vision, the task bar updates, and more. The game rules include things like the confirm ejects, the visual tasks, the anonymous votes, and more. You can change the game options and rules by clicking on the "Customize" button in the lobby or the game menu. You can also use some presets that are available in some maps or DLCs, such as the Polus map rules or the Airship map rules.

          -

          Invite Your Friends or Play with Strangers

          -

          If you want to play Among Us with your friends or with strangers, you can do so by using the online mode or the local mode. The online mode allows you to play with other players over the internet, either by joining a public lobby or creating a private code. You can invite your friends to join your game by sending them the private code or by using a voice chat app like Discord or Skype. You can also play with strangers who join your game or who host their own games. The online mode is a great way to meet new people and have fun with them.

          -

          The local mode allows you to play with other players who are in the same WiFi network as you, either by joining a local lobby or creating a local code. You can invite your friends to join your game by sending them the local code or by telling them in person. You can also play with strangers who are in the same WiFi network as you and who join your game or who host their own games. The local mode is a great way to play with your friends who are nearby and have fun with them.

          -

          How to Communicate with Other Players

          -

          Use In-Game Text Chat or Voice Chat Apps

          -

          If you want to communicate with other players in Among Us, you have two options: you can either use the in-game text chat or use voice chat apps. The in-game text chat allows you to type messages to other players during meetings or when you are dead. You can also use some quick chat options that are available in some languages, such as "Where?", "Who?", "Why?", "How?", and more. The in-game text chat is a good way to communicate with other players who speak the same language as you and who follow the game rules.

          -

          Voice chat apps allow you to talk to other players using your microphone during or between games. You can use voice chat apps like Discord, Skype, Zoom, or others that are compatible with your device and your game. Voice chat apps are a good way to communicate with other players who are your friends or who agree to use voice chat with you. However, voice chat apps are not allowed in some games and can ruin the game experience for some players who prefer text chat.

          -

          Report Dead Bodies or Call Emergency Meetings

          -

          If you want to communicate with other players during a game, you can do so by reporting dead bodies or calling emergency meetings. You report a dead body by finding the body of another player and clicking the "Report" button, which starts a meeting. You call an emergency meeting by walking to the emergency button on the map and pressing it. You can only call emergency meetings when they are available and when there is no sabotage in progress.

          -

          When you report a dead body or call an emergency meeting, all the living players will gather in a meeting room and discuss who they think is the impostor. You can use this opportunity to share information, ask questions, accuse someone, defend yourself, lie, or tell the truth. You can also vote for someone to be ejected from the game or skip voting if you are not sure. The player with the most votes will be ejected from the game and their role will be revealed if confirm ejects is on.

          -

          Discuss and Vote for the Impostor

          -

          If you want to communicate with other players effectively during a meeting, you should follow some tips and etiquette. Here are some of them:

          -
            -
          • Be respectful and polite to other players, even if they disagree with you or accuse you.
          • -
          • Be honest and truthful if you are a crewmate, but be deceptive and cunning if you are an impostor.
          • -
          • Be logical and rational when presenting your evidence or arguments, but also be creative and imaginative when making up stories or alibis.
          • -
          • Be attentive and observant when listening to other players' statements or claims, but also be skeptical and critical when questioning them.
          • -
          • Be cooperative and helpful when working with your teammates, but also be independent and decisive when making your own choices.
          • -
          • Be confident and assertive when expressing your opinions or suspicions, but also be humble and flexible when admitting your mistakes or changing your mind.
          -

          By following these tips and etiquette, you will be able to communicate with other players in a fun and effective way. You will also be able to find the impostor or fool the crewmates, depending on your role.

          -

          Conclusion and FAQs

          -

          Among Us is a game that you should download now and join the fun. It is a game that will challenge your skills of teamwork, betrayal, communication, and deduction. It is a game that will make you laugh, scream, rage, and celebrate. It is a game that will bring you closer to your friends or introduce you to new ones. It is a game that you will never get tired of playing.

          -

          In this article, we have covered everything you need to know about Among Us, including how to play it, how to download it, how to customize your character, how to find a game online or host your own, and how to communicate with other players. We hope that this article has helped you understand and enjoy this game better. If you have any questions or comments, feel free to leave them below.

          -

          Here are some FAQs that you might have about Among Us:

          -
            -
          • Q: Is Among Us free to play?
          • -
          • A: Among Us is free to play on mobile devices, but it costs $4.99 on PC and $5.00 on Nintendo Switch. You can also buy some DLCs that include extra skins, hats, pets, and maps.
          • -
          • Q: Is Among Us cross-platform?
          • -
          • A: Yes, Among Us is cross-platform, which means that you can play with other players who are using different devices, such as PC, mobile, or console.
          • -
          • Q: Is Among Us safe for kids?
          • -
          • A: Among Us is rated 9+ on App Store and 10+ on Google Play for infrequent/mild cartoon or fantasy violence. The game does not have any graphic or realistic violence, but it does involve killing and lying. The game also does not have any profanity or sexual content, but it does allow players to chat with each other using text or voice. Therefore, parents should supervise their kids when they play this game and use parental controls to limit their exposure to inappropriate content or behavior.
          • -
          • Q: Is Among Us based on a true story?
          • -
          • A: No, Among Us is not based on a true story. It is a fictional game that is inspired by some sci-fi movies and games, such as The Thing, Alien, Mafia, Werewolf, and more.
          • -
          • Q: Is Among Us still popular?
          • -
          • A: Yes, Among Us is still popular. According to [Steam Charts], Among Us had an average of 28,000 concurrent players in May 2021. According to [Sensor Tower], Among Us had over 41 million downloads on mobile devices in April 2021. The game also has a large and active fan base on social media platforms, such as YouTube, Twitch, Twitter, Reddit, and more.
          • -

          -
          -
          \ No newline at end of file diff --git a/spaces/timpal0l/chat-ui/PRIVACY.md b/spaces/timpal0l/chat-ui/PRIVACY.md deleted file mode 100644 index 109d92ce873c4b6a362e55749d31d7a4adf0c8c6..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/PRIVACY.md +++ /dev/null @@ -1,29 +0,0 @@ -## Privacy - -In this `v0` of HuggingChat, we only store messages to display them to the user, not for any other usage (including for research or model training purposes). - -Please note that in `v0`, users are not authenticated in any way, i.e. this app doesn't have access to your HF user account even if you're logged in to huggingface.co. The app is only using an anonymous session cookie. ❗️ Warning ❗️ this means if you switch browsers or clear cookies, you will currently lose your conversations. - -In a future version, we are considering exposing a setting for users to share their conversations with the model authors (here OpenAssistant) to improve their training data and their model over time. In other terms, model authors are the custodians of the data collected by their model, even if it's hosted on our platform. - -## About available LLMs - -The goal of this app is to showcase that it is now (April 2023) possible to build an open source alternative to ChatGPT. 💪 - -For now, it's running OpenAssistant's [latest LLaMA based model](https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor) (which is one of the current best open source chat models), but the plan in the longer-term is to expose all good-quality chat models from the Hub. - -## Technical details - -This app is running in a [Space](https://huggingface.co/docs/hub/spaces-overview), which entails that the code for this UI is open source: https://huggingface.co/spaces/huggingchat/chat-ui/tree/main. -The inference backend is running [text-generation-inference](https://github.com/huggingface/text-generation-inference) on HuggingFace's Inference API infrastructure. - -It is therefore possible to deploy a copy of this app to a Space and customize it (swap model, add some UI elements, or store user messages according to your own Terms and conditions) - -We welcome any feedback on this app: please participate to the public discussion at https://huggingface.co/spaces/huggingchat/chat-ui/discussions - - - -## Coming soon - -- LLM watermarking -- User setting to share conversations with model authors diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Crack GBDeflicker 2 4 6 VERIFIED.md b/spaces/tioseFevbu/cartoon-converter/scripts/Crack GBDeflicker 2 4 6 VERIFIED.md deleted file mode 100644 index 15584e74d0b4936d6335849e80366f9f303c2ee5..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Crack GBDeflicker 2 4 6 VERIFIED.md +++ /dev/null @@ -1,21 +0,0 @@ -
          -

          How to Remove Flicker from Time-Lapse Videos with GBDeflicker 2 4 6

          -

          If you are into time-lapse photography, you might have encountered the problem of flickering in your videos. Flickering is caused by frame-by-frame variations in lighting or exposure, which create noticeable brightness fluctuations. Flickering can ruin your time-lapse video and make it look unprofessional.

          -

          Crack GBDeflicker 2 4 6


          DOWNLOAD ✓✓✓ https://urlcod.com/2uHxUH



          -

          Fortunately, there is a solution to this problem: GBDeflicker 2 4 6. GBDeflicker is a software that analyzes your video and smooths out the abrupt changes in luminance to create a flicker-free video. GBDeflicker is available as an Adobe compatible plug-in for After Effects or as a standalone Windows application. In this article, we will show you how to use GBDeflicker 2 4 6 to remove flicker from your time-lapse videos.

          -

          Step 1: Download and Install GBDeflicker 2 4 6

          -

          You can download GBDeflicker 2 4 6 from the official website of Granite Bay Software[^1^] [^2^]. There are two versions of GBDeflicker: the plug-in version and the standalone application version. The plug-in version works with After Effects and can deflicker any input source that After Effects can load. The standalone application version works with image sequences of JPG, BMP, GIF, PNG, CR2, and TIFF formats. It cannot process movie files, but it can optionally make a preview movie of your image sequence.

          -

          Depending on which version you prefer, download the appropriate installer file and run it on your computer. The installer will copy GBDeflicker and its supporting files to the folder c:/Program Files/Granite Bay Software/GBDeflicker on your hard disk[^3^]. You will also need to activate GBDeflicker with a license key, which you can purchase from the website, or request a free 30-day trial key.

          -

          -

          Step 2: Import Your Time-Lapse Video into GBDeflicker

          -

          If you are using the plug-in version of GBDeflicker, you need to import your time-lapse video into After Effects and apply the GBDeflicker effect to it. You can find the GBDeflicker effect under the Effect menu > Granite Bay Software > GBDeflicker.

          -

          If you are using the standalone application version of GBDeflicker, you need to select the folder that contains your image sequence and click on the Open button. GBDeflicker will automatically detect the frame rate and resolution of your image sequence and display it in the preview window.

          -

          Step 3: Adjust the Settings of GBDeflicker

          -

          GBDeflicker has a user-friendly interface that allows you to adjust various settings to remove flicker from your video. You can see the input and output graphs that show the luminance values of each frame and how they are smoothed by GBDeflicker. You can also see a histogram that shows the distribution of luminance values and alerts you if there are any clipping issues.

          -

          The main settings that you need to adjust are:

          -
            -
          • Smoothing: This controls how much GBDeflicker smooths out the luminance changes between frames. A higher value means more smoothing and less flicker, but also less detail and contrast. A lower value means less smoothing and more flicker, but also more detail and contrast. You need to find a balance that suits your video.
          • -
          • Correction: This controls how GBDeflicker corrects the luminance values of each frame. You can choose between two algorithms: Smoothing or Keyframe. The Smoothing algorithm applies a constant correction factor to each frame based on the smoothing value. The Keyframe algorithm allows you to set keyframes at specific frames and adjust the correction factor manually for each keyframe. The Keyframe algorithm gives you more control over the correction process, but it also requires more time and effort.
          • -
          • Gamma: This controls how GBDeflicker applies gamma correction to your video. Gamma correction adjusts the brightness levels of your video to match how they are perceived by the human eye, so the picture keeps natural-looking mid-tones after the luminance corrections are applied. A minimal sketch of the luminance-smoothing idea behind these settings is shown below.
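
          The Smoothing and Correction settings both come down to estimating a brightness curve across the sequence and nudging each frame toward it. GBDeflicker's exact algorithm is not documented in this article, so the snippet below is only a minimal sketch of that idea in Python, assuming a folder of JPEG frames plus NumPy and Pillow; the frames/*.jpg pattern, the window size, and the output folder are illustrative placeholders rather than anything GBDeflicker itself uses.

```python
import glob
import os

import numpy as np
from PIL import Image


def deflicker(pattern="frames/*.jpg", window=5, out_dir="deflickered"):
    """Reduce flicker by smoothing per-frame mean luminance (window should be odd)."""
    os.makedirs(out_dir, exist_ok=True)
    paths = sorted(glob.glob(pattern))
    frames = [np.asarray(Image.open(p)).astype(np.float32) for p in paths]
    # Flicker shows up as frame-to-frame jumps in this brightness curve.
    luma = np.array([frame.mean() for frame in frames])
    # Moving average of the curve: a larger window means stronger smoothing.
    pad = window // 2
    padded = np.pad(luma, pad, mode="edge")
    kernel = np.ones(window) / window
    smoothed = np.convolve(padded, kernel, mode="valid")
    for path, frame, orig, target in zip(paths, frames, luma, smoothed):
        gain = target / max(float(orig), 1e-6)   # per-frame correction factor
        corrected = np.clip(frame * gain, 0, 255).astype(np.uint8)
        Image.fromarray(corrected).save(os.path.join(out_dir, os.path.basename(path)))


if __name__ == "__main__":
    deflicker()
```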

            -
            -
            \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/mercurial.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/mercurial.py deleted file mode 100644 index 2a005e0aff2df95f01aff4706b48af5da0c81db1..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/mercurial.py +++ /dev/null @@ -1,163 +0,0 @@ -import configparser -import logging -import os -from typing import List, Optional, Tuple - -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.utils.misc import HiddenText, display_path -from pip._internal.utils.subprocess import make_command -from pip._internal.utils.urls import path_to_url -from pip._internal.vcs.versioncontrol import ( - RevOptions, - VersionControl, - find_path_to_project_root_from_repo_root, - vcs, -) - -logger = logging.getLogger(__name__) - - -class Mercurial(VersionControl): - name = "hg" - dirname = ".hg" - repo_name = "clone" - schemes = ( - "hg+file", - "hg+http", - "hg+https", - "hg+ssh", - "hg+static-http", - ) - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return [rev] - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info( - "Cloning hg %s%s to %s", - url, - rev_display, - display_path(dest), - ) - if verbosity <= 0: - flags: Tuple[str, ...] = ("--quiet",) - elif verbosity == 1: - flags = () - elif verbosity == 2: - flags = ("--verbose",) - else: - flags = ("--verbose", "--debug") - self.run_command(make_command("clone", "--noupdate", *flags, url, dest)) - self.run_command( - make_command("update", *flags, rev_options.to_args()), - cwd=dest, - ) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - repo_config = os.path.join(dest, self.dirname, "hgrc") - config = configparser.RawConfigParser() - try: - config.read(repo_config) - config.set("paths", "default", url.secret) - with open(repo_config, "w") as config_file: - config.write(config_file) - except (OSError, configparser.NoSectionError) as exc: - logger.warning("Could not switch Mercurial repository to %s: %s", url, exc) - else: - cmd_args = make_command("update", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - self.run_command(["pull", "-q"], cwd=dest) - cmd_args = make_command("update", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - @classmethod - def get_remote_url(cls, location: str) -> str: - url = cls.run_command( - ["showconfig", "paths.default"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - if cls._is_local_repository(url): - url = path_to_url(url) - return url.strip() - - @classmethod - def get_revision(cls, location: str) -> str: - """ - Return the repository-local changeset revision number, as an integer. 
- """ - current_revision = cls.run_command( - ["parents", "--template={rev}"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - return current_revision - - @classmethod - def get_requirement_revision(cls, location: str) -> str: - """ - Return the changeset identification hash, as a 40-character - hexadecimal string - """ - current_rev_hash = cls.run_command( - ["parents", "--template={node}"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - return current_rev_hash - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """Always assume the versions don't match""" - return False - - @classmethod - def get_subdirectory(cls, location: str) -> Optional[str]: - """ - Return the path to Python project root, relative to the repo root. - Return None if the project root is in the repo root. - """ - # find the repo root - repo_root = cls.run_command( - ["root"], show_stdout=False, stdout_only=True, cwd=location - ).strip() - if not os.path.isabs(repo_root): - repo_root = os.path.abspath(os.path.join(location, repo_root)) - return find_path_to_project_root_from_repo_root(location, repo_root) - - @classmethod - def get_repository_root(cls, location: str) -> Optional[str]: - loc = super().get_repository_root(location) - if loc: - return loc - try: - r = cls.run_command( - ["root"], - cwd=location, - show_stdout=False, - stdout_only=True, - on_returncode="raise", - log_failed_cmd=False, - ) - except BadCommand: - logger.debug( - "could not determine if %s is under hg control " - "because hg is not available", - location, - ) - return None - except InstallationError: - return None - return os.path.normpath(r.rstrip("\r\n")) - - -vcs.register(Mercurial) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py deleted file mode 100644 index 3bcbdb71d1639b5cac8ff9c4461e1e36f6f4bb17..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py +++ /dev/null @@ -1,98 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .chardistribution import SJISDistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .enums import MachineState, ProbingState -from .jpcntx import SJISContextAnalysis -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import SJIS_SM_MODEL - - -class SJISProber(MultiByteCharSetProber): - def __init__(self): - super().__init__() - self.coding_sm = CodingStateMachine(SJIS_SM_MODEL) - self.distribution_analyzer = SJISDistributionAnalysis() - self.context_analyzer = SJISContextAnalysis() - self.reset() - - def reset(self): - super().reset() - self.context_analyzer.reset() - - @property - def charset_name(self): - return self.context_analyzer.charset_name - - @property - def language(self): - return "Japanese" - - def feed(self, byte_str): - for i, byte in enumerate(byte_str): - coding_state = self.coding_sm.next_state(byte) - if coding_state == MachineState.ERROR: - self.logger.debug( - "%s %s prober hit error at byte %s", - self.charset_name, - self.language, - i, - ) - self._state = ProbingState.NOT_ME - break - if coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - break - if coding_state == MachineState.START: - char_len = self.coding_sm.get_current_charlen() - if i == 0: - self._last_char[1] = byte - self.context_analyzer.feed( - self._last_char[2 - char_len :], char_len - ) - self.distribution_analyzer.feed(self._last_char, char_len) - else: - self.context_analyzer.feed( - byte_str[i + 1 - char_len : i + 3 - char_len], char_len - ) - self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len) - - self._last_char[0] = byte_str[-1] - - if self.state == ProbingState.DETECTING: - if self.context_analyzer.got_enough_data() and ( - self.get_confidence() > self.SHORTCUT_THRESHOLD - ): - self._state = ProbingState.FOUND_IT - - return self.state - - def get_confidence(self): - context_conf = self.context_analyzer.get_confidence() - distrib_conf = self.distribution_analyzer.get_confidence() - return max(context_conf, distrib_conf) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/layout.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/layout.py deleted file mode 100644 index 1d704652eef10a65a439e675107eacd3149e08f1..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/layout.py +++ /dev/null @@ -1,445 +0,0 @@ -from abc import ABC, abstractmethod -from itertools import islice -from operator import itemgetter -from threading import RLock -from typing import ( - TYPE_CHECKING, - Dict, - Iterable, - List, - NamedTuple, - Optional, - Sequence, - Tuple, - Union, -) - -from ._ratio import ratio_resolve -from .align import Align -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .highlighter import ReprHighlighter -from .panel import Panel -from .pretty import Pretty -from .repr import rich_repr, Result -from .region import Region -from .segment import Segment -from .style import StyleType - -if TYPE_CHECKING: - from pip._vendor.rich.tree import Tree - - -class LayoutRender(NamedTuple): - """An individual layout render.""" - - region: Region 
- render: List[List[Segment]] - - -RegionMap = Dict["Layout", Region] -RenderMap = Dict["Layout", LayoutRender] - - -class LayoutError(Exception): - """Layout related error.""" - - -class NoSplitter(LayoutError): - """Requested splitter does not exist.""" - - -class _Placeholder: - """An internal renderable used as a Layout placeholder.""" - - highlighter = ReprHighlighter() - - def __init__(self, layout: "Layout", style: StyleType = "") -> None: - self.layout = layout - self.style = style - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - width = options.max_width - height = options.height or options.size.height - layout = self.layout - title = ( - f"{layout.name!r} ({width} x {height})" - if layout.name - else f"({width} x {height})" - ) - yield Panel( - Align.center(Pretty(layout), vertical="middle"), - style=self.style, - title=self.highlighter(title), - border_style="blue", - height=height, - ) - - -class Splitter(ABC): - """Base class for a splitter.""" - - name: str = "" - - @abstractmethod - def get_tree_icon(self) -> str: - """Get the icon (emoji) used in layout.tree""" - - @abstractmethod - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - """Divide a region amongst several child layouts. - - Args: - children (Sequence(Layout)): A number of child layouts. - region (Region): A rectangular region to divide. - """ - - -class RowSplitter(Splitter): - """Split a layout region in to rows.""" - - name = "row" - - def get_tree_icon(self) -> str: - return "[layout.tree.row]⬌" - - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - x, y, width, height = region - render_widths = ratio_resolve(width, children) - offset = 0 - _Region = Region - for child, child_width in zip(children, render_widths): - yield child, _Region(x + offset, y, child_width, height) - offset += child_width - - -class ColumnSplitter(Splitter): - """Split a layout region in to columns.""" - - name = "column" - - def get_tree_icon(self) -> str: - return "[layout.tree.column]⬍" - - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - x, y, width, height = region - render_heights = ratio_resolve(height, children) - offset = 0 - _Region = Region - for child, child_height in zip(children, render_heights): - yield child, _Region(x, y + offset, width, child_height) - offset += child_height - - -@rich_repr -class Layout: - """A renderable to divide a fixed height in to rows or columns. - - Args: - renderable (RenderableType, optional): Renderable content, or None for placeholder. Defaults to None. - name (str, optional): Optional identifier for Layout. Defaults to None. - size (int, optional): Optional fixed size of layout. Defaults to None. - minimum_size (int, optional): Minimum size of layout. Defaults to 1. - ratio (int, optional): Optional ratio for flexible layout. Defaults to 1. - visible (bool, optional): Visibility of layout. Defaults to True. 
- """ - - splitters = {"row": RowSplitter, "column": ColumnSplitter} - - def __init__( - self, - renderable: Optional[RenderableType] = None, - *, - name: Optional[str] = None, - size: Optional[int] = None, - minimum_size: int = 1, - ratio: int = 1, - visible: bool = True, - height: Optional[int] = None, - ) -> None: - self._renderable = renderable or _Placeholder(self) - self.size = size - self.minimum_size = minimum_size - self.ratio = ratio - self.name = name - self.visible = visible - self.height = height - self.splitter: Splitter = self.splitters["column"]() - self._children: List[Layout] = [] - self._render_map: RenderMap = {} - self._lock = RLock() - - def __rich_repr__(self) -> Result: - yield "name", self.name, None - yield "size", self.size, None - yield "minimum_size", self.minimum_size, 1 - yield "ratio", self.ratio, 1 - - @property - def renderable(self) -> RenderableType: - """Layout renderable.""" - return self if self._children else self._renderable - - @property - def children(self) -> List["Layout"]: - """Gets (visible) layout children.""" - return [child for child in self._children if child.visible] - - @property - def map(self) -> RenderMap: - """Get a map of the last render.""" - return self._render_map - - def get(self, name: str) -> Optional["Layout"]: - """Get a named layout, or None if it doesn't exist. - - Args: - name (str): Name of layout. - - Returns: - Optional[Layout]: Layout instance or None if no layout was found. - """ - if self.name == name: - return self - else: - for child in self._children: - named_layout = child.get(name) - if named_layout is not None: - return named_layout - return None - - def __getitem__(self, name: str) -> "Layout": - layout = self.get(name) - if layout is None: - raise KeyError(f"No layout with name {name!r}") - return layout - - @property - def tree(self) -> "Tree": - """Get a tree renderable to show layout structure.""" - from pip._vendor.rich.styled import Styled - from pip._vendor.rich.table import Table - from pip._vendor.rich.tree import Tree - - def summary(layout: "Layout") -> Table: - - icon = layout.splitter.get_tree_icon() - - table = Table.grid(padding=(0, 1, 0, 0)) - - text: RenderableType = ( - Pretty(layout) if layout.visible else Styled(Pretty(layout), "dim") - ) - table.add_row(icon, text) - _summary = table - return _summary - - layout = self - tree = Tree( - summary(layout), - guide_style=f"layout.tree.{layout.splitter.name}", - highlight=True, - ) - - def recurse(tree: "Tree", layout: "Layout") -> None: - for child in layout._children: - recurse( - tree.add( - summary(child), - guide_style=f"layout.tree.{child.splitter.name}", - ), - child, - ) - - recurse(tree, self) - return tree - - def split( - self, - *layouts: Union["Layout", RenderableType], - splitter: Union[Splitter, str] = "column", - ) -> None: - """Split the layout in to multiple sub-layouts. - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - splitter (Union[Splitter, str]): Splitter instance or name of splitter. - """ - _layouts = [ - layout if isinstance(layout, Layout) else Layout(layout) - for layout in layouts - ] - try: - self.splitter = ( - splitter - if isinstance(splitter, Splitter) - else self.splitters[splitter]() - ) - except KeyError: - raise NoSplitter(f"No splitter called {splitter!r}") - self._children[:] = _layouts - - def add_split(self, *layouts: Union["Layout", RenderableType]) -> None: - """Add a new layout(s) to existing split. 
- - Args: - *layouts (Union[Layout, RenderableType]): Positional arguments should be renderables or (sub) Layout instances. - - """ - _layouts = ( - layout if isinstance(layout, Layout) else Layout(layout) - for layout in layouts - ) - self._children.extend(_layouts) - - def split_row(self, *layouts: Union["Layout", RenderableType]) -> None: - """Split the layout in to a row (layouts side by side). - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - """ - self.split(*layouts, splitter="row") - - def split_column(self, *layouts: Union["Layout", RenderableType]) -> None: - """Split the layout in to a column (layouts stacked on top of each other). - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - """ - self.split(*layouts, splitter="column") - - def unsplit(self) -> None: - """Reset splits to initial state.""" - del self._children[:] - - def update(self, renderable: RenderableType) -> None: - """Update renderable. - - Args: - renderable (RenderableType): New renderable object. - """ - with self._lock: - self._renderable = renderable - - def refresh_screen(self, console: "Console", layout_name: str) -> None: - """Refresh a sub-layout. - - Args: - console (Console): Console instance where Layout is to be rendered. - layout_name (str): Name of layout. - """ - with self._lock: - layout = self[layout_name] - region, _lines = self._render_map[layout] - (x, y, width, height) = region - lines = console.render_lines( - layout, console.options.update_dimensions(width, height) - ) - self._render_map[layout] = LayoutRender(region, lines) - console.update_screen_lines(lines, x, y) - - def _make_region_map(self, width: int, height: int) -> RegionMap: - """Create a dict that maps layout on to Region.""" - stack: List[Tuple[Layout, Region]] = [(self, Region(0, 0, width, height))] - push = stack.append - pop = stack.pop - layout_regions: List[Tuple[Layout, Region]] = [] - append_layout_region = layout_regions.append - while stack: - append_layout_region(pop()) - layout, region = layout_regions[-1] - children = layout.children - if children: - for child_and_region in layout.splitter.divide(children, region): - push(child_and_region) - - region_map = { - layout: region - for layout, region in sorted(layout_regions, key=itemgetter(1)) - } - return region_map - - def render(self, console: Console, options: ConsoleOptions) -> RenderMap: - """Render the sub_layouts. - - Args: - console (Console): Console instance. - options (ConsoleOptions): Console options. 
- - Returns: - RenderMap: A dict that maps Layout on to a tuple of Region, lines - """ - render_width = options.max_width - render_height = options.height or console.height - region_map = self._make_region_map(render_width, render_height) - layout_regions = [ - (layout, region) - for layout, region in region_map.items() - if not layout.children - ] - render_map: Dict["Layout", "LayoutRender"] = {} - render_lines = console.render_lines - update_dimensions = options.update_dimensions - - for layout, region in layout_regions: - lines = render_lines( - layout.renderable, update_dimensions(region.width, region.height) - ) - render_map[layout] = LayoutRender(region, lines) - return render_map - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - with self._lock: - width = options.max_width or console.width - height = options.height or console.height - render_map = self.render(console, options.update_dimensions(width, height)) - self._render_map = render_map - layout_lines: List[List[Segment]] = [[] for _ in range(height)] - _islice = islice - for (region, lines) in render_map.values(): - _x, y, _layout_width, layout_height = region - for row, line in zip( - _islice(layout_lines, y, y + layout_height), lines - ): - row.extend(line) - - new_line = Segment.line() - for layout_row in layout_lines: - yield from layout_row - yield new_line - - -if __name__ == "__main__": - from pip._vendor.rich.console import Console - - console = Console() - layout = Layout() - - layout.split_column( - Layout(name="header", size=3), - Layout(ratio=1, name="main"), - Layout(size=10, name="footer"), - ) - - layout["main"].split_row(Layout(name="side"), Layout(name="body", ratio=2)) - - layout["body"].split_row(Layout(name="content", ratio=2), Layout(name="s2")) - - layout["s2"].split_column( - Layout(name="top"), Layout(name="middle"), Layout(name="bottom") - ) - - layout["side"].split_column(Layout(layout.tree, name="left1"), Layout(name="left2")) - - layout["content"].update("foo") - - console.print(layout) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py deleted file mode 100644 index 80c25bb8fde7844c994bfc1f4ae1a2d960cbf3d6..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py +++ /dev/null @@ -1,83 +0,0 @@ -from mmcv.cnn.bricks import build_plugin_layer -from mmcv.runner import force_fp32 - -from mmdet.models.builder import ROI_EXTRACTORS -from .base_roi_extractor import BaseRoIExtractor - - -@ROI_EXTRACTORS.register_module() -class GenericRoIExtractor(BaseRoIExtractor): - """Extract RoI features from all level feature maps levels. - - This is the implementation of `A novel Region of Interest Extraction Layer - for Instance Segmentation `_. - - Args: - aggregation (str): The method to aggregate multiple feature maps. - Options are 'sum', 'concat'. Default: 'sum'. - pre_cfg (dict | None): Specify pre-processing modules. Default: None. - post_cfg (dict | None): Specify post-processing modules. Default: None. - kwargs (keyword arguments): Arguments that are the same - as :class:`BaseRoIExtractor`. 
- """ - - def __init__(self, - aggregation='sum', - pre_cfg=None, - post_cfg=None, - **kwargs): - super(GenericRoIExtractor, self).__init__(**kwargs) - - assert aggregation in ['sum', 'concat'] - - self.aggregation = aggregation - self.with_post = post_cfg is not None - self.with_pre = pre_cfg is not None - # build pre/post processing modules - if self.with_post: - self.post_module = build_plugin_layer(post_cfg, '_post_module')[1] - if self.with_pre: - self.pre_module = build_plugin_layer(pre_cfg, '_pre_module')[1] - - @force_fp32(apply_to=('feats', ), out_fp16=True) - def forward(self, feats, rois, roi_scale_factor=None): - """Forward function.""" - if len(feats) == 1: - return self.roi_layers[0](feats[0], rois) - - out_size = self.roi_layers[0].output_size - num_levels = len(feats) - roi_feats = feats[0].new_zeros( - rois.size(0), self.out_channels, *out_size) - - # some times rois is an empty tensor - if roi_feats.shape[0] == 0: - return roi_feats - - if roi_scale_factor is not None: - rois = self.roi_rescale(rois, roi_scale_factor) - - # mark the starting channels for concat mode - start_channels = 0 - for i in range(num_levels): - roi_feats_t = self.roi_layers[i](feats[i], rois) - end_channels = start_channels + roi_feats_t.size(1) - if self.with_pre: - # apply pre-processing to a RoI extracted from each layer - roi_feats_t = self.pre_module(roi_feats_t) - if self.aggregation == 'sum': - # and sum them all - roi_feats += roi_feats_t - else: - # and concat them along channel dimension - roi_feats[:, start_channels:end_channels] = roi_feats_t - # update channels starting position - start_channels = end_channels - # check if concat channels match at the end - if self.aggregation == 'concat': - assert start_channels == self.out_channels - - if self.with_post: - # apply post-processing before return the result - roi_feats = self.post_module(roi_feats) - return roi_feats diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/data/lsun.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/data/lsun.py deleted file mode 100644 index 6256e45715ff0b57c53f985594d27cbbbff0e68e..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/data/lsun.py +++ /dev/null @@ -1,92 +0,0 @@ -import os -import numpy as np -import PIL -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms - - -class LSUNBase(Dataset): - def __init__(self, - txt_file, - data_root, - size=None, - interpolation="bicubic", - flip_p=0.5 - ): - self.data_paths = txt_file - self.data_root = data_root - with open(self.data_paths, "r") as f: - self.image_paths = f.read().splitlines() - self._length = len(self.image_paths) - self.labels = { - "relative_file_path_": [l for l in self.image_paths], - "file_path_": [os.path.join(self.data_root, l) - for l in self.image_paths], - } - - self.size = size - self.interpolation = {"linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - }[interpolation] - self.flip = transforms.RandomHorizontalFlip(p=flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = dict((k, self.labels[k][i]) for k in self.labels) - image = Image.open(example["file_path_"]) - if not image.mode == "RGB": - image = image.convert("RGB") - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - crop = min(img.shape[0], img.shape[1]) - h, w, = img.shape[0], img.shape[1] - img = 
img[(h - crop) // 2:(h + crop) // 2, - (w - crop) // 2:(w + crop) // 2] - - image = Image.fromarray(img) - if self.size is not None: - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip(image) - image = np.array(image).astype(np.uint8) - example["image"] = (image / 127.5 - 1.0).astype(np.float32) - return example - - -class LSUNChurchesTrain(LSUNBase): - def __init__(self, **kwargs): - super().__init__(txt_file="data/lsun/church_outdoor_train.txt", data_root="data/lsun/churches", **kwargs) - - -class LSUNChurchesValidation(LSUNBase): - def __init__(self, flip_p=0., **kwargs): - super().__init__(txt_file="data/lsun/church_outdoor_val.txt", data_root="data/lsun/churches", - flip_p=flip_p, **kwargs) - - -class LSUNBedroomsTrain(LSUNBase): - def __init__(self, **kwargs): - super().__init__(txt_file="data/lsun/bedrooms_train.txt", data_root="data/lsun/bedrooms", **kwargs) - - -class LSUNBedroomsValidation(LSUNBase): - def __init__(self, flip_p=0.0, **kwargs): - super().__init__(txt_file="data/lsun/bedrooms_val.txt", data_root="data/lsun/bedrooms", - flip_p=flip_p, **kwargs) - - -class LSUNCatsTrain(LSUNBase): - def __init__(self, **kwargs): - super().__init__(txt_file="data/lsun/cat_train.txt", data_root="data/lsun/cats", **kwargs) - - -class LSUNCatsValidation(LSUNBase): - def __init__(self, flip_p=0., **kwargs): - super().__init__(txt_file="data/lsun/cat_val.txt", data_root="data/lsun/cats", - flip_p=flip_p, **kwargs) diff --git a/spaces/uSerNameDDHL/bingo/src/pages/api/image.ts b/spaces/uSerNameDDHL/bingo/src/pages/api/image.ts deleted file mode 100644 index fbc0c8def432ba212d27347471670d3b6202463d..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/src/pages/api/image.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' -import { createImage } from '@/lib/bots/bing/utils' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const { prompt, id } = req.query - if (!prompt) { - return res.json({ - result: { - value: 'Image', - message: 'No Prompt' - } - }) - } - try { - const headers = createHeaders(req.cookies, { - IMAGE_BING_COOKIE: process.env.IMAGE_BING_COOKIE - }, 'image') - - debug('headers', headers) - const response = await createImage(String(prompt), String(id), { - ...headers, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - }) - res.writeHead(200, { - 'Content-Type': 'text/plain; charset=UTF-8', - }) - return res.end(response) - } catch (e) { - return res.json({ - result: { - value: 'Error', - message: `${e}` - } - }) - } -} diff --git a/spaces/umitgunduz/news-extractor/src/download.py b/spaces/umitgunduz/news-extractor/src/download.py deleted file mode 100644 index f3008e4422d93c0a67a5132c4e81d5450336f501..0000000000000000000000000000000000000000 --- a/spaces/umitgunduz/news-extractor/src/download.py +++ /dev/null @@ -1,130 +0,0 @@ -import glob -import json -import logging -import os -import ssl -from http import HTTPStatus - -import requests -from progress.bar import Bar - -logging.basicConfig(level=logging.INFO) -ssl._create_default_https_context = ssl._create_unverified_context - - -class NewsHtmlDowloader: - """ - Haber sitelerindeki HTML içeriklerini indirmek ve kaydetmek için kullanılan sınıf. - - Methods: - __init__(): NewsHtmlDowloader sınıfını oluşturan yapıcı metot. 
- save_html(name, id, raw_html_path, html): Belirtilen site ve id ile birlikte verilen HTML içeriğini bir dosyaya kaydeder. - download(url): Belirtilen URL'den HTML içeriğini indirir. - run(name, meta_path, raw_html_path): Belirtilen site adıyla ilişkili meta dosyasını okuyarak HTML içeriklerini indirir ve kaydeder. - """ - - def __init__(self): - """ - NewsHtmlDowloader sınıfını oluşturan yapıcı metot. - - Returns: - None - """ - logging.debug('NewsHtmlDowloader Sınıfı oluşturuldu') - - @staticmethod - def save_html(name, id, raw_html_path, html): - """ - Belirtilen site ve id ile birlikte verilen HTML içeriğini bir dosyaya kaydeder. - - Args: - name (str): Kaydedilecek sitenin adı. - id (str): Kaydedilecek dosyanın id'si. - raw_html_path (str): HTML dosyalarının kaydedileceği dizin yolunu belirtir. - html (str): Kaydedilecek HTML içeriği. - - Returns: - None - - Raises: - IOError: Dosya veya dizin oluşturma hatası durumunda oluşabilir. - """ - file_dir = f"{raw_html_path}/{name}" - if not os.path.exists(file_dir): - os.makedirs(file_dir) - file_path = f"{file_dir}/{id}.html" - with open(file_path, 'w', encoding='utf-8') as output: - output.write(html) - - @staticmethod - def download(url): - """ - Belirtilen URL'den HTML içeriğini indirir. - - Args: - url (str): İndirilecek URL. - - Returns: - str: İndirilen HTML içeriği. - - Raises: - Exception: İndirme başarısız olduğunda fırlatılır. - """ - resp = requests.get(url, headers={'User-Agent': 'Mozilla'}) - if resp.status_code == HTTPStatus.OK: - html = resp.text - # if resp.encoding != "utf-8": - # html = html.encode(resp.encoding).decode("utf-8") - else: - raise Exception( - f"Failed Download: Status Code: {resp.status_code}") - return html - - def run(self, name, meta_path, raw_html_path): - """ - Belirtilen site adıyla ilişkili meta dosyasını okuyarak HTML içeriklerini indirir ve kaydeder. - - Args: - name (str): Site adı. - meta_path (str): Meta dosyalarının bulunduğu dizin yolunu belirtir. - raw_html_path (str): HTML dosyalarının kaydedileceği dizin yolunu belirtir. - - Returns: - None - """ - lfs = glob.glob(f"{meta_path}/{name}.json") - for lf in lfs: - with open(lf, 'r') as json_file: - links = json.load(json_file) - _max = len(links) - - logging.info(f"{name} html dosyaları inidirlmeye başlandı.") - with Bar(f'{name} Download Links', max=_max, - suffix='%(percent).1f%% | %(index)d | %(remaining)d | %(max)d | %(eta)ds') as bar: - for link in links: - _id = link["id"] - _source = link["source"] - _url = link["url"] - html = self.download(_url) - self.save_html(name, _id, raw_html_path, html) - bar.next() - bar.finish() - logging.info(f"{name} dosyası indirme işlemi tamamlandı.") - - -if __name__ == '__main__': - """ - Uygulamanın ana çalıştırma noktası. Belirtilen sitelerin HTML içeriklerini indirir ve kaydeder. 
- - Returns: - None - """ - downloader = NewsHtmlDowloader() - sites = ["aa", "aksam", "cnnturk", "cumhuriyet", "ensonhaber", "haber7", "haberglobal", "haberler", "haberturk", - "hurriyet", "milliyet", "ntv", "trthaber"] - _meta_path = "../data/meta" - _raw_html_path = "../data/html/raw" - for _name in sites: - downloader.run(name=_name, - meta_path=_meta_path, - raw_html_path=_raw_html_path) diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Audicodecalculatorauz1z1.md b/spaces/usbethFlerru/sovits-modelsV2/example/Audicodecalculatorauz1z1.md deleted file mode 100644 index bbac2b83773b138f02a74154ec740f880ed7671b..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Audicodecalculatorauz1z1.md +++ /dev/null @@ -1,7 +0,0 @@ -
            -

            This is a very simple tool. You press a button, and a date is displayed. The goal is to show, over time, how many times a button has been pressed.

            -

            audicodecalculatorauz1z1


            Download File ✓✓✓ https://urlcod.com/2uyUU9



            | field | type | label | description | required |
            | --- | --- | --- | --- | --- |
            | button | integer | id | 0-9 | yes |
            | audio | array | recordings | a list of audio files, with the extension 'x-aac' or '.mp3' | yes |
            | start | date | start date | | yes |
            | end | date | end date | | yes |
            | output | text | outputname | audio name in formats aac and mp3 | yes |
            | subdirectory | text | audiodef | audio_directory | no |
            -

            These options allow for configuration of the checker.

            -input: wav or aiff file to parse (use '-' for stdin).
            -help: print this list of options.
            -output: output the detected auz1+ z1 code.
            -quiet: be quiet, print only the detected auz1+ z1 code.
            -verbose: show some extra information.
            -version: show program's version.
            -

            Before parsing, the file is read into memory. The data is then examined and separated into blocks of the required size. Blocks that are neither auz1 code nor a standard ascii file header are ignored. If possible, the largest block of the data contains the auz1+ z1 code.
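
            The passage above is easy to mimic in outline: read the file into memory, cut it into fixed-size blocks, discard blocks that do not look like code material, and keep the largest candidate. The Python sketch below shows only that outline; the block size, the code pattern, and the find_code name are illustrative assumptions, since the article does not document the real auz1+ z1 format.

```python
import re
import sys

# Illustrative placeholders: the real block size and code pattern are not
# documented in the article, so both values here are assumptions.
BLOCK_SIZE = 64
CODE_PATTERN = re.compile(rb"[0-9A-Z]{8,}")  # hypothetical shape of a code run


def find_code(path):
    """Read the whole file, split it into blocks, and return the largest code-like run."""
    with open(path, "rb") as f:
        data = f.read()  # the file is read into memory before parsing
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    candidates = []
    for block in blocks:
        match = CODE_PATTERN.search(block)
        if match:  # blocks with no code-like run are ignored
            candidates.append(match.group())
    if not candidates:
        return None
    return max(candidates, key=len).decode("ascii")  # the largest candidate wins


if __name__ == "__main__":
    print(find_code(sys.argv[1]))
```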

            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/callbacks/comet.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/callbacks/comet.py deleted file mode 100644 index 94aeb8f64c8abc39564d1cac25c3c6eb55ad3dce..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/callbacks/comet.py +++ /dev/null @@ -1,368 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import os -from pathlib import Path - -from ultralytics.yolo.utils import LOGGER, RANK, TESTS_RUNNING, ops -from ultralytics.yolo.utils.torch_utils import model_info_for_loggers - -try: - import comet_ml - - assert not TESTS_RUNNING # do not log pytest - assert hasattr(comet_ml, '__version__') # verify package is not directory -except (ImportError, AssertionError): - comet_ml = None - -# Ensures certain logging functions only run for supported tasks -COMET_SUPPORTED_TASKS = ['detect'] - -# Names of plots created by YOLOv8 that are logged to Comet -EVALUATION_PLOT_NAMES = 'F1_curve', 'P_curve', 'R_curve', 'PR_curve', 'confusion_matrix' -LABEL_PLOT_NAMES = 'labels', 'labels_correlogram' - -_comet_image_prediction_count = 0 - - -def _get_comet_mode(): - return os.getenv('COMET_MODE', 'online') - - -def _get_comet_model_name(): - return os.getenv('COMET_MODEL_NAME', 'YOLOv8') - - -def _get_eval_batch_logging_interval(): - return int(os.getenv('COMET_EVAL_BATCH_LOGGING_INTERVAL', 1)) - - -def _get_max_image_predictions_to_log(): - return int(os.getenv('COMET_MAX_IMAGE_PREDICTIONS', 100)) - - -def _scale_confidence_score(score): - scale = float(os.getenv('COMET_MAX_CONFIDENCE_SCORE', 100.0)) - return score * scale - - -def _should_log_confusion_matrix(): - return os.getenv('COMET_EVAL_LOG_CONFUSION_MATRIX', 'false').lower() == 'true' - - -def _should_log_image_predictions(): - return os.getenv('COMET_EVAL_LOG_IMAGE_PREDICTIONS', 'true').lower() == 'true' - - -def _get_experiment_type(mode, project_name): - """Return an experiment based on mode and project name.""" - if mode == 'offline': - return comet_ml.OfflineExperiment(project_name=project_name) - - return comet_ml.Experiment(project_name=project_name) - - -def _create_experiment(args): - """Ensures that the experiment object is only created in a single process during distributed training.""" - if RANK not in (-1, 0): - return - try: - comet_mode = _get_comet_mode() - _project_name = os.getenv('COMET_PROJECT_NAME', args.project) - experiment = _get_experiment_type(comet_mode, _project_name) - experiment.log_parameters(vars(args)) - experiment.log_others({ - 'eval_batch_logging_interval': _get_eval_batch_logging_interval(), - 'log_confusion_matrix_on_eval': _should_log_confusion_matrix(), - 'log_image_predictions': _should_log_image_predictions(), - 'max_image_predictions': _get_max_image_predictions_to_log(), }) - experiment.log_other('Created from', 'yolov8') - - except Exception as e: - LOGGER.warning(f'WARNING ⚠️ Comet installed but not initialized correctly, not logging this run. 
{e}') - - -def _fetch_trainer_metadata(trainer): - """Returns metadata for YOLO training including epoch and asset saving status.""" - curr_epoch = trainer.epoch + 1 - - train_num_steps_per_epoch = len(trainer.train_loader.dataset) // trainer.batch_size - curr_step = curr_epoch * train_num_steps_per_epoch - final_epoch = curr_epoch == trainer.epochs - - save = trainer.args.save - save_period = trainer.args.save_period - save_interval = curr_epoch % save_period == 0 - save_assets = save and save_period > 0 and save_interval and not final_epoch - - return dict( - curr_epoch=curr_epoch, - curr_step=curr_step, - save_assets=save_assets, - final_epoch=final_epoch, - ) - - -def _scale_bounding_box_to_original_image_shape(box, resized_image_shape, original_image_shape, ratio_pad): - """YOLOv8 resizes images during training and the label values - are normalized based on this resized shape. This function rescales the - bounding box labels to the original image shape. - """ - - resized_image_height, resized_image_width = resized_image_shape - - # Convert normalized xywh format predictions to xyxy in resized scale format - box = ops.xywhn2xyxy(box, h=resized_image_height, w=resized_image_width) - # Scale box predictions from resized image scale back to original image scale - box = ops.scale_boxes(resized_image_shape, box, original_image_shape, ratio_pad) - # Convert bounding box format from xyxy to xywh for Comet logging - box = ops.xyxy2xywh(box) - # Adjust xy center to correspond top-left corner - box[:2] -= box[2:] / 2 - box = box.tolist() - - return box - - -def _format_ground_truth_annotations_for_detection(img_idx, image_path, batch, class_name_map=None): - """Format ground truth annotations for detection.""" - indices = batch['batch_idx'] == img_idx - bboxes = batch['bboxes'][indices] - if len(bboxes) == 0: - LOGGER.debug(f'COMET WARNING: Image: {image_path} has no bounding boxes labels') - return None - - cls_labels = batch['cls'][indices].squeeze(1).tolist() - if class_name_map: - cls_labels = [str(class_name_map[label]) for label in cls_labels] - - original_image_shape = batch['ori_shape'][img_idx] - resized_image_shape = batch['resized_shape'][img_idx] - ratio_pad = batch['ratio_pad'][img_idx] - - data = [] - for box, label in zip(bboxes, cls_labels): - box = _scale_bounding_box_to_original_image_shape(box, resized_image_shape, original_image_shape, ratio_pad) - data.append({ - 'boxes': [box], - 'label': f'gt_{label}', - 'score': _scale_confidence_score(1.0), }) - - return {'name': 'ground_truth', 'data': data} - - -def _format_prediction_annotations_for_detection(image_path, metadata, class_label_map=None): - """Format YOLO predictions for object detection visualization.""" - stem = image_path.stem - image_id = int(stem) if stem.isnumeric() else stem - - predictions = metadata.get(image_id) - if not predictions: - LOGGER.debug(f'COMET WARNING: Image: {image_path} has no bounding boxes predictions') - return None - - data = [] - for prediction in predictions: - boxes = prediction['bbox'] - score = _scale_confidence_score(prediction['score']) - cls_label = prediction['category_id'] - if class_label_map: - cls_label = str(class_label_map[cls_label]) - - data.append({'boxes': [boxes], 'label': cls_label, 'score': score}) - - return {'name': 'prediction', 'data': data} - - -def _fetch_annotations(img_idx, image_path, batch, prediction_metadata_map, class_label_map): - """Join the ground truth and prediction annotations if they exist.""" - ground_truth_annotations = 
_format_ground_truth_annotations_for_detection(img_idx, image_path, batch, - class_label_map) - prediction_annotations = _format_prediction_annotations_for_detection(image_path, prediction_metadata_map, - class_label_map) - - annotations = [ - annotation for annotation in [ground_truth_annotations, prediction_annotations] if annotation is not None] - return [annotations] if annotations else None - - -def _create_prediction_metadata_map(model_predictions): - """Create metadata map for model predictions by groupings them based on image ID.""" - pred_metadata_map = {} - for prediction in model_predictions: - pred_metadata_map.setdefault(prediction['image_id'], []) - pred_metadata_map[prediction['image_id']].append(prediction) - - return pred_metadata_map - - -def _log_confusion_matrix(experiment, trainer, curr_step, curr_epoch): - """Log the confusion matrix to Comet experiment.""" - conf_mat = trainer.validator.confusion_matrix.matrix - names = list(trainer.data['names'].values()) + ['background'] - experiment.log_confusion_matrix( - matrix=conf_mat, - labels=names, - max_categories=len(names), - epoch=curr_epoch, - step=curr_step, - ) - - -def _log_images(experiment, image_paths, curr_step, annotations=None): - """Logs images to the experiment with optional annotations.""" - if annotations: - for image_path, annotation in zip(image_paths, annotations): - experiment.log_image(image_path, name=image_path.stem, step=curr_step, annotations=annotation) - - else: - for image_path in image_paths: - experiment.log_image(image_path, name=image_path.stem, step=curr_step) - - -def _log_image_predictions(experiment, validator, curr_step): - """Logs predicted boxes for a single image during training.""" - global _comet_image_prediction_count - - task = validator.args.task - if task not in COMET_SUPPORTED_TASKS: - return - - jdict = validator.jdict - if not jdict: - return - - predictions_metadata_map = _create_prediction_metadata_map(jdict) - dataloader = validator.dataloader - class_label_map = validator.names - - batch_logging_interval = _get_eval_batch_logging_interval() - max_image_predictions = _get_max_image_predictions_to_log() - - for batch_idx, batch in enumerate(dataloader): - if (batch_idx + 1) % batch_logging_interval != 0: - continue - - image_paths = batch['im_file'] - for img_idx, image_path in enumerate(image_paths): - if _comet_image_prediction_count >= max_image_predictions: - return - - image_path = Path(image_path) - annotations = _fetch_annotations( - img_idx, - image_path, - batch, - predictions_metadata_map, - class_label_map, - ) - _log_images( - experiment, - [image_path], - curr_step, - annotations=annotations, - ) - _comet_image_prediction_count += 1 - - -def _log_plots(experiment, trainer): - """Logs evaluation plots and label plots for the experiment.""" - plot_filenames = [trainer.save_dir / f'{plots}.png' for plots in EVALUATION_PLOT_NAMES] - _log_images(experiment, plot_filenames, None) - - label_plot_filenames = [trainer.save_dir / f'{labels}.jpg' for labels in LABEL_PLOT_NAMES] - _log_images(experiment, label_plot_filenames, None) - - -def _log_model(experiment, trainer): - """Log the best-trained model to Comet.ml.""" - model_name = _get_comet_model_name() - experiment.log_model( - model_name, - file_or_folder=str(trainer.best), - file_name='best.pt', - overwrite=True, - ) - - -def on_pretrain_routine_start(trainer): - """Creates or resumes a CometML experiment at the start of a YOLO pre-training routine.""" - experiment = comet_ml.get_global_experiment() - is_alive = 
getattr(experiment, 'alive', False) - if not experiment or not is_alive: - _create_experiment(trainer.args) - - -def on_train_epoch_end(trainer): - """Log metrics and save batch images at the end of training epochs.""" - experiment = comet_ml.get_global_experiment() - if not experiment: - return - - metadata = _fetch_trainer_metadata(trainer) - curr_epoch = metadata['curr_epoch'] - curr_step = metadata['curr_step'] - - experiment.log_metrics( - trainer.label_loss_items(trainer.tloss, prefix='train'), - step=curr_step, - epoch=curr_epoch, - ) - - if curr_epoch == 1: - _log_images(experiment, trainer.save_dir.glob('train_batch*.jpg'), curr_step) - - -def on_fit_epoch_end(trainer): - """Logs model assets at the end of each epoch.""" - experiment = comet_ml.get_global_experiment() - if not experiment: - return - - metadata = _fetch_trainer_metadata(trainer) - curr_epoch = metadata['curr_epoch'] - curr_step = metadata['curr_step'] - save_assets = metadata['save_assets'] - - experiment.log_metrics(trainer.metrics, step=curr_step, epoch=curr_epoch) - experiment.log_metrics(trainer.lr, step=curr_step, epoch=curr_epoch) - if curr_epoch == 1: - experiment.log_metrics(model_info_for_loggers(trainer), step=curr_step, epoch=curr_epoch) - - if not save_assets: - return - - _log_model(experiment, trainer) - if _should_log_confusion_matrix(): - _log_confusion_matrix(experiment, trainer, curr_step, curr_epoch) - if _should_log_image_predictions(): - _log_image_predictions(experiment, trainer.validator, curr_step) - - -def on_train_end(trainer): - """Perform operations at the end of training.""" - experiment = comet_ml.get_global_experiment() - if not experiment: - return - - metadata = _fetch_trainer_metadata(trainer) - curr_epoch = metadata['curr_epoch'] - curr_step = metadata['curr_step'] - plots = trainer.args.plots - - _log_model(experiment, trainer) - if plots: - _log_plots(experiment, trainer) - - _log_confusion_matrix(experiment, trainer, curr_step, curr_epoch) - _log_image_predictions(experiment, trainer.validator, curr_step) - experiment.end() - - global _comet_image_prediction_count - _comet_image_prediction_count = 0 - - -callbacks = { - 'on_pretrain_routine_start': on_pretrain_routine_start, - 'on_train_epoch_end': on_train_epoch_end, - 'on_fit_epoch_end': on_fit_epoch_end, - 'on_train_end': on_train_end} if comet_ml else {} diff --git a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/README.md b/spaces/vinay123/panoptic-segment-anything/GroundingDINO/README.md deleted file mode 100644 index b6610df03d409633e572ef49d67a445d35a63967..0000000000000000000000000000000000000000 --- a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/README.md +++ /dev/null @@ -1,163 +0,0 @@ -# Grounding DINO - ---- - -[![arXiv](https://img.shields.io/badge/arXiv-2303.05499-b31b1b.svg)](https://arxiv.org/abs/2303.05499) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/wxWDt5UiwY8) -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/cMa77r3YrDk) -[![HuggingFace space](https://img.shields.io/badge/🤗-HuggingFace%20Space-cyan.svg)](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) - 
-[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-mscoco)](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded) \
-[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-odinw)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded) \
-[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco-minival)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded) \
-[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded)
-
-
-
-Official PyTorch implementation of [Grounding DINO](https://arxiv.org/abs/2303.05499), a stronger open-set object detector. Code is available now!
-
-
-## Highlight
-
-- **Open-Set Detection.** Detect **everything** with language!
-- **High Performance.** COCO zero-shot **52.5 AP** (training without COCO data!). COCO fine-tune **63.0 AP**.
-- **Flexible.** Collaboration with Stable Diffusion for Image Editing.
-
-## News
-[2023/03/28] A YouTube [video](https://youtu.be/cMa77r3YrDk) about Grounding DINO and basic object detection prompt engineering. [[SkalskiP](https://github.com/SkalskiP)] \
-[2023/03/28] Add a [demo](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) on Hugging Face Space! \
-[2023/03/27] Support CPU-only mode. Now the model can run on machines without GPUs. \
-[2023/03/25] A [demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) for Grounding DINO is available at Colab. [[SkalskiP](https://github.com/SkalskiP)] \
-[2023/03/22] Code is available now!
-
-
-
-*Description* (hero figure, alt: ODinW)
-
-
-
-
-## TODO
-
-- [x] Release inference code and demo.
-- [x] Release checkpoints.
-- [ ] Grounding DINO with Stable Diffusion and GLIGEN demos.
-- [ ] Release training code.
-
-## Install
-
-If you have a CUDA environment, please make sure the environment variable `CUDA_HOME` is set. It will be compiled in CPU-only mode if no CUDA is available.
-
-```bash
-pip install -e .
-```
-
-## Demo
-
-```bash
-CUDA_VISIBLE_DEVICES=6 python demo/inference_on_a_image.py \
-  -c /path/to/config \
-  -p /path/to/checkpoint \
-  -i .asset/cats.png \
-  -o "outputs/0" \
-  -t "cat ear." \
-  [--cpu-only] # add this flag to run in CPU-only mode
-```
-See `demo/inference_on_a_image.py` for more details.
-
-**Web UI**
-
-We also provide demo code that integrates Grounding DINO with a Gradio Web UI. See the file `demo/gradio_app.py` for more details.
-
-## Checkpoints
-
-|   | name | backbone | Data | box AP on COCO | Checkpoint | Config |
-|---|------|----------|------|----------------|------------|--------|
-| 1 | GroundingDINO-T | Swin-T | O365,GoldG,Cap4M | 48.4 (zero-shot) / 57.2 (fine-tune) | Github link \| HF link | link |
-
-## Results
-
-
-*COCO Object Detection Results* (figure, alt: COCO)
-
-*ODinW Object Detection Results* (figure, alt: ODinW)
-
-*Marrying Grounding DINO with Stable Diffusion for Image Editing* (figure, alt: GD_SD)
-
-*Marrying Grounding DINO with GLIGEN for more Detailed Image Editing* (figure, alt: GD_GLIGEN)
-
            - -## Model - -Includes: a text backbone, an image backbone, a feature enhancer, a language-guided query selection, and a cross-modality decoder. - -![arch](.asset/arch.png) - - -## Acknowledgement - -Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work! - -We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work are available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox [detrex](https://github.com/IDEA-Research/detrex) is available as well. - -Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models. - - -## Citation - -If you find our work helpful for your research, please consider citing the following BibTeX entry. - -```bibtex -@inproceedings{ShilongLiu2023GroundingDM, - title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection}, - author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang}, - year={2023} -} -``` - - - - diff --git a/spaces/vntonie/anything-v3.0/app.py b/spaces/vntonie/anything-v3.0/app.py deleted file mode 100644 index 99a6a3762d5e337f08e960c4a31b4ac2467bca49..0000000000000000000000000000000000000000 --- a/spaces/vntonie/anything-v3.0/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr - -description = """
            - """ - -gr.Interface.load("models/Linaqruf/anything-v3.0", description=description).launch() \ No newline at end of file diff --git a/spaces/vonbarnekowa/stable-diffusion/scripts/txt2img.py b/spaces/vonbarnekowa/stable-diffusion/scripts/txt2img.py deleted file mode 100644 index 1ed42a3cd87347998e947362e8845f28bf580fdd..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/scripts/txt2img.py +++ /dev/null @@ -1,289 +0,0 @@ -import argparse, os -import cv2 -import torch -import numpy as np -from omegaconf import OmegaConf -from PIL import Image -from tqdm import tqdm, trange -from itertools import islice -from einops import rearrange -from torchvision.utils import make_grid -from pytorch_lightning import seed_everything -from torch import autocast -from contextlib import nullcontext -from imwatermark import WatermarkEncoder - -from ldm.util import instantiate_from_config -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.models.diffusion.plms import PLMSSampler -from ldm.models.diffusion.dpm_solver import DPMSolverSampler - -torch.set_grad_enabled(False) - -def chunk(it, size): - it = iter(it) - return iter(lambda: tuple(islice(it, size)), ()) - - -def load_model_from_config(config, ckpt, verbose=False): - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - if "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - sd = pl_sd["state_dict"] - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - model.cuda() - model.eval() - return model - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--prompt", - type=str, - nargs="?", - default="a professional photograph of an astronaut riding a triceratops", - help="the prompt to render" - ) - parser.add_argument( - "--outdir", - type=str, - nargs="?", - help="dir to write results to", - default="outputs/txt2img-samples" - ) - parser.add_argument( - "--steps", - type=int, - default=50, - help="number of ddim sampling steps", - ) - parser.add_argument( - "--plms", - action='store_true', - help="use plms sampling", - ) - parser.add_argument( - "--dpm", - action='store_true', - help="use DPM (2) sampler", - ) - parser.add_argument( - "--fixed_code", - action='store_true', - help="if enabled, uses the same starting code across all samples ", - ) - parser.add_argument( - "--ddim_eta", - type=float, - default=0.0, - help="ddim eta (eta=0.0 corresponds to deterministic sampling", - ) - parser.add_argument( - "--n_iter", - type=int, - default=3, - help="sample this often", - ) - parser.add_argument( - "--H", - type=int, - default=512, - help="image height, in pixel space", - ) - parser.add_argument( - "--W", - type=int, - default=512, - help="image width, in pixel space", - ) - parser.add_argument( - "--C", - type=int, - default=4, - help="latent channels", - ) - parser.add_argument( - "--f", - type=int, - default=8, - help="downsampling factor, most often 8 or 16", - ) - parser.add_argument( - "--n_samples", - type=int, - default=3, - help="how many samples to produce for each given prompt. 
A.k.a batch size", - ) - parser.add_argument( - "--n_rows", - type=int, - default=0, - help="rows in the grid (default: n_samples)", - ) - parser.add_argument( - "--scale", - type=float, - default=9.0, - help="unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))", - ) - parser.add_argument( - "--from-file", - type=str, - help="if specified, load prompts from this file, separated by newlines", - ) - parser.add_argument( - "--config", - type=str, - default="configs/stable-diffusion/v2-inference.yaml", - help="path to config which constructs model", - ) - parser.add_argument( - "--ckpt", - type=str, - help="path to checkpoint of model", - ) - parser.add_argument( - "--seed", - type=int, - default=42, - help="the seed (for reproducible sampling)", - ) - parser.add_argument( - "--precision", - type=str, - help="evaluate at this precision", - choices=["full", "autocast"], - default="autocast" - ) - parser.add_argument( - "--repeat", - type=int, - default=1, - help="repeat each prompt in file this often", - ) - opt = parser.parse_args() - return opt - - -def put_watermark(img, wm_encoder=None): - if wm_encoder is not None: - img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR) - img = wm_encoder.encode(img, 'dwtDct') - img = Image.fromarray(img[:, :, ::-1]) - return img - - -def main(opt): - seed_everything(opt.seed) - - config = OmegaConf.load(f"{opt.config}") - model = load_model_from_config(config, f"{opt.ckpt}") - - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model = model.to(device) - - if opt.plms: - sampler = PLMSSampler(model) - elif opt.dpm: - sampler = DPMSolverSampler(model) - else: - sampler = DDIMSampler(model) - - os.makedirs(opt.outdir, exist_ok=True) - outpath = opt.outdir - - print("Creating invisible watermark encoder (see https://github.com/ShieldMnt/invisible-watermark)...") - wm = "SDV2" - wm_encoder = WatermarkEncoder() - wm_encoder.set_watermark('bytes', wm.encode('utf-8')) - - batch_size = opt.n_samples - n_rows = opt.n_rows if opt.n_rows > 0 else batch_size - if not opt.from_file: - prompt = opt.prompt - assert prompt is not None - data = [batch_size * [prompt]] - - else: - print(f"reading prompts from {opt.from_file}") - with open(opt.from_file, "r") as f: - data = f.read().splitlines() - data = [p for p in data for i in range(opt.repeat)] - data = list(chunk(data, batch_size)) - - sample_path = os.path.join(outpath, "samples") - os.makedirs(sample_path, exist_ok=True) - sample_count = 0 - base_count = len(os.listdir(sample_path)) - grid_count = len(os.listdir(outpath)) - 1 - - start_code = None - if opt.fixed_code: - start_code = torch.randn([opt.n_samples, opt.C, opt.H // opt.f, opt.W // opt.f], device=device) - - precision_scope = autocast if opt.precision == "autocast" else nullcontext - with torch.no_grad(), \ - precision_scope("cuda"), \ - model.ema_scope(): - all_samples = list() - for n in trange(opt.n_iter, desc="Sampling"): - for prompts in tqdm(data, desc="data"): - uc = None - if opt.scale != 1.0: - uc = model.get_learned_conditioning(batch_size * [""]) - if isinstance(prompts, tuple): - prompts = list(prompts) - c = model.get_learned_conditioning(prompts) - shape = [opt.C, opt.H // opt.f, opt.W // opt.f] - samples, _ = sampler.sample(S=opt.steps, - conditioning=c, - batch_size=opt.n_samples, - shape=shape, - verbose=False, - unconditional_guidance_scale=opt.scale, - unconditional_conditioning=uc, - eta=opt.ddim_eta, - x_T=start_code) - - x_samples = 
model.decode_first_stage(samples) - x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0) - - for x_sample in x_samples: - x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') - img = Image.fromarray(x_sample.astype(np.uint8)) - img = put_watermark(img, wm_encoder) - img.save(os.path.join(sample_path, f"{base_count:05}.png")) - base_count += 1 - sample_count += 1 - - all_samples.append(x_samples) - - # additionally, save as grid - grid = torch.stack(all_samples, 0) - grid = rearrange(grid, 'n b c h w -> (n b) c h w') - grid = make_grid(grid, nrow=n_rows) - - # to image - grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy() - grid = Image.fromarray(grid.astype(np.uint8)) - grid = put_watermark(grid, wm_encoder) - grid.save(os.path.join(outpath, f'grid-{grid_count:04}.png')) - grid_count += 1 - - print(f"Your samples are ready and waiting for you here: \n{outpath} \n" - f" \nEnjoy.") - - -if __name__ == "__main__": - opt = parse_args() - main(opt) diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/roles/assistant.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/roles/assistant.py deleted file mode 100644 index 0bce4a3f96d65e614ee68d64dd02f7c6c7832967..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/roles/assistant.py +++ /dev/null @@ -1,170 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/7 -@Author : mashenquan -@File : assistant.py -@Desc : I am attempting to incorporate certain symbol concepts from UML into MetaGPT, enabling it to have the - ability to freely construct flows through symbol concatenation. Simultaneously, I am also striving to - make these symbols configurable and standardized, making the process of building flows more convenient. - For more about `fork` node in activity diagrams, see: `https://www.uml-diagrams.org/activity-diagrams.html` - This file defines a `fork` style meta role capable of generating arbitrary roles at runtime based on a - configuration file. -@Modified By: mashenquan, 2023/8/22. A definition has been provided for the return value of _think: returning false - indicates that further reasoning cannot continue. 
- -""" -import asyncio -from pathlib import Path - -from metagpt.actions import ActionOutput -from metagpt.actions.skill_action import ArgumentsParingAction, SkillAction -from metagpt.actions.talk_action import TalkAction -from metagpt.config import CONFIG -from metagpt.learn.skill_loader import SkillLoader -from metagpt.logs import logger -from metagpt.memory.brain_memory import BrainMemory, MessageType -from metagpt.roles import Role -from metagpt.schema import Message - - -class Assistant(Role): - """Assistant for solving common issues.""" - - def __init__( - self, - name="Lily", - profile="An assistant", - goal="Help to solve problem", - constraints="Talk in {language}", - desc="", - *args, - **kwargs, - ): - super(Assistant, self).__init__( - name=name, profile=profile, goal=goal, constraints=constraints, desc=desc, *args, **kwargs - ) - brain_memory = CONFIG.BRAIN_MEMORY - self.memory = BrainMemory(**brain_memory) if brain_memory else BrainMemory() - skill_path = Path(CONFIG.SKILL_PATH) if CONFIG.SKILL_PATH else None - self.skills = SkillLoader(skill_yaml_file_name=skill_path) - - async def think(self) -> bool: - """Everything will be done part by part.""" - last_talk = await self.refine_memory() - if not last_talk: - return False - prompt = f"Refer to this sentence:\n {last_talk}\n" - skills = self.skills.get_skill_list() - for desc, name in skills.items(): - prompt += ( - f"If want you to do {desc}, return `[SKILL]: {name}` brief and clear. For instance: [SKILL]: {name}\n" - ) - prompt += "If the preceding text presents a complete question and solution, rewrite and return `[SOLUTION]: {problem}` brief and clear. For instance: [SOLUTION]: Solution for distributing watermelon\n" - prompt += "If the preceding text presents an unresolved issue and its corresponding discussion, rewrite and return `[PROBLEM]: {problem}` brief and clear. For instance: [PROBLEM]: How to distribute watermelon?\n" - prompt += "Otherwise, rewrite and return `[TALK]: {talk}` brief and clear. 
For instance: [TALK]: distribute watermelon" - logger.info(prompt) - rsp = await self._llm.aask(prompt, []) - logger.info(rsp) - return await self._plan(rsp, last_talk=last_talk) - - async def act(self) -> ActionOutput: - result = await self._rc.todo.run(**CONFIG.options) - if not result: - return None - if isinstance(result, str): - msg = Message(content=result) - output = ActionOutput(content=result) - else: - msg = Message( - content=result.content, instruct_content=result.instruct_content, cause_by=type(self._rc.todo) - ) - output = result - self.memory.add_answer(msg) - return output - - async def talk(self, text): - self.memory.add_talk(Message(content=text)) - - async def _plan(self, rsp: str, **kwargs) -> bool: - skill, text = Assistant.extract_info(input_string=rsp) - handlers = { - MessageType.Talk.value: self.talk_handler, - MessageType.Problem.value: self.talk_handler, - MessageType.Skill.value: self.skill_handler, - } - handler = handlers.get(skill, self.talk_handler) - return await handler(text, **kwargs) - - async def talk_handler(self, text, **kwargs) -> bool: - history = self.memory.history_text - action = TalkAction( - talk=text, knowledge=self.memory.get_knowledge(), history_summary=history, llm=self._llm, **kwargs - ) - self.add_to_do(action) - return True - - async def skill_handler(self, text, **kwargs) -> bool: - last_talk = kwargs.get("last_talk") - skill = self.skills.get_skill(text) - if not skill: - logger.info(f"skill not found: {text}") - return await self.talk_handler(text=last_talk, **kwargs) - action = ArgumentsParingAction(skill=skill, llm=self._llm, **kwargs) - await action.run(**kwargs) - if action.args is None: - return await self.talk_handler(text=last_talk, **kwargs) - action = SkillAction(skill=skill, args=action.args, llm=self._llm, name=skill.name, desc=skill.description) - self.add_to_do(action) - return True - - async def refine_memory(self) -> str: - history_text = self.memory.history_text - last_talk = self.memory.last_talk - if last_talk is None: # No user feedback, unsure if past conversation is finished. - return None - if history_text == "": - return last_talk - history_summary = await self._llm.get_summary(history_text, max_words=500) - if last_talk and await self._llm.is_related(last_talk, history_summary): # Merge relevant content. - last_talk = await self._llm.rewrite(sentence=last_talk, context=history_text) - return last_talk - - self.memory.move_to_solution(history_summary) # Promptly clear memory after the issue is resolved. - return last_talk - - @staticmethod - def extract_info(input_string): - from metagpt.provider.openai_api import OpenAIGPTAPI - - return OpenAIGPTAPI.extract_info(input_string) - - def get_memory(self) -> str: - return self.memory.json() - - def load_memory(self, jsn): - try: - self.memory = BrainMemory(**jsn) - except Exception as e: - logger.exception(f"load error:{e}, data:{jsn}") - - -async def main(): - topic = "what's apple" - role = Assistant(language="Chinese") - await role.talk(topic) - while True: - has_action = await role.think() - if not has_action: - break - msg = await role.act() - logger.info(msg) - # Retrieve user terminal input. 
- logger.info("Enter prompt") - talk = input("You: ") - await role.talk(talk) - - -if __name__ == "__main__": - CONFIG.language = "Chinese" - asyncio.run(main()) diff --git a/spaces/whgwd2023/bingo/src/lib/isomorphic/index.ts b/spaces/whgwd2023/bingo/src/lib/isomorphic/index.ts deleted file mode 100644 index d4ebae951004bc8ec388f82548f4204a6c2a0a50..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,8 +0,0 @@ -'use client' - -import Debug from 'debug' -export * from 'ifw' - -export const debug = typeof document === 'undefined' ? Debug('bingo') - : process.env.NEXT_PUBLIC_DEBUG ? console.info.bind(console) - : () => {} diff --git a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/data/semantic_arrangement_demo.py b/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/data/semantic_arrangement_demo.py deleted file mode 100644 index 653ccfd758aefc73071dfb36ba78ea46774ac7b5..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/data/semantic_arrangement_demo.py +++ /dev/null @@ -1,563 +0,0 @@ -import copy -import cv2 -import h5py -import numpy as np -import os -import trimesh -import torch -from tqdm import tqdm -import json -import random - -from torch.utils.data import DataLoader - -# Local imports -from StructDiffusion.utils.rearrangement import show_pcs, get_pts, combine_and_sample_xyzs -from StructDiffusion.language.tokenizer import Tokenizer - -import StructDiffusion.utils.brain2.camera as cam -import StructDiffusion.utils.brain2.image as img -import StructDiffusion.utils.transformations as tra - - -class SemanticArrangementDataset(torch.utils.data.Dataset): - - def __init__(self, data_root, tokenizer, - max_num_target_objects=11, max_num_distractor_objects=5, - max_num_shape_parameters=7, max_num_rearrange_features=1, max_num_anchor_features=3, - num_pts=1024, - use_virtual_structure_frame=True, ignore_distractor_objects=True, ignore_rgb=True, - filter_num_moved_objects_range=None, shuffle_object_index=False, - data_augmentation=True, debug=False, **kwargs): - """ - - Note: setting filter_num_moved_objects_range=[k, k] and max_num_objects=k will create no padding for target objs - - :param data_root: - :param split: train, valid, or test - :param shuffle_object_index: whether to shuffle the positions of target objects and other objects in the sequence - :param debug: - :param max_num_shape_parameters: - :param max_num_objects: - :param max_num_rearrange_features: - :param max_num_anchor_features: - :param num_pts: - :param use_stored_arrangement_indices: - :param kwargs: - """ - - self.use_virtual_structure_frame = use_virtual_structure_frame - self.ignore_distractor_objects = ignore_distractor_objects - self.ignore_rgb = ignore_rgb and not debug - - self.num_pts = num_pts - self.debug = debug - - self.max_num_objects = max_num_target_objects - self.max_num_other_objects = max_num_distractor_objects - self.max_num_shape_parameters = max_num_shape_parameters - self.max_num_rearrange_features = max_num_rearrange_features - self.max_num_anchor_features = max_num_anchor_features - self.shuffle_object_index = shuffle_object_index - - # used to tokenize the language part - self.tokenizer = tokenizer - - # retrieve data - self.data_root = data_root - self.arrangement_data = [] - for filename in os.listdir(data_root): - if ".h5" in filename: - self.arrangement_data.append((os.path.join(data_root, filename), 0)) - print("{} valid sequences".format(len(self.arrangement_data))) - 
- # Data Aug - self.data_augmentation = data_augmentation - # additive noise - self.gp_rescale_factor_range = [12, 20] - self.gaussian_scale_range = [0., 0.003] - # multiplicative noise - self.gamma_shape = 1000. - self.gamma_scale = 0.001 - - def filter_based_on_number_of_moved_objects(self, filter_num_moved_objects_range): - assert len(list(filter_num_moved_objects_range)) == 2 - min_num, max_num = filter_num_moved_objects_range - print("Remove scenes that have less than {} or more than {} objects being moved".format(min_num, max_num)) - ok_data = [] - for filename, step_t in self.arrangement_data: - h5 = h5py.File(filename, 'r') - moved_objs = h5['moved_objs'][()].split(',') - if min_num <= len(moved_objs) <= max_num: - ok_data.append((filename, step_t)) - print("{} valid sequences left".format(len(ok_data))) - return ok_data - - def get_data_idx(self, idx): - # Create the datum to return - file_idx = np.argmax(idx < self.file_to_count) - data = h5py.File(self.data_files[file_idx], 'r') - if file_idx > 0: - # for lang2sym, idx is always 0 - idx = idx - self.file_to_count[file_idx - 1] - return data, idx, file_idx - - def add_noise_to_depth(self, depth_img): - """ add depth noise """ - multiplicative_noise = np.random.gamma(self.gamma_shape, self.gamma_scale) - depth_img = multiplicative_noise * depth_img - return depth_img - - def add_noise_to_xyz(self, xyz_img, depth_img): - """ TODO: remove this code or at least celean it up""" - xyz_img = xyz_img.copy() - H, W, C = xyz_img.shape - gp_rescale_factor = np.random.randint(self.gp_rescale_factor_range[0], - self.gp_rescale_factor_range[1]) - gp_scale = np.random.uniform(self.gaussian_scale_range[0], - self.gaussian_scale_range[1]) - small_H, small_W = (np.array([H, W]) / gp_rescale_factor).astype(int) - additive_noise = np.random.normal(loc=0.0, scale=gp_scale, size=(small_H, small_W, C)) - additive_noise = cv2.resize(additive_noise, (W, H), interpolation=cv2.INTER_CUBIC) - xyz_img[depth_img > 0, :] += additive_noise[depth_img > 0, :] - return xyz_img - - def random_index(self): - return self[np.random.randint(len(self))] - - def _get_rgb(self, h5, idx, ee=True): - RGB = "ee_rgb" if ee else "rgb" - rgb1 = img.PNGToNumpy(h5[RGB][idx])[:, :, :3] / 255. # remove alpha - return rgb1 - - def _get_depth(self, h5, idx, ee=True): - DEPTH = "ee_depth" if ee else "depth" - - def _get_images(self, h5, idx, ee=True): - if ee: - RGB, DEPTH, SEG = "ee_rgb", "ee_depth", "ee_seg" - DMIN, DMAX = "ee_depth_min", "ee_depth_max" - else: - RGB, DEPTH, SEG = "rgb", "depth", "seg" - DMIN, DMAX = "depth_min", "depth_max" - dmin = h5[DMIN][idx] - dmax = h5[DMAX][idx] - rgb1 = img.PNGToNumpy(h5[RGB][idx])[:, :, :3] / 255. # remove alpha - depth1 = h5[DEPTH][idx] / 20000. * (dmax - dmin) + dmin - seg1 = img.PNGToNumpy(h5[SEG][idx]) - - valid1 = np.logical_and(depth1 > 0.1, depth1 < 2.) - - # proj_matrix = h5['proj_matrix'][()] - camera = cam.get_camera_from_h5(h5) - if self.data_augmentation: - depth1 = self.add_noise_to_depth(depth1) - - xyz1 = cam.compute_xyz(depth1, camera) - if self.data_augmentation: - xyz1 = self.add_noise_to_xyz(xyz1, depth1) - - # Transform the point cloud - # Here it is... 
- # CAM_POSE = "ee_cam_pose" if ee else "cam_pose" - CAM_POSE = "ee_camera_view" if ee else "camera_view" - cam_pose = h5[CAM_POSE][idx] - if ee: - # ee_camera_view has 0s for x, y, z - cam_pos = h5["ee_cam_pose"][:][:3, 3] - cam_pose[:3, 3] = cam_pos - - # Get transformed point cloud - h, w, d = xyz1.shape - xyz1 = xyz1.reshape(h * w, -1) - xyz1 = trimesh.transform_points(xyz1, cam_pose) - xyz1 = xyz1.reshape(h, w, -1) - - scene1 = rgb1, depth1, seg1, valid1, xyz1 - - return scene1 - - def __len__(self): - return len(self.arrangement_data) - - def _get_ids(self, h5): - """ - get object ids - - @param h5: - @return: - """ - ids = {} - for k in h5.keys(): - if k.startswith("id_"): - ids[k[3:]] = h5[k][()] - return ids - - def get_positive_ratio(self): - num_pos = 0 - for d in self.arrangement_data: - filename, step_t = d - if step_t == 0: - num_pos += 1 - return (len(self.arrangement_data) - num_pos) * 1.0 / num_pos - - def get_object_position_vocab_sizes(self): - return self.tokenizer.get_object_position_vocab_sizes() - - def get_vocab_size(self): - return self.tokenizer.get_vocab_size() - - def get_data_index(self, idx): - filename = self.arrangement_data[idx] - return filename - - def get_raw_data(self, idx, inference_mode=False, shuffle_object_index=False): - """ - - :param idx: - :param inference_mode: - :param shuffle_object_index: used to test different orders of objects - :return: - """ - - filename, _ = self.arrangement_data[idx] - - h5 = h5py.File(filename, 'r') - ids = self._get_ids(h5) - all_objs = sorted([o for o in ids.keys() if "object_" in o]) - goal_specification = json.loads(str(np.array(h5["goal_specification"]))) - num_rearrange_objs = len(goal_specification["rearrange"]["objects"]) - num_other_objs = len(goal_specification["anchor"]["objects"] + goal_specification["distract"]["objects"]) - assert len(all_objs) == num_rearrange_objs + num_other_objs, "{}, {}".format(len(all_objs), num_rearrange_objs + num_other_objs) - assert num_rearrange_objs <= self.max_num_objects - assert num_other_objs <= self.max_num_other_objects - - # important: only using the last step - step_t = num_rearrange_objs - - target_objs = all_objs[:num_rearrange_objs] - other_objs = all_objs[num_rearrange_objs:] - - structure_parameters = goal_specification["shape"] - - # Important: ensure the order is correct - if structure_parameters["type"] == "circle" or structure_parameters["type"] == "line": - target_objs = target_objs[::-1] - elif structure_parameters["type"] == "tower" or structure_parameters["type"] == "dinner": - target_objs = target_objs - else: - raise KeyError("{} structure is not recognized".format(structure_parameters["type"])) - all_objs = target_objs + other_objs - - ################################### - # getting scene images and point clouds - scene = self._get_images(h5, step_t, ee=True) - rgb, depth, seg, valid, xyz = scene - if inference_mode: - initial_scene = scene - - # getting object point clouds - obj_pcs = [] - obj_pad_mask = [] - current_pc_poses = [] - other_obj_pcs = [] - other_obj_pad_mask = [] - for obj in all_objs: - obj_mask = np.logical_and(seg == ids[obj], valid) - if np.sum(obj_mask) <= 0: - raise Exception - ok, obj_xyz, obj_rgb, _ = get_pts(xyz, rgb, obj_mask, num_pts=self.num_pts) - if not ok: - raise Exception - - if obj in target_objs: - if self.ignore_rgb: - obj_pcs.append(obj_xyz) - else: - obj_pcs.append(torch.concat([obj_xyz, obj_rgb], dim=-1)) - obj_pad_mask.append(0) - pc_pose = np.eye(4) - pc_pose[:3, 3] = torch.mean(obj_xyz, dim=0).numpy() - 
current_pc_poses.append(pc_pose) - elif obj in other_objs: - if self.ignore_rgb: - other_obj_pcs.append(obj_xyz) - else: - other_obj_pcs.append(torch.concat([obj_xyz, obj_rgb], dim=-1)) - other_obj_pad_mask.append(0) - else: - raise Exception - - ################################### - # computes goal positions for objects - # Important: because of the noises we added to point clouds, the rearranged point clouds will not be perfect - if self.use_virtual_structure_frame: - goal_structure_pose = tra.euler_matrix(structure_parameters["rotation"][0], structure_parameters["rotation"][1], - structure_parameters["rotation"][2]) - goal_structure_pose[:3, 3] = [structure_parameters["position"][0], structure_parameters["position"][1], - structure_parameters["position"][2]] - goal_structure_pose_inv = np.linalg.inv(goal_structure_pose) - - goal_obj_poses = [] - current_obj_poses = [] - goal_pc_poses = [] - for obj, current_pc_pose in zip(target_objs, current_pc_poses): - goal_pose = h5[obj][0] - current_pose = h5[obj][step_t] - if inference_mode: - goal_obj_poses.append(goal_pose) - current_obj_poses.append(current_pose) - - goal_pc_pose = goal_pose @ np.linalg.inv(current_pose) @ current_pc_pose - if self.use_virtual_structure_frame: - goal_pc_pose = goal_structure_pose_inv @ goal_pc_pose - goal_pc_poses.append(goal_pc_pose) - - # transform current object point cloud to the goal point cloud in the world frame - if self.debug: - new_obj_pcs = [copy.deepcopy(pc.numpy()) for pc in obj_pcs] - for i, obj_pc in enumerate(new_obj_pcs): - - current_pc_pose = current_pc_poses[i] - goal_pc_pose = goal_pc_poses[i] - if self.use_virtual_structure_frame: - goal_pc_pose = goal_structure_pose @ goal_pc_pose - print("current pc pose", current_pc_pose) - print("goal pc pose", goal_pc_pose) - - goal_pc_transform = goal_pc_pose @ np.linalg.inv(current_pc_pose) - print("transform", goal_pc_transform) - new_obj_pc = copy.deepcopy(obj_pc) - new_obj_pc[:, :3] = trimesh.transform_points(obj_pc[:, :3], goal_pc_transform) - print(new_obj_pc.shape) - - # visualize rearrangement sequence (new_obj_xyzs), the current object before moving (obj_xyz), and other objects - new_obj_pcs[i] = new_obj_pc - new_obj_pcs[i][:, 3:] = np.tile(np.array([1, 0, 0], dtype=np.float), (new_obj_pc.shape[0], 1)) - new_obj_rgb_current = np.tile(np.array([0, 1, 0], dtype=np.float), (new_obj_pc.shape[0], 1)) - show_pcs([pc[:, :3] for pc in new_obj_pcs] + [pc[:, :3] for pc in other_obj_pcs] + [obj_pc[:, :3]], - [pc[:, 3:] for pc in new_obj_pcs] + [pc[:, 3:] for pc in other_obj_pcs] + [new_obj_rgb_current], - add_coordinate_frame=True) - show_pcs([pc[:, :3] for pc in new_obj_pcs], [pc[:, 3:] for pc in new_obj_pcs], add_coordinate_frame=True) - - # pad data - for i in range(self.max_num_objects - len(target_objs)): - obj_pcs.append(torch.zeros_like(obj_pcs[0], dtype=torch.float32)) - obj_pad_mask.append(1) - for i in range(self.max_num_other_objects - len(other_objs)): - other_obj_pcs.append(torch.zeros_like(obj_pcs[0], dtype=torch.float32)) - other_obj_pad_mask.append(1) - - ################################### - # preparing sentence - sentence = [] - sentence_pad_mask = [] - - # structure parameters - # 5 parameters - structure_parameters = goal_specification["shape"] - if structure_parameters["type"] == "circle" or structure_parameters["type"] == "line": - sentence.append((structure_parameters["type"], "shape")) - sentence.append((structure_parameters["rotation"][2], "rotation")) - sentence.append((structure_parameters["position"][0], "position_x")) - 
sentence.append((structure_parameters["position"][1], "position_y")) - if structure_parameters["type"] == "circle": - sentence.append((structure_parameters["radius"], "radius")) - elif structure_parameters["type"] == "line": - sentence.append((structure_parameters["length"] / 2.0, "radius")) - for _ in range(5): - sentence_pad_mask.append(0) - else: - sentence.append((structure_parameters["type"], "shape")) - sentence.append((structure_parameters["rotation"][2], "rotation")) - sentence.append((structure_parameters["position"][0], "position_x")) - sentence.append((structure_parameters["position"][1], "position_y")) - for _ in range(4): - sentence_pad_mask.append(0) - sentence.append(("PAD", None)) - sentence_pad_mask.append(1) - - ################################### - # paddings - for i in range(self.max_num_objects - len(target_objs)): - goal_pc_poses.append(np.eye(4)) - - ################################### - if self.debug: - print("---") - print("all objects:", all_objs) - print("target objects:", target_objs) - print("other objects:", other_objs) - print("goal specification:", goal_specification) - print("sentence:", sentence) - show_pcs([pc[:, :3] for pc in obj_pcs + other_obj_pcs], [pc[:, 3:] for pc in obj_pcs + other_obj_pcs], add_coordinate_frame=True) - - assert len(obj_pcs) == len(goal_pc_poses) - ################################### - - # shuffle the position of objects - if shuffle_object_index: - shuffle_target_object_indices = list(range(len(target_objs))) - random.shuffle(shuffle_target_object_indices) - shuffle_object_indices = shuffle_target_object_indices + list(range(len(target_objs), self.max_num_objects)) - obj_pcs = [obj_pcs[i] for i in shuffle_object_indices] - goal_pc_poses = [goal_pc_poses[i] for i in shuffle_object_indices] - if inference_mode: - goal_obj_poses = [goal_obj_poses[i] for i in shuffle_object_indices] - current_obj_poses = [current_obj_poses[i] for i in shuffle_object_indices] - target_objs = [target_objs[i] for i in shuffle_target_object_indices] - current_pc_poses = [current_pc_poses[i] for i in shuffle_object_indices] - - ################################### - if self.use_virtual_structure_frame: - if self.ignore_distractor_objects: - # language, structure virtual frame, target objects - pcs = obj_pcs - type_index = [0] * self.max_num_shape_parameters + [2] + [3] * self.max_num_objects - position_index = list(range(self.max_num_shape_parameters)) + [0] + list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + [0] + obj_pad_mask - else: - # language, distractor objects, structure virtual frame, target objects - pcs = other_obj_pcs + obj_pcs - type_index = [0] * self.max_num_shape_parameters + [1] * self.max_num_other_objects + [2] + [3] * self.max_num_objects - position_index = list(range(self.max_num_shape_parameters)) + list(range(self.max_num_other_objects)) + [0] + list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + other_obj_pad_mask + [0] + obj_pad_mask - goal_poses = [goal_structure_pose] + goal_pc_poses - else: - if self.ignore_distractor_objects: - # language, target objects - pcs = obj_pcs - type_index = [0] * self.max_num_shape_parameters + [3] * self.max_num_objects - position_index = list(range(self.max_num_shape_parameters)) + list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + obj_pad_mask - else: - # language, distractor objects, target objects - pcs = other_obj_pcs + obj_pcs - type_index = [0] * self.max_num_shape_parameters + [1] * self.max_num_other_objects + [3] * self.max_num_objects - 
position_index = list(range(self.max_num_shape_parameters)) + list(range(self.max_num_other_objects)) + list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + other_obj_pad_mask + obj_pad_mask - goal_poses = goal_pc_poses - - datum = { - "pcs": pcs, - "sentence": sentence, - "goal_poses": goal_poses, - "type_index": type_index, - "position_index": position_index, - "pad_mask": pad_mask, - "t": step_t, - "filename": filename - } - - if inference_mode: - datum["rgb"] = rgb - datum["goal_obj_poses"] = goal_obj_poses - datum["current_obj_poses"] = current_obj_poses - datum["target_objs"] = target_objs - datum["initial_scene"] = initial_scene - datum["ids"] = ids - datum["goal_specification"] = goal_specification - datum["current_pc_poses"] = current_pc_poses - - return datum - - @staticmethod - def convert_to_tensors(datum, tokenizer): - tensors = { - "pcs": torch.stack(datum["pcs"], dim=0), - "sentence": torch.LongTensor(np.array([tokenizer.tokenize(*i) for i in datum["sentence"]])), - "goal_poses": torch.FloatTensor(np.array(datum["goal_poses"])), - "type_index": torch.LongTensor(np.array(datum["type_index"])), - "position_index": torch.LongTensor(np.array(datum["position_index"])), - "pad_mask": torch.LongTensor(np.array(datum["pad_mask"])), - "t": datum["t"], - "filename": datum["filename"] - } - return tensors - - def __getitem__(self, idx): - - datum = self.convert_to_tensors(self.get_raw_data(idx, shuffle_object_index=self.shuffle_object_index), - self.tokenizer) - - return datum - - def single_datum_to_batch(self, x, num_samples, device, inference_mode=True): - tensor_x = {} - - tensor_x["pcs"] = x["pcs"].to(device)[None, :, :, :].repeat(num_samples, 1, 1, 1) - tensor_x["sentence"] = x["sentence"].to(device)[None, :].repeat(num_samples, 1) - if not inference_mode: - tensor_x["goal_poses"] = x["goal_poses"].to(device)[None, :, :, :].repeat(num_samples, 1, 1, 1) - - tensor_x["type_index"] = x["type_index"].to(device)[None, :].repeat(num_samples, 1) - tensor_x["position_index"] = x["position_index"].to(device)[None, :].repeat(num_samples, 1) - tensor_x["pad_mask"] = x["pad_mask"].to(device)[None, :].repeat(num_samples, 1) - - return tensor_x - - -def compute_min_max(dataloader): - - # tensor([-0.3557, -0.3847, 0.0000, -1.0000, -1.0000, -0.4759, -1.0000, -1.0000, - # -0.9079, -0.8668, -0.9105, -0.4186]) - # tensor([0.3915, 0.3494, 0.3267, 1.0000, 1.0000, 0.8961, 1.0000, 1.0000, 0.8194, - # 0.4787, 0.6421, 1.0000]) - # tensor([0.0918, -0.3758, 0.0000, -1.0000, -1.0000, 0.0000, -1.0000, -1.0000, - # -0.0000, 0.0000, 0.0000, 1.0000]) - # tensor([0.9199, 0.3710, 0.0000, 1.0000, 1.0000, 0.0000, 1.0000, 1.0000, -0.0000, - # 0.0000, 0.0000, 1.0000]) - - min_value = torch.ones(16) * 10000 - max_value = torch.ones(16) * -10000 - for d in tqdm(dataloader): - goal_poses = d["goal_poses"] - goal_poses = goal_poses.reshape(-1, 16) - current_max, _ = torch.max(goal_poses, dim=0) - current_min, _ = torch.min(goal_poses, dim=0) - max_value[max_value < current_max] = current_max[max_value < current_max] - max_value[max_value > current_min] = current_min[max_value > current_min] - print(f"{min_value} - {max_value}") - - -if __name__ == "__main__": - - tokenizer = Tokenizer("/home/weiyu/data_drive/data_new_objects/type_vocabs_coarse.json") - - data_roots = [] - index_roots = [] - for shape, index in [("circle", "index_10k"), ("line", "index_10k"), ("stacking", "index_10k"), ("dinner", "index_10k")]: - 
data_roots.append("/home/weiyu/data_drive/data_new_objects/examples_{}_new_objects/result".format(shape)) - index_roots.append(index) - - dataset = SemanticArrangementDataset(data_roots=data_roots, - index_roots=index_roots, - split="valid", tokenizer=tokenizer, - max_num_target_objects=7, - max_num_distractor_objects=5, - max_num_shape_parameters=5, - max_num_rearrange_features=0, - max_num_anchor_features=0, - num_pts=1024, - use_virtual_structure_frame=True, - ignore_distractor_objects=True, - ignore_rgb=True, - filter_num_moved_objects_range=None, # [5, 5] - data_augmentation=False, - shuffle_object_index=False, - debug=False) - - # print(len(dataset)) - # for d in dataset: - # print("\n\n" + "="*100) - - dataloader = DataLoader(dataset, batch_size=64, shuffle=False, num_workers=8) - for i, d in enumerate(tqdm(dataloader)): - pass - # for k in d: - # if isinstance(d[k], torch.Tensor): - # print("--size", k, d[k].shape) - # for k in d: - # print(k, d[k]) - # - # input("next?") \ No newline at end of file diff --git a/spaces/wuhuik/bingo/src/components/chat-message.tsx b/spaces/wuhuik/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/wuhuik/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
            -
            - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

            {children}

            - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
            -
            -
            - {message.author === 'bot' && } - {message.author === 'bot' && } -
            -
            - ) : null -} diff --git a/spaces/wy213/213a/src/components/ui/codeblock.tsx b/spaces/wy213/213a/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
            -
            - {language} -
            - - -
            -
            - - {value} - -
            - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/wy213/213a/src/lib/bots/bing/index.ts b/spaces/wy213/213a/src/lib/bots/bing/index.ts deleted file mode 100644 index 6fd51ba48cbb1148f13d29e76960c092b807cfae..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,426 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'ActionRequest', - 'Chat', - 'Context', - 'InternalSearchQuery', - 'InternalSearchResult', - 'Disengaged', - 'InternalLoaderMessage', - 'Progress', - 'RenderCardRequest', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 
core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('你的 VPS 或代理可能被封禁,如有疑问,请前往 https://github.com/weaigc/bingo 咨询', ErrorCode.UNKOWN_ERROR) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) 
- }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/xangma/chat-pykg/README.md b/spaces/xangma/chat-pykg/README.md deleted file mode 100644 index 15832009a467141a0041a2b12818bf5150766196..0000000000000000000000000000000000000000 --- a/spaces/xangma/chat-pykg/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: chat-pykg -emoji: 🦀 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/pcb.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/pcb.py deleted file mode 100644 index 92c74148763a600ed331bb0e361588fbf3b09189..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/pcb.py +++ /dev/null @@ -1,314 +0,0 @@ -from __future__ import division, absolute_import -import torch.utils.model_zoo as model_zoo -from torch import nn -from torch.nn import functional as F - -__all__ = ['pcb_p6', 'pcb_p4'] - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth', - 'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=1, - bias=False - ) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, 
stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=stride, - padding=1, - bias=False - ) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False - ) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class DimReduceLayer(nn.Module): - - def __init__(self, in_channels, out_channels, nonlinear): - super(DimReduceLayer, self).__init__() - layers = [] - layers.append( - nn.Conv2d( - in_channels, out_channels, 1, stride=1, padding=0, bias=False - ) - ) - layers.append(nn.BatchNorm2d(out_channels)) - - if nonlinear == 'relu': - layers.append(nn.ReLU(inplace=True)) - elif nonlinear == 'leakyrelu': - layers.append(nn.LeakyReLU(0.1)) - - self.layers = nn.Sequential(*layers) - - def forward(self, x): - return self.layers(x) - - -class PCB(nn.Module): - """Part-based Convolutional Baseline. - - Reference: - Sun et al. Beyond Part Models: Person Retrieval with Refined - Part Pooling (and A Strong Convolutional Baseline). ECCV 2018. - - Public keys: - - ``pcb_p4``: PCB with 4-part strips. - - ``pcb_p6``: PCB with 6-part strips. - """ - - def __init__( - self, - num_classes, - loss, - block, - layers, - parts=6, - reduced_dim=256, - nonlinear='relu', - **kwargs - ): - self.inplanes = 64 - super(PCB, self).__init__() - self.loss = loss - self.parts = parts - self.feature_dim = 512 * block.expansion - - # backbone network - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False - ) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=1) - - # pcb layers - self.parts_avgpool = nn.AdaptiveAvgPool2d((self.parts, 1)) - self.dropout = nn.Dropout(p=0.5) - self.conv5 = DimReduceLayer( - 512 * block.expansion, reduced_dim, nonlinear=nonlinear - ) - self.feature_dim = reduced_dim - self.classifier = nn.ModuleList( - [ - nn.Linear(self.feature_dim, num_classes) - for _ in range(self.parts) - ] - ) - - self._init_params() - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False - ), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def _init_params(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - 
nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu' - ) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm1d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def featuremaps(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - return x - - def forward(self, x): - f = self.featuremaps(x) - v_g = self.parts_avgpool(f) - - if not self.training: - v_g = F.normalize(v_g, p=2, dim=1) - return v_g.view(v_g.size(0), -1) - - v_g = self.dropout(v_g) - v_h = self.conv5(v_g) - - y = [] - for i in range(self.parts): - v_h_i = v_h[:, :, i, :] - v_h_i = v_h_i.view(v_h_i.size(0), -1) - y_i = self.classifier[i](v_h_i) - y.append(y_i) - - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - v_g = F.normalize(v_g, p=2, dim=1) - return y, v_g.view(v_g.size(0), -1) - else: - raise KeyError('Unsupported loss: {}'.format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. - """ - pretrain_dict = model_zoo.load_url(model_url) - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -def pcb_p6(num_classes, loss='softmax', pretrained=True, **kwargs): - model = PCB( - num_classes=num_classes, - loss=loss, - block=Bottleneck, - layers=[3, 4, 6, 3], - last_stride=1, - parts=6, - reduced_dim=256, - nonlinear='relu', - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['resnet50']) - return model - - -def pcb_p4(num_classes, loss='softmax', pretrained=True, **kwargs): - model = PCB( - num_classes=num_classes, - loss=loss, - block=Bottleneck, - layers=[3, 4, 6, 3], - last_stride=1, - parts=4, - reduced_dim=256, - nonlinear='relu', - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['resnet50']) - return model diff --git a/spaces/xiaoguolizi/anime-ai-detect/app.py b/spaces/xiaoguolizi/anime-ai-detect/app.py deleted file mode 100644 index 89224ac0e4493054be928e7fabed7b9d0485e412..0000000000000000000000000000000000000000 --- a/spaces/xiaoguolizi/anime-ai-detect/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -from transformers import pipeline - -detection_pipeline = pipeline("image-classification", "saltacc/anime-ai-detect") - - -def detect(img): - print(img) - output = detection_pipeline(img, top_k=2) - final = {} - for d in output: - final[d["label"]] = d["score"] - return final - - -iface = gr.Interface(fn=detect, inputs=gr.Image(type="pil"), outputs=gr.Label(label="result")) -iface.launch() diff --git a/spaces/xp3857/Image_Restoration_Colorization/Global/data/online_dataset_for_old_photos.py b/spaces/xp3857/Image_Restoration_Colorization/Global/data/online_dataset_for_old_photos.py deleted file mode 100644 index 068410a93eb10d5f00e694fd890f8aaa069526a3..0000000000000000000000000000000000000000 --- a/spaces/xp3857/Image_Restoration_Colorization/Global/data/online_dataset_for_old_photos.py +++ /dev/null @@ 
-1,485 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import os.path -import io -import zipfile -from data.base_dataset import BaseDataset, get_params, get_transform, normalize -from data.image_folder import make_dataset -from PIL import Image -import torchvision.transforms as transforms -import numpy as np -from data.Load_Bigfile import BigFileMemoryLoader -import random -import cv2 -from io import BytesIO - -def pil_to_np(img_PIL): - '''Converts image in PIL format to np.array. - - From W x H x C [0...255] to C x W x H [0..1] - ''' - ar = np.array(img_PIL) - - if len(ar.shape) == 3: - ar = ar.transpose(2, 0, 1) - else: - ar = ar[None, ...] - - return ar.astype(np.float32) / 255. - - -def np_to_pil(img_np): - '''Converts image in np.array format to PIL image. - - From C x W x H [0..1] to W x H x C [0...255] - ''' - ar = np.clip(img_np * 255, 0, 255).astype(np.uint8) - - if img_np.shape[0] == 1: - ar = ar[0] - else: - ar = ar.transpose(1, 2, 0) - - return Image.fromarray(ar) - -def synthesize_salt_pepper(image,amount,salt_vs_pepper): - - ## Give PIL, return the noisy PIL - - img_pil=pil_to_np(image) - - out = img_pil.copy() - p = amount - q = salt_vs_pepper - flipped = np.random.choice([True, False], size=img_pil.shape, - p=[p, 1 - p]) - salted = np.random.choice([True, False], size=img_pil.shape, - p=[q, 1 - q]) - peppered = ~salted - out[flipped & salted] = 1 - out[flipped & peppered] = 0. - noisy = np.clip(out, 0, 1).astype(np.float32) - - - return np_to_pil(noisy) - -def synthesize_gaussian(image,std_l,std_r): - - ## Give PIL, return the noisy PIL - - img_pil=pil_to_np(image) - - mean=0 - std=random.uniform(std_l/255.,std_r/255.) - gauss=np.random.normal(loc=mean,scale=std,size=img_pil.shape) - noisy=img_pil+gauss - noisy=np.clip(noisy,0,1).astype(np.float32) - - return np_to_pil(noisy) - -def synthesize_speckle(image,std_l,std_r): - - ## Give PIL, return the noisy PIL - - img_pil=pil_to_np(image) - - mean=0 - std=random.uniform(std_l/255.,std_r/255.) - gauss=np.random.normal(loc=mean,scale=std,size=img_pil.shape) - noisy=img_pil+gauss*img_pil - noisy=np.clip(noisy,0,1).astype(np.float32) - - return np_to_pil(noisy) - - -def synthesize_low_resolution(img): - w,h=img.size - - new_w=random.randint(int(w/2),w) - new_h=random.randint(int(h/2),h) - - img=img.resize((new_w,new_h),Image.BICUBIC) - - if random.uniform(0,1)<0.5: - img=img.resize((w,h),Image.NEAREST) - else: - img = img.resize((w, h), Image.BILINEAR) - - return img - - -def convertToJpeg(im,quality): - with BytesIO() as f: - im.save(f, format='JPEG',quality=quality) - f.seek(0) - return Image.open(f).convert('RGB') - - -def blur_image_v2(img): - - - x=np.array(img) - kernel_size_candidate=[(3,3),(5,5),(7,7)] - kernel_size=random.sample(kernel_size_candidate,1)[0] - std=random.uniform(1.,5.) 
- - #print("The gaussian kernel size: (%d,%d) std: %.2f"%(kernel_size[0],kernel_size[1],std)) - blur=cv2.GaussianBlur(x,kernel_size,std) - - return Image.fromarray(blur.astype(np.uint8)) - -def online_add_degradation_v2(img): - - task_id=np.random.permutation(4) - - for x in task_id: - if x==0 and random.uniform(0,1)<0.7: - img = blur_image_v2(img) - if x==1 and random.uniform(0,1)<0.7: - flag = random.choice([1, 2, 3]) - if flag == 1: - img = synthesize_gaussian(img, 5, 50) - if flag == 2: - img = synthesize_speckle(img, 5, 50) - if flag == 3: - img = synthesize_salt_pepper(img, random.uniform(0, 0.01), random.uniform(0.3, 0.8)) - if x==2 and random.uniform(0,1)<0.7: - img=synthesize_low_resolution(img) - - if x==3 and random.uniform(0,1)<0.7: - img=convertToJpeg(img,random.randint(40,100)) - - return img - - -def irregular_hole_synthesize(img,mask): - - img_np=np.array(img).astype('uint8') - mask_np=np.array(mask).astype('uint8') - mask_np=mask_np/255 - img_new=img_np*(1-mask_np)+mask_np*255 - - - hole_img=Image.fromarray(img_new.astype('uint8')).convert("RGB") - - return hole_img,mask.convert("L") - -def zero_mask(size): - x=np.zeros((size,size,3)).astype('uint8') - mask=Image.fromarray(x).convert("RGB") - return mask - - - -class UnPairOldPhotos_SR(BaseDataset): ## Synthetic + Real Old - def initialize(self, opt): - self.opt = opt - self.isImage = 'domainA' in opt.name - self.task = 'old_photo_restoration_training_vae' - self.dir_AB = opt.dataroot - if self.isImage: - - self.load_img_dir_L_old=os.path.join(self.dir_AB,"Real_L_old.bigfile") - self.load_img_dir_RGB_old=os.path.join(self.dir_AB,"Real_RGB_old.bigfile") - self.load_img_dir_clean=os.path.join(self.dir_AB,"VOC_RGB_JPEGImages.bigfile") - - self.loaded_imgs_L_old=BigFileMemoryLoader(self.load_img_dir_L_old) - self.loaded_imgs_RGB_old=BigFileMemoryLoader(self.load_img_dir_RGB_old) - self.loaded_imgs_clean=BigFileMemoryLoader(self.load_img_dir_clean) - - else: - # self.load_img_dir_clean=os.path.join(self.dir_AB,self.opt.test_dataset) - self.load_img_dir_clean=os.path.join(self.dir_AB,"VOC_RGB_JPEGImages.bigfile") - self.loaded_imgs_clean=BigFileMemoryLoader(self.load_img_dir_clean) - - #### - print("-------------Filter the imgs whose size <256 in VOC-------------") - self.filtered_imgs_clean=[] - for i in range(len(self.loaded_imgs_clean)): - img_name,img=self.loaded_imgs_clean[i] - h,w=img.size - if h<256 or w<256: - continue - self.filtered_imgs_clean.append((img_name,img)) - - print("--------Origin image num is [%d], filtered result is [%d]--------" % ( - len(self.loaded_imgs_clean), len(self.filtered_imgs_clean))) - ## Filter these images whose size is less than 256 - - # self.img_list=os.listdir(load_img_dir) - self.pid = os.getpid() - - def __getitem__(self, index): - - - is_real_old=0 - - sampled_dataset=None - degradation=None - if self.isImage: ## domain A , contains 2 kinds of data: synthetic + real_old - P=random.uniform(0,2) - if P>=0 and P<1: - if random.uniform(0,1)<0.5: - sampled_dataset=self.loaded_imgs_L_old - self.load_img_dir=self.load_img_dir_L_old - else: - sampled_dataset=self.loaded_imgs_RGB_old - self.load_img_dir=self.load_img_dir_RGB_old - is_real_old=1 - if P>=1 and P<2: - sampled_dataset=self.filtered_imgs_clean - self.load_img_dir=self.load_img_dir_clean - degradation=1 - else: - - sampled_dataset=self.filtered_imgs_clean - self.load_img_dir=self.load_img_dir_clean - - sampled_dataset_len=len(sampled_dataset) - - index=random.randint(0,sampled_dataset_len-1) - - img_name,img = sampled_dataset[index] 
- - if degradation is not None: - img=online_add_degradation_v2(img) - - path=os.path.join(self.load_img_dir,img_name) - - # AB = Image.open(path).convert('RGB') - # split AB image into A and B - - # apply the same transform to both A and B - - if random.uniform(0,1) <0.1: - img=img.convert("L") - img=img.convert("RGB") - ## Give a probability P, we convert the RGB image into L - - - A=img - w,h=A.size - if w<256 or h<256: - A=transforms.Scale(256,Image.BICUBIC)(A) - ## Since we want to only crop the images (256*256), for those old photos whose size is smaller than 256, we first resize them. - - transform_params = get_params(self.opt, A.size) - A_transform = get_transform(self.opt, transform_params) - - B_tensor = inst_tensor = feat_tensor = 0 - A_tensor = A_transform(A) - - - input_dict = {'label': A_tensor, 'inst': is_real_old, 'image': A_tensor, - 'feat': feat_tensor, 'path': path} - return input_dict - - def __len__(self): - return len(self.loaded_imgs_clean) ## actually, this is useless, since the selected index is just a random number - - def name(self): - return 'UnPairOldPhotos_SR' - - -class PairOldPhotos(BaseDataset): - def initialize(self, opt): - self.opt = opt - self.isImage = 'imagegan' in opt.name - self.task = 'old_photo_restoration_training_mapping' - self.dir_AB = opt.dataroot - if opt.isTrain: - self.load_img_dir_clean= os.path.join(self.dir_AB, "VOC_RGB_JPEGImages.bigfile") - self.loaded_imgs_clean = BigFileMemoryLoader(self.load_img_dir_clean) - - print("-------------Filter the imgs whose size <256 in VOC-------------") - self.filtered_imgs_clean = [] - for i in range(len(self.loaded_imgs_clean)): - img_name, img = self.loaded_imgs_clean[i] - h, w = img.size - if h < 256 or w < 256: - continue - self.filtered_imgs_clean.append((img_name, img)) - - print("--------Origin image num is [%d], filtered result is [%d]--------" % ( - len(self.loaded_imgs_clean), len(self.filtered_imgs_clean))) - - else: - self.load_img_dir=os.path.join(self.dir_AB,opt.test_dataset) - self.loaded_imgs=BigFileMemoryLoader(self.load_img_dir) - - self.pid = os.getpid() - - def __getitem__(self, index): - - - - if self.opt.isTrain: - img_name_clean,B = self.filtered_imgs_clean[index] - path = os.path.join(self.load_img_dir_clean, img_name_clean) - if self.opt.use_v2_degradation: - A=online_add_degradation_v2(B) - ### Remind: A is the input and B is corresponding GT - else: - - if self.opt.test_on_synthetic: - - img_name_B,B=self.loaded_imgs[index] - A=online_add_degradation_v2(B) - img_name_A=img_name_B - path = os.path.join(self.load_img_dir, img_name_A) - else: - img_name_A,A=self.loaded_imgs[index] - img_name_B,B=self.loaded_imgs[index] - path = os.path.join(self.load_img_dir, img_name_A) - - - if random.uniform(0,1)<0.1 and self.opt.isTrain: - A=A.convert("L") - B=B.convert("L") - A=A.convert("RGB") - B=B.convert("RGB") - ## In P, we convert the RGB into L - - - ##test on L - - # split AB image into A and B - # w, h = img.size - # w2 = int(w / 2) - # A = img.crop((0, 0, w2, h)) - # B = img.crop((w2, 0, w, h)) - w,h=A.size - if w<256 or h<256: - A=transforms.Scale(256,Image.BICUBIC)(A) - B=transforms.Scale(256, Image.BICUBIC)(B) - - # apply the same transform to both A and B - transform_params = get_params(self.opt, A.size) - A_transform = get_transform(self.opt, transform_params) - B_transform = get_transform(self.opt, transform_params) - - B_tensor = inst_tensor = feat_tensor = 0 - A_tensor = A_transform(A) - B_tensor = B_transform(B) - - input_dict = {'label': A_tensor, 'inst': inst_tensor, 
'image': B_tensor, - 'feat': feat_tensor, 'path': path} - return input_dict - - def __len__(self): - - if self.opt.isTrain: - return len(self.filtered_imgs_clean) - else: - return len(self.loaded_imgs) - - def name(self): - return 'PairOldPhotos' - - -class PairOldPhotos_with_hole(BaseDataset): - def initialize(self, opt): - self.opt = opt - self.isImage = 'imagegan' in opt.name - self.task = 'old_photo_restoration_training_mapping' - self.dir_AB = opt.dataroot - if opt.isTrain: - self.load_img_dir_clean= os.path.join(self.dir_AB, "VOC_RGB_JPEGImages.bigfile") - self.loaded_imgs_clean = BigFileMemoryLoader(self.load_img_dir_clean) - - print("-------------Filter the imgs whose size <256 in VOC-------------") - self.filtered_imgs_clean = [] - for i in range(len(self.loaded_imgs_clean)): - img_name, img = self.loaded_imgs_clean[i] - h, w = img.size - if h < 256 or w < 256: - continue - self.filtered_imgs_clean.append((img_name, img)) - - print("--------Origin image num is [%d], filtered result is [%d]--------" % ( - len(self.loaded_imgs_clean), len(self.filtered_imgs_clean))) - - else: - self.load_img_dir=os.path.join(self.dir_AB,opt.test_dataset) - self.loaded_imgs=BigFileMemoryLoader(self.load_img_dir) - - self.loaded_masks = BigFileMemoryLoader(opt.irregular_mask) - - self.pid = os.getpid() - - def __getitem__(self, index): - - - - if self.opt.isTrain: - img_name_clean,B = self.filtered_imgs_clean[index] - path = os.path.join(self.load_img_dir_clean, img_name_clean) - - - B=transforms.RandomCrop(256)(B) - A=online_add_degradation_v2(B) - ### Remind: A is the input and B is corresponding GT - - else: - img_name_A,A=self.loaded_imgs[index] - img_name_B,B=self.loaded_imgs[index] - path = os.path.join(self.load_img_dir, img_name_A) - - #A=A.resize((256,256)) - A=transforms.CenterCrop(256)(A) - B=A - - if random.uniform(0,1)<0.1 and self.opt.isTrain: - A=A.convert("L") - B=B.convert("L") - A=A.convert("RGB") - B=B.convert("RGB") - ## In P, we convert the RGB into L - - if self.opt.isTrain: - mask_name,mask=self.loaded_masks[random.randint(0,len(self.loaded_masks)-1)] - else: - mask_name, mask = self.loaded_masks[index%100] - mask = mask.resize((self.opt.loadSize, self.opt.loadSize), Image.NEAREST) - - if self.opt.random_hole and random.uniform(0,1)>0.5 and self.opt.isTrain: - mask=zero_mask(256) - - if self.opt.no_hole: - mask=zero_mask(256) - - - A,_=irregular_hole_synthesize(A,mask) - - if not self.opt.isTrain and self.opt.hole_image_no_mask: - mask=zero_mask(256) - - transform_params = get_params(self.opt, A.size) - A_transform = get_transform(self.opt, transform_params) - B_transform = get_transform(self.opt, transform_params) - - if transform_params['flip'] and self.opt.isTrain: - mask=mask.transpose(Image.FLIP_LEFT_RIGHT) - - mask_tensor = transforms.ToTensor()(mask) - - - B_tensor = inst_tensor = feat_tensor = 0 - A_tensor = A_transform(A) - B_tensor = B_transform(B) - - input_dict = {'label': A_tensor, 'inst': mask_tensor[:1], 'image': B_tensor, - 'feat': feat_tensor, 'path': path} - return input_dict - - def __len__(self): - - if self.opt.isTrain: - return len(self.filtered_imgs_clean) - - else: - return len(self.loaded_imgs) - - def name(self): - return 'PairOldPhotos_with_hole' \ No newline at end of file diff --git a/spaces/xuetao/bingo3/src/components/markdown.tsx b/spaces/xuetao/bingo3/src/components/markdown.tsx deleted file mode 100644 index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/components/markdown.tsx 
+++ /dev/null @@ -1,9 +0,0 @@ -import { FC, memo } from 'react' -import ReactMarkdown, { Options } from 'react-markdown' - -export const MemoizedReactMarkdown: FC = memo( - ReactMarkdown, - (prevProps, nextProps) => - prevProps.children === nextProps.children && - prevProps.className === nextProps.className -) diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/data/vqa_dataset.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/data/vqa_dataset.py deleted file mode 100644 index 92ec1df429b3910316ddd554bfea01c6e7922cae..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/data/vqa_dataset.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import json -import random -from PIL import Image - -import torch -from torch.utils.data import Dataset -from data.utils import pre_question - -from torchvision.datasets.utils import download_url - -class vqa_dataset(Dataset): - def __init__(self, transform, ann_root, vqa_root, vg_root, train_files=[], split="train"): - self.split = split - - self.transform = transform - self.vqa_root = vqa_root - self.vg_root = vg_root - - if split=='train': - urls = {'vqa_train':'https://storage.googleapis.com/sfr-vision-language-research/datasets/vqa_train.json', - 'vqa_val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/vqa_val.json', - 'vg_qa':'https://storage.googleapis.com/sfr-vision-language-research/datasets/vg_qa.json'} - - self.annotation = [] - for f in train_files: - download_url(urls[f],ann_root) - self.annotation += json.load(open(os.path.join(ann_root,'%s.json'%f),'r')) - else: - download_url('https://storage.googleapis.com/sfr-vision-language-research/datasets/vqa_test.json',ann_root) - self.annotation = json.load(open(os.path.join(ann_root,'vqa_test.json'),'r')) - - download_url('https://storage.googleapis.com/sfr-vision-language-research/datasets/answer_list.json',ann_root) - self.answer_list = json.load(open(os.path.join(ann_root,'answer_list.json'),'r')) - - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - if ann['dataset']=='vqa': - image_path = os.path.join(self.vqa_root,ann['image']) - elif ann['dataset']=='vg': - image_path = os.path.join(self.vg_root,ann['image']) - - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - if self.split == 'test': - question = pre_question(ann['question']) - question_id = ann['question_id'] - return image, question, question_id - - - elif self.split=='train': - - question = pre_question(ann['question']) - - if ann['dataset']=='vqa': - answer_weight = {} - for answer in ann['answer']: - if answer in answer_weight.keys(): - answer_weight[answer] += 1/len(ann['answer']) - else: - answer_weight[answer] = 1/len(ann['answer']) - - answers = list(answer_weight.keys()) - weights = list(answer_weight.values()) - - elif ann['dataset']=='vg': - answers = [ann['answer']] - weights = [0.2] - - return image, question, answers, weights - - -def vqa_collate_fn(batch): - image_list, question_list, answer_list, weight_list, n = [], [], [], [], [] - for image, question, answer, weights in batch: - image_list.append(image) - question_list.append(question) - weight_list += weights - answer_list += answer - n.append(len(answer)) - return torch.stack(image_list,dim=0), question_list, answer_list, torch.Tensor(weight_list), n \ No newline at end of file diff --git a/spaces/yfyangd/PictureBookUnderstanding/app.py b/spaces/yfyangd/PictureBookUnderstanding/app.py deleted file mode 
100644 index 929a23e1adf5e7a6f7fa3ae36fa952413cb712f3..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/app.py +++ /dev/null @@ -1,150 +0,0 @@ -import clip -import gc -import numpy as np -import os -import pandas as pd -import requests -import torch -import torchvision.transforms as T -import torchvision.transforms.functional as TF - -#from IPython.display import display -from PIL import Image -from torch import nn -from torch.nn import functional as F -from torchvision import transforms -from torchvision.transforms.functional import InterpolationMode -from BLIP.models.blip import blip_decoder - -import gradio as gr - -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') -blip_image_eval_size = 384 -blip_model_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth' -blip_model = blip_decoder(pretrained=blip_model_url, image_size=blip_image_eval_size, vit='base') -blip_model.eval() -blip_model = blip_model.to(device) - -def generate_caption(pil_image): - gpu_image = transforms.Compose([ - transforms.Resize((blip_image_eval_size, blip_image_eval_size), interpolation=InterpolationMode.BICUBIC), - transforms.ToTensor(), - transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)) - ])(pil_image).unsqueeze(0).to(device) - - with torch.no_grad(): - caption = blip_model.generate(gpu_image, sample=False, num_beams=3, max_length=20, min_length=5) - return caption[0] - -def load_list(filename): - with open(filename, 'r', encoding='utf-8', errors='replace') as f: - items = [line.strip() for line in f.readlines()] - return items - -def rank(model, image_features, text_array, top_count=1): - top_count = min(top_count, len(text_array)) - text_tokens = clip.tokenize([text for text in text_array])#.cuda() - with torch.no_grad(): - text_features = model.encode_text(text_tokens).float() - text_features /= text_features.norm(dim=-1, keepdim=True) - - similarity = torch.zeros((1, len(text_array))).to(device) - for i in range(image_features.shape[0]): - similarity += (100.0 * image_features[i].unsqueeze(0) @ text_features.T).softmax(dim=-1) - similarity /= image_features.shape[0] - - top_probs, top_labels = similarity.cpu().topk(top_count, dim=-1) - return [(text_array[top_labels[0][i].numpy()], (top_probs[0][i].numpy()*100)) for i in range(top_count)] - -def interrogate(cover): - image = Image.fromarray(cover) - #image = cover - models = models1 - #caption = generate_caption(Image.fromarray(cover)) - caption = generate_caption(image) - if len(models) == 0: - #print(f"\n\n{caption}") - return - - table = [] - bests = [[('',0)]]*5 - for model_name in models: - #print(f"Interrogating with {model_name}...") - model, preprocess = clip.load(model_name) - #model.cuda().eval() - - images = preprocess(image).unsqueeze(0)#.cuda() - with torch.no_grad(): - image_features = model.encode_image(images).float() - image_features /= image_features.norm(dim=-1, keepdim=True) - - ranks = [ - rank(model, image_features, mediums), - rank(model, image_features, ["by "+artist for artist in artists]), - rank(model, image_features, trending_list), - rank(model, image_features, movements), - rank(model, image_features, flavors, top_count=3) - ] - - for i in range(len(ranks)): - confidence_sum = 0 - for ci in range(len(ranks[i])): - confidence_sum += ranks[i][ci][1] - if confidence_sum > sum(bests[i][t][1] for t in range(len(bests[i]))): - bests[i] = ranks[i] - - row = [model_name] - for r in 
ranks: - row.append(', '.join([f"{x[0]} ({x[1]:0.1f}%)" for x in r])) - - table.append(row) - - del model - gc.collect() - #display(pd.DataFrame(table, columns=["Model", "Medium", "Artist", "Trending", "Movement", "Flavors"])) - - flaves = ', '.join([f"{x[0]}" for x in bests[4]]) - medium = bests[0][0][0] - if caption.startswith(medium): - return(f"{caption} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}") - #print(f"{caption} {bests[3][0][0]}, {flaves}") - else: - return(f"{caption}, {medium} {bests[1][0][0]}, {bests[2][0][0]}, {bests[3][0][0]}, {flaves}") - #print(f"{caption} {bests[3][0][0]}, {flaves}") - -data_path = "./clip-interrogator/data/" - -artists = load_list(os.path.join(data_path, 'artists.txt')) -flavors = load_list(os.path.join(data_path, 'flavors.txt')) -mediums = load_list(os.path.join(data_path, 'mediums.txt')) -movements = load_list(os.path.join(data_path, 'movements.txt')) - -sites = ['Artstation', 'behance', 'cg society', 'cgsociety', 'deviantart', 'dribble', 'flickr', 'instagram', 'pexels', 'pinterest', 'pixabay', 'pixiv', 'polycount', 'reddit', 'shutterstock', 'tumblr', 'unsplash', 'zbrush central'] -trending_list = [site for site in sites] -trending_list.extend(["trending on "+site for site in sites]) -trending_list.extend(["featured on "+site for site in sites]) -trending_list.extend([site+" contest winner" for site in sites]) - -models1 = ['ViT-B/32'] - -width = 130 -height = 180 - -cover = gr.inputs.Image(shape=(width, height), label='Upload cover image to classify') -label = gr.outputs.Label(label='Model prediction') - -examples=["00064.jpg","00068.jpg", "00069.jpg"] - -title="Image2Text-CLIP Application" - -description=''' -此文本是使用 OpenAI CLIP 模型針對各種藝術家、媒介和風格測試給定圖像,轉化出AI對於圖像的理解. - - - - -### 以下請輸入指定圖片, 或是選擇以下3個樣本 - -''' - -gr.Interface(fn=interrogate,inputs=cover,outputs=label,examples=examples,title=title,description=description).launch()#(share=True) \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/levit/convert_levit_timm_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/levit/convert_levit_timm_to_pytorch.py deleted file mode 100644 index 6f285a6de3938d513f67869c8ac830b500aaae19..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/levit/convert_levit_timm_to_pytorch.py +++ /dev/null @@ -1,181 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Convert LeViT checkpoints from timm.""" - - -import argparse -import json -from collections import OrderedDict -from functools import partial -from pathlib import Path - -import timm -import torch -from huggingface_hub import hf_hub_download - -from transformers import LevitConfig, LevitForImageClassificationWithTeacher, LevitImageProcessor -from transformers.utils import logging - - -logging.set_verbosity_info() -logger = logging.get_logger() - - -def convert_weight_and_push( - hidden_sizes: int, name: str, config: LevitConfig, save_directory: Path, push_to_hub: bool = True -): - print(f"Converting {name}...") - - with torch.no_grad(): - if hidden_sizes == 128: - if name[-1] == "S": - from_model = timm.create_model("levit_128s", pretrained=True) - else: - from_model = timm.create_model("levit_128", pretrained=True) - if hidden_sizes == 192: - from_model = timm.create_model("levit_192", pretrained=True) - if hidden_sizes == 256: - from_model = timm.create_model("levit_256", pretrained=True) - if hidden_sizes == 384: - from_model = timm.create_model("levit_384", pretrained=True) - - from_model.eval() - our_model = LevitForImageClassificationWithTeacher(config).eval() - huggingface_weights = OrderedDict() - - weights = from_model.state_dict() - og_keys = list(from_model.state_dict().keys()) - new_keys = list(our_model.state_dict().keys()) - print(len(og_keys), len(new_keys)) - for i in range(len(og_keys)): - huggingface_weights[new_keys[i]] = weights[og_keys[i]] - our_model.load_state_dict(huggingface_weights) - - x = torch.randn((2, 3, 224, 224)) - out1 = from_model(x) - out2 = our_model(x).logits - - assert torch.allclose(out1, out2), "The model logits don't match the original one." - - checkpoint_name = name - print(checkpoint_name) - - if push_to_hub: - our_model.save_pretrained(save_directory / checkpoint_name) - image_processor = LevitImageProcessor() - image_processor.save_pretrained(save_directory / checkpoint_name) - - print(f"Pushed {checkpoint_name}") - - -def convert_weights_and_push(save_directory: Path, model_name: str = None, push_to_hub: bool = True): - filename = "imagenet-1k-id2label.json" - num_labels = 1000 - expected_shape = (1, num_labels) - - repo_id = "huggingface/label-files" - num_labels = num_labels - id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) - id2label = {int(k): v for k, v in id2label.items()} - - id2label = id2label - label2id = {v: k for k, v in id2label.items()} - - ImageNetPreTrainedConfig = partial(LevitConfig, num_labels=num_labels, id2label=id2label, label2id=label2id) - - names_to_hidden_sizes = { - "levit-128S": 128, - "levit-128": 128, - "levit-192": 192, - "levit-256": 256, - "levit-384": 384, - } - - names_to_config = { - "levit-128S": ImageNetPreTrainedConfig( - hidden_sizes=[128, 256, 384], - num_attention_heads=[4, 6, 8], - depths=[2, 3, 4], - key_dim=[16, 16, 16], - drop_path_rate=0, - ), - "levit-128": ImageNetPreTrainedConfig( - hidden_sizes=[128, 256, 384], - num_attention_heads=[4, 8, 12], - depths=[4, 4, 4], - key_dim=[16, 16, 16], - drop_path_rate=0, - ), - "levit-192": ImageNetPreTrainedConfig( - hidden_sizes=[192, 288, 384], - num_attention_heads=[3, 5, 6], - depths=[4, 4, 4], - key_dim=[32, 32, 32], - drop_path_rate=0, - ), - "levit-256": ImageNetPreTrainedConfig( - hidden_sizes=[256, 384, 512], - num_attention_heads=[4, 6, 8], - depths=[4, 4, 4], - key_dim=[32, 32, 32], - drop_path_rate=0, - ), - "levit-384": ImageNetPreTrainedConfig( - hidden_sizes=[384, 512, 768], - 
num_attention_heads=[6, 9, 12], - depths=[4, 4, 4], - key_dim=[32, 32, 32], - drop_path_rate=0.1, - ), - } - - if model_name: - convert_weight_and_push( - names_to_hidden_sizes[model_name], model_name, names_to_config[model_name], save_directory, push_to_hub - ) - else: - for model_name, config in names_to_config.items(): - convert_weight_and_push(names_to_hidden_sizes[model_name], model_name, config, save_directory, push_to_hub) - return config, expected_shape - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument( - "--model_name", - default=None, - type=str, - help="The name of the model you wish to convert, it must be one of the supported Levit* architecture,", - ) - parser.add_argument( - "--pytorch_dump_folder_path", - default="levit-dump-folder/", - type=Path, - required=False, - help="Path to the output PyTorch model directory.", - ) - parser.add_argument("--push_to_hub", action="store_true", help="Push model and image processor to the hub") - parser.add_argument( - "--no-push_to_hub", - dest="push_to_hub", - action="store_false", - help="Do not push model and image processor to the hub", - ) - - args = parser.parse_args() - pytorch_dump_folder_path: Path = args.pytorch_dump_folder_path - pytorch_dump_folder_path.mkdir(exist_ok=True, parents=True) - convert_weights_and_push(pytorch_dump_folder_path, args.model_name, args.push_to_hub) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/wavenet.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/wavenet.py deleted file mode 100644 index 3d48c7eaaa0e8191b27a5d1890eb657cbcc0d143..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/wavenet.py +++ /dev/null @@ -1,108 +0,0 @@ -import math -from math import sqrt - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Mish - - -class Conv1d(torch.nn.Conv1d): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - nn.init.kaiming_normal_(self.weight) - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class ResidualBlock(nn.Module): - def __init__(self, encoder_hidden, residual_channels, dilation): - super().__init__() - self.residual_channels = residual_channels - self.dilated_conv = nn.Conv1d( - residual_channels, - 2 * residual_channels, - kernel_size=3, - padding=dilation, - dilation=dilation - ) - self.diffusion_projection = nn.Linear(residual_channels, residual_channels) - self.conditioner_projection = nn.Conv1d(encoder_hidden, 2 * residual_channels, 1) - self.output_projection = nn.Conv1d(residual_channels, 2 * residual_channels, 1) - - def forward(self, x, conditioner, diffusion_step): - diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1) - conditioner = self.conditioner_projection(conditioner) - y = x + diffusion_step - - y = self.dilated_conv(y) + conditioner - - # Using torch.split instead of torch.chunk to avoid using onnx::Slice - gate, filter = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - y = torch.sigmoid(gate) * torch.tanh(filter) - - y = self.output_projection(y) - - # Using torch.split instead of torch.chunk to avoid 
using onnx::Slice - residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - return (x + residual) / math.sqrt(2.0), skip - - -class WaveNet(nn.Module): - def __init__(self, in_dims=128, n_layers=20, n_chans=384, n_hidden=256): - super().__init__() - self.input_projection = Conv1d(in_dims, n_chans, 1) - self.diffusion_embedding = SinusoidalPosEmb(n_chans) - self.mlp = nn.Sequential( - nn.Linear(n_chans, n_chans * 4), - Mish(), - nn.Linear(n_chans * 4, n_chans) - ) - self.residual_layers = nn.ModuleList([ - ResidualBlock( - encoder_hidden=n_hidden, - residual_channels=n_chans, - dilation=1 - ) - for i in range(n_layers) - ]) - self.skip_projection = Conv1d(n_chans, n_chans, 1) - self.output_projection = Conv1d(n_chans, in_dims, 1) - nn.init.zeros_(self.output_projection.weight) - - def forward(self, spec, diffusion_step, cond): - """ - :param spec: [B, 1, M, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec.squeeze(1) - x = self.input_projection(x) # [B, residual_channel, T] - - x = F.relu(x) - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) - skip = [] - for layer in self.residual_layers: - x, skip_connection = layer(x, cond, diffusion_step) - skip.append(skip_connection) - - x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers)) - x = self.skip_projection(x) - x = F.relu(x) - x = self.output_projection(x) # [B, mel_bins, T] - return x[:, None, :, :] diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.h b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.h deleted file mode 100644 index db246e49a026b7cd989b305f4d3d98100be3c912..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.h +++ /dev/null @@ -1,88 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -#pragma once - -#include -#include -#include -#include -#include - -namespace py = pybind11; - -namespace detectron2 { - -namespace COCOeval { - -// Annotation data for a single object instance in an image -struct InstanceAnnotation { - InstanceAnnotation( - uint64_t id, - double score, - double area, - bool is_crowd, - bool ignore) - : id{id}, score{score}, area{area}, is_crowd{is_crowd}, ignore{ignore} {} - uint64_t id; - double score = 0.; - double area = 0.; - bool is_crowd = false; - bool ignore = false; -}; - -// Stores intermediate results for evaluating detection results for a single -// image that has D detected instances and G ground truth instances. This stores -// matches between detected and ground truth instances -struct ImageEvaluation { - // For each of the D detected instances, the id of the matched ground truth - // instance, or 0 if unmatched - std::vector detection_matches; - - // The detection score of each of the D detected instances - std::vector detection_scores; - - // Marks whether or not each of G instances was ignored from evaluation (e.g., - // because it's outside area_range) - std::vector ground_truth_ignores; - - // Marks whether or not each of D instances was ignored from evaluation (e.g., - // because it's outside aRng) - std::vector detection_ignores; -}; - -template -using ImageCategoryInstances = std::vector>>; - -// C++ implementation of COCO API cocoeval.py::COCOeval.evaluateImg(). 
For each -// combination of image, category, area range settings, and IOU thresholds to -// evaluate, it matches detected instances to ground truth instances and stores -// the results into a vector of ImageEvaluation results, which will be -// interpreted by the COCOeval::Accumulate() function to produce precion-recall -// curves. The parameters of nested vectors have the following semantics: -// image_category_ious[i][c][d][g] is the intersection over union of the d'th -// detected instance and g'th ground truth instance of -// category category_ids[c] in image image_ids[i] -// image_category_ground_truth_instances[i][c] is a vector of ground truth -// instances in image image_ids[i] of category category_ids[c] -// image_category_detection_instances[i][c] is a vector of detected -// instances in image image_ids[i] of category category_ids[c] -std::vector EvaluateImages( - const std::vector>& area_ranges, // vector of 2-tuples - int max_detections, - const std::vector& iou_thresholds, - const ImageCategoryInstances>& image_category_ious, - const ImageCategoryInstances& - image_category_ground_truth_instances, - const ImageCategoryInstances& - image_category_detection_instances); - -// C++ implementation of COCOeval.accumulate(), which generates precision -// recall curves for each set of category, IOU threshold, detection area range, -// and max number of detections parameters. It is assumed that the parameter -// evaluations is the return value of the functon COCOeval::EvaluateImages(), -// which was called with the same parameter settings params -py::dict Accumulate( - const py::object& params, - const std::vector& evalutations); - -} // namespace COCOeval -} // namespace detectron2 diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/layers/test_nms.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/layers/test_nms.py deleted file mode 100644 index a042db6147f110a82597c98f38e6b2221ccad53c..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/layers/test_nms.py +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from __future__ import absolute_import, division, print_function, unicode_literals -import unittest -import torch - -from detectron2.layers import batched_nms -from detectron2.utils.testing import random_boxes - - -class TestNMS(unittest.TestCase): - def _create_tensors(self, N): - boxes = random_boxes(N, 200) - scores = torch.rand(N) - return boxes, scores - - def test_nms_scriptability(self): - N = 2000 - num_classes = 50 - boxes, scores = self._create_tensors(N) - idxs = torch.randint(0, num_classes, (N,)) - scripted_batched_nms = torch.jit.script(batched_nms) - err_msg = "NMS is incompatible with jit-scripted NMS for IoU={}" - - for iou in [0.2, 0.5, 0.8]: - keep_ref = batched_nms(boxes, scores, idxs, iou) - backup = boxes.clone() - scripted_keep = scripted_batched_nms(boxes, scores, idxs, iou) - assert torch.allclose(boxes, backup), "boxes modified by jit-scripted batched_nms" - self.assertTrue(torch.equal(keep_ref, scripted_keep), err_msg.format(iou)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/z-uo/HTS-Audio-Transformer/sed_model.py b/spaces/z-uo/HTS-Audio-Transformer/sed_model.py deleted file mode 100644 index c82ed23880c613a28b57c4870823af3025bdcd29..0000000000000000000000000000000000000000 --- a/spaces/z-uo/HTS-Audio-Transformer/sed_model.py +++ /dev/null @@ -1,357 +0,0 @@ -# Ke Chen -# knutchen@ucsd.edu -# HTS-AT: A HIERARCHICAL TOKEN-SEMANTIC AUDIO TRANSFORMER FOR SOUND CLASSIFICATION AND DETECTION -# The Model Training Wrapper -import numpy as np -import librosa -import os -import bisect -from numpy.lib.function_base import average - -from sklearn.metrics import average_precision_score, roc_auc_score, accuracy_score - -from utils import get_loss_func, get_mix_lambda, d_prime -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -import torch.optim as optim -from torch.nn.parameter import Parameter -import torch.distributed as dist -import pytorch_lightning as pl -from utils import do_mixup, get_mix_lambda, do_mixup_label - - -class SEDWrapper(pl.LightningModule): - def __init__(self, sed_model, config, dataset): - super().__init__() - self.sed_model = sed_model - self.config = config - self.dataset = dataset - self.loss_func = get_loss_func(config.loss_type) - - def evaluate_metric(self, pred, ans): - ap = [] - if self.config.dataset_type == "audioset": - mAP = np.mean(average_precision_score(ans, pred, average = None)) - mAUC = np.mean(roc_auc_score(ans, pred, average = None)) - dprime = d_prime(mAUC) - return {"mAP": mAP, "mAUC": mAUC, "dprime": dprime} - else: - acc = accuracy_score(ans, np.argmax(pred, 1)) - return {"acc": acc} - def forward(self, x, mix_lambda = None): - output_dict = self.sed_model(x, mix_lambda) - return output_dict["clipwise_output"], output_dict["framewise_output"] - - def inference(self, x): - self.device_type = next(self.parameters()).device - self.eval() - x = torch.from_numpy(x).float().to(self.device_type) - print(x.shape) - output_dict = self.sed_model(x, None, True) - for key in output_dict.keys(): - output_dict[key] = output_dict[key].detach().cpu().numpy() - return output_dict - - def training_step(self, batch, batch_idx): - self.device_type = next(self.parameters()).device - mix_lambda = torch.from_numpy(get_mix_lambda(0.5, len(batch["waveform"]))).to(self.device_type) - # Another Choice: also mixup the target, but AudioSet is not a perfect data - # so "adding noise" might be better than purly "mix" - # batch["target"] = do_mixup_label(batch["target"]) - # 
batch["target"] = do_mixup(batch["target"], mix_lambda) - pred, _ = self(batch["waveform"], mix_lambda) - loss = self.loss_func(pred, batch["target"]) - self.log("loss", loss, on_epoch= True, prog_bar=True) - return loss - def training_epoch_end(self, outputs): - # Change: SWA, deprecated - # for opt in self.trainer.optimizers: - # if not type(opt) is SWA: - # continue - # opt.swap_swa_sgd() - self.dataset.generate_queue() - - - def validation_step(self, batch, batch_idx): - pred, _ = self(batch["waveform"]) - return [pred.detach(), batch["target"].detach()] - - def validation_epoch_end(self, validation_step_outputs): - self.device_type = next(self.parameters()).device - pred = torch.cat([d[0] for d in validation_step_outputs], dim = 0) - target = torch.cat([d[1] for d in validation_step_outputs], dim = 0) - gather_pred = [torch.zeros_like(pred) for _ in range(dist.get_world_size())] - gather_target = [torch.zeros_like(target) for _ in range(dist.get_world_size())] - dist.barrier() - if self.config.dataset_type == "audioset": - metric_dict = { - "mAP": 0., - "mAUC": 0., - "dprime": 0. - } - else: - metric_dict = { - "acc":0. - } - dist.all_gather(gather_pred, pred) - dist.all_gather(gather_target, target) - if dist.get_rank() == 0: - gather_pred = torch.cat(gather_pred, dim = 0).cpu().numpy() - gather_target = torch.cat(gather_target, dim = 0).cpu().numpy() - if self.config.dataset_type == "scv2": - gather_target = np.argmax(gather_target, 1) - metric_dict = self.evaluate_metric(gather_pred, gather_target) - print(self.device_type, dist.get_world_size(), metric_dict, flush = True) - - if self.config.dataset_type == "audioset": - self.log("mAP", metric_dict["mAP"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - self.log("mAUC", metric_dict["mAUC"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - self.log("dprime", metric_dict["dprime"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - else: - self.log("acc", metric_dict["acc"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - dist.barrier() - - def time_shifting(self, x, shift_len): - shift_len = int(shift_len) - new_sample = torch.cat([x[:, shift_len:], x[:, :shift_len]], axis = 1) - return new_sample - - def test_step(self, batch, batch_idx): - print(batch['waveform'].shape) - exit() - self.device_type = next(self.parameters()).device - preds = [] - # time shifting optimization - if self.config.fl_local or self.config.dataset_type != "audioset": - shift_num = 1 # framewise localization cannot allow the time shifting - else: - shift_num = 10 - for i in range(shift_num): - pred, pred_map = self(batch["waveform"]) - preds.append(pred.unsqueeze(0)) - batch["waveform"] = self.time_shifting(batch["waveform"], shift_len = 100 * (i + 1)) - preds = torch.cat(preds, dim=0) - pred = preds.mean(dim = 0) - if self.config.fl_local: - return [ - pred.detach().cpu().numpy(), - pred_map.detach().cpu().numpy(), - batch["audio_name"], - batch["real_len"].cpu().numpy() - ] - else: - return [pred.detach(), batch["target"].detach()] - - def test_epoch_end(self, test_step_outputs): - self.device_type = next(self.parameters()).device - if self.config.fl_local: - pred = np.concatenate([d[0] for d in test_step_outputs], axis = 0) - pred_map = np.concatenate([d[1] for d in test_step_outputs], axis = 0) - audio_name = np.concatenate([d[2] for d in test_step_outputs], axis = 0) - real_len = np.concatenate([d[3] for d in 
test_step_outputs], axis = 0) - heatmap_file = os.path.join(self.config.heatmap_dir, self.config.test_file + "_" + str(self.device_type) + ".npy") - save_npy = [ - { - "audio_name": audio_name[i], - "heatmap": pred_map[i], - "pred": pred[i], - "real_len":real_len[i] - } - for i in range(len(pred)) - ] - np.save(heatmap_file, save_npy) - else: - self.device_type = next(self.parameters()).device - pred = torch.cat([d[0] for d in test_step_outputs], dim = 0) - target = torch.cat([d[1] for d in test_step_outputs], dim = 0) - gather_pred = [torch.zeros_like(pred) for _ in range(dist.get_world_size())] - gather_target = [torch.zeros_like(target) for _ in range(dist.get_world_size())] - dist.barrier() - if self.config.dataset_type == "audioset": - metric_dict = { - "mAP": 0., - "mAUC": 0., - "dprime": 0. - } - else: - metric_dict = { - "acc":0. - } - dist.all_gather(gather_pred, pred) - dist.all_gather(gather_target, target) - if dist.get_rank() == 0: - gather_pred = torch.cat(gather_pred, dim = 0).cpu().numpy() - gather_target = torch.cat(gather_target, dim = 0).cpu().numpy() - if self.config.dataset_type == "scv2": - gather_target = np.argmax(gather_target, 1) - metric_dict = self.evaluate_metric(gather_pred, gather_target) - print(self.device_type, dist.get_world_size(), metric_dict, flush = True) - if self.config.dataset_type == "audioset": - self.log("mAP", metric_dict["mAP"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - self.log("mAUC", metric_dict["mAUC"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - self.log("dprime", metric_dict["dprime"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - else: - self.log("acc", metric_dict["acc"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - dist.barrier() - - - def configure_optimizers(self): - optimizer = optim.AdamW( - filter(lambda p: p.requires_grad, self.parameters()), - lr = self.config.learning_rate, - betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0.05, - ) - # Change: SWA, deprecated - # optimizer = SWA(optimizer, swa_start=10, swa_freq=5) - def lr_foo(epoch): - if epoch < 3: - # warm up lr - lr_scale = self.config.lr_rate[epoch] - else: - # warmup schedule - lr_pos = int(-1 - bisect.bisect_left(self.config.lr_scheduler_epoch, epoch)) - if lr_pos < -3: - lr_scale = max(self.config.lr_rate[0] * (0.98 ** epoch), 0.03 ) - else: - lr_scale = self.config.lr_rate[lr_pos] - return lr_scale - scheduler = optim.lr_scheduler.LambdaLR( - optimizer, - lr_lambda=lr_foo - ) - - return [optimizer], [scheduler] - - - -class Ensemble_SEDWrapper(pl.LightningModule): - def __init__(self, sed_models, config, dataset): - super().__init__() - - self.sed_models = nn.ModuleList(sed_models) - self.config = config - self.dataset = dataset - - def evaluate_metric(self, pred, ans): - if self.config.dataset_type == "audioset": - mAP = np.mean(average_precision_score(ans, pred, average = None)) - mAUC = np.mean(roc_auc_score(ans, pred, average = None)) - dprime = d_prime(mAUC) - return {"mAP": mAP, "mAUC": mAUC, "dprime": dprime} - else: - acc = accuracy_score(ans, np.argmax(pred, 1)) - return {"acc": acc} - - def forward(self, x, sed_index, mix_lambda = None): - self.sed_models[sed_index].eval() - preds = [] - pred_maps = [] - # time shifting optimization - if self.config.fl_local or self.config.dataset_type != "audioset": - shift_num = 1 # framewise localization cannot allow the time shifting - else: - shift_num = 10 - for i in 
range(shift_num): - pred, pred_map = self.sed_models[sed_index](x) - pred_maps.append(pred_map.unsqueeze(0)) - preds.append(pred.unsqueeze(0)) - x = self.time_shifting(x, shift_len = 100 * (i + 1)) - preds = torch.cat(preds, dim=0) - pred_maps = torch.cat(pred_maps, dim = 0) - pred = preds.mean(dim = 0) - pred_map = pred_maps.mean(dim = 0) - return pred, pred_map - - - def time_shifting(self, x, shift_len): - shift_len = int(shift_len) - new_sample = torch.cat([x[:, shift_len:], x[:, :shift_len]], axis = 1) - return new_sample - - def test_step(self, batch, batch_idx): - self.device_type = next(self.parameters()).device - if self.config.fl_local: - pred = torch.zeros(len(batch["waveform"]), self.config.classes_num).float().to(self.device_type) - pred_map = torch.zeros(len(batch["waveform"]), 1024, self.config.classes_num).float().to(self.device_type) - for j in range(len(self.sed_models)): - temp_pred, temp_pred_map = self(batch["waveform"], j) - pred = pred + temp_pred - pred_map = pred_map + temp_pred_map - pred = pred / len(self.sed_models) - pred_map = pred_map / len(self.sed_models) - return [ - pred.detach().cpu().numpy(), - pred_map.detach().cpu().numpy(), - batch["audio_name"], - batch["real_len"].cpu().numpy() - ] - else: - pred = torch.zeros(len(batch["waveform"]), self.config.classes_num).float().to(self.device_type) - for j in range(len(self.sed_models)): - temp_pred, _ = self(batch["waveform"], j) - pred = pred + temp_pred - pred = pred / len(self.sed_models) - return [ - pred.detach(), - batch["target"].detach(), - ] - - def test_epoch_end(self, test_step_outputs): - self.device_type = next(self.parameters()).device - if self.config.fl_local: - pred = np.concatenate([d[0] for d in test_step_outputs], axis = 0) - pred_map = np.concatenate([d[1] for d in test_step_outputs], axis = 0) - audio_name = np.concatenate([d[2] for d in test_step_outputs], axis = 0) - real_len = np.concatenate([d[3] for d in test_step_outputs], axis = 0) - heatmap_file = os.path.join(self.config.heatmap_dir, self.config.test_file + "_" + str(self.device_type) + ".npy") - print(pred.shape) - print(pred_map.shape) - print(real_len.shape) - save_npy = [ - { - "audio_name": audio_name[i], - "heatmap": pred_map[i], - "pred": pred[i], - "real_len":real_len[i] - } - for i in range(len(pred)) - ] - np.save(heatmap_file, save_npy) - else: - pred = torch.cat([d[0] for d in test_step_outputs], dim = 0) - target = torch.cat([d[1] for d in test_step_outputs], dim = 0) - gather_pred = [torch.zeros_like(pred) for _ in range(dist.get_world_size())] - gather_target = [torch.zeros_like(target) for _ in range(dist.get_world_size())] - - dist.barrier() - if self.config.dataset_type == "audioset": - metric_dict = { - "mAP": 0., - "mAUC": 0., - "dprime": 0. - } - else: - metric_dict = { - "acc":0. 
- } - dist.all_gather(gather_pred, pred) - dist.all_gather(gather_target, target) - if dist.get_rank() == 0: - gather_pred = torch.cat(gather_pred, dim = 0).cpu().numpy() - gather_target = torch.cat(gather_target, dim = 0).cpu().numpy() - if self.config.dataset_type == "scv2": - gather_target = np.argmax(gather_target, 1) - metric_dict = self.evaluate_metric(gather_pred, gather_target) - print(self.device_type, dist.get_world_size(), metric_dict, flush = True) - if self.config.dataset_type == "audioset": - self.log("mAP", metric_dict["mAP"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - self.log("mAUC", metric_dict["mAUC"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - self.log("dprime", metric_dict["dprime"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - else: - self.log("acc", metric_dict["acc"] * float(dist.get_world_size()), on_epoch = True, prog_bar=True, sync_dist=True) - dist.barrier() - - - \ No newline at end of file diff --git a/spaces/zamasam/loligod/README.md b/spaces/zamasam/loligod/README.md deleted file mode 100644 index 18c70e810976a604b41438ebb4aabc6a7162117e..0000000000000000000000000000000000000000 --- a/spaces/zamasam/loligod/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Loligod -emoji: 📉 -colorFrom: gray -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zhaoys/wfms-kuiwenc/postcss.config.js b/spaces/zhaoys/wfms-kuiwenc/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/zhaoys/wfms-kuiwenc/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/zlc99/M4Singer/modules/parallel_wavegan/layers/tf_layers.py b/spaces/zlc99/M4Singer/modules/parallel_wavegan/layers/tf_layers.py deleted file mode 100644 index c0f46bd755c161cda2ac904fe37f3f3c6357a88d..0000000000000000000000000000000000000000 --- a/spaces/zlc99/M4Singer/modules/parallel_wavegan/layers/tf_layers.py +++ /dev/null @@ -1,129 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 MINH ANH (@dathudeptrai) -# MIT License (https://opensource.org/licenses/MIT) - -"""Tensorflow Layer modules complatible with pytorch.""" - -import tensorflow as tf - - -class TFReflectionPad1d(tf.keras.layers.Layer): - """Tensorflow ReflectionPad1d module.""" - - def __init__(self, padding_size): - """Initialize TFReflectionPad1d module. - - Args: - padding_size (int): Padding size. - - """ - super(TFReflectionPad1d, self).__init__() - self.padding_size = padding_size - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Padded tensor (B, T + 2 * padding_size, 1, C). - - """ - return tf.pad(x, [[0, 0], [self.padding_size, self.padding_size], [0, 0], [0, 0]], "REFLECT") - - -class TFConvTranspose1d(tf.keras.layers.Layer): - """Tensorflow ConvTranspose1d module.""" - - def __init__(self, channels, kernel_size, stride, padding): - """Initialize TFConvTranspose1d( module. - - Args: - channels (int): Number of channels. - kernel_size (int): kernel size. - strides (int): Stride width. - padding (str): Padding type ("same" or "valid"). 
- - """ - super(TFConvTranspose1d, self).__init__() - self.conv1d_transpose = tf.keras.layers.Conv2DTranspose( - filters=channels, - kernel_size=(kernel_size, 1), - strides=(stride, 1), - padding=padding, - ) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensors: Output tensor (B, T', 1, C'). - - """ - x = self.conv1d_transpose(x) - return x - - -class TFResidualStack(tf.keras.layers.Layer): - """Tensorflow ResidualStack module.""" - - def __init__(self, - kernel_size, - channels, - dilation, - bias, - nonlinear_activation, - nonlinear_activation_params, - padding, - ): - """Initialize TFResidualStack module. - - Args: - kernel_size (int): Kernel size. - channles (int): Number of channels. - dilation (int): Dilation ine. - bias (bool): Whether to add bias parameter in convolution layers. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - padding (str): Padding type ("same" or "valid"). - - """ - super(TFResidualStack, self).__init__() - self.block = [ - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - TFReflectionPad1d(dilation), - tf.keras.layers.Conv2D( - filters=channels, - kernel_size=(kernel_size, 1), - dilation_rate=(dilation, 1), - use_bias=bias, - padding="valid", - ), - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - ] - self.shortcut = tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Output tensor (B, T, 1, C). - - """ - _x = tf.identity(x) - for i, layer in enumerate(self.block): - _x = layer(_x) - shortcut = self.shortcut(x) - return shortcut + _x
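
A minimal usage sketch of the `tf_layers.py` classes above, assuming TensorFlow 2.x and the `(B, T, 1, C)` layout described in the docstrings; the import path mirrors the deleted file's location inside the M4Singer space (so it assumes that repo root is on `PYTHONPATH`), and the shapes and hyperparameters are illustrative rather than taken from any original config:

```python
# Sketch only: assumes TensorFlow 2.x and that the M4Singer repo root is importable.
import tensorflow as tf

from modules.parallel_wavegan.layers.tf_layers import (
    TFReflectionPad1d,
    TFConvTranspose1d,
    TFResidualStack,
)

# Dummy input in the (batch, time, 1, channels) layout used throughout the module.
x = tf.random.normal([2, 100, 1, 64])

pad = TFReflectionPad1d(padding_size=3)   # reflect-pad the time axis -> (2, 106, 1, 64)
up = TFConvTranspose1d(channels=32, kernel_size=8, stride=4, padding="same")
res = TFResidualStack(
    kernel_size=3,
    channels=32,
    dilation=1,
    bias=True,
    nonlinear_activation="LeakyReLU",
    nonlinear_activation_params={"alpha": 0.2},  # illustrative activation settings
    padding="same",
)

y = res(up(pad(x)))   # upsample along the time axis, then apply one residual block
print(y.shape)        # e.g. (2, 424, 1, 32) with the values above
```

These layers emulate 1D audio convolutions with `tf.keras` 2D layers by carrying a dummy spatial axis of size 1, which is what lets them act as drop-in counterparts of the PyTorch MelGAN-style modules they mirror.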