diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Al Amin Accounting Software Crack Keygen The Ultimate Guide for Windows Users.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Al Amin Accounting Software Crack Keygen The Ultimate Guide for Windows Users.md
deleted file mode 100644
index 8e45a570a40870b0a884fbe03e920afb8f6388e7..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Al Amin Accounting Software Crack Keygen The Ultimate Guide for Windows Users.md
+++ /dev/null
@@ -1,152 +0,0 @@
-
-
Al-Amin Accounting Software: A Comprehensive Solution for Your Business Needs
-
If you are looking for a reliable, efficient, and user-friendly accounting software for your business, you might want to consider Al-Amin Accounting Software. Al-Amin Accounting Software is a product of SyrianSoft, a leading software company in the Middle East that has been developing accounting solutions since 1992. Al-Amin Accounting Software is designed to meet the needs of small, medium, and large businesses in various sectors and industries. It offers a range of features and benefits that can help you manage your business operations more effectively and efficiently.
In this article, we will explore the features and benefits of Al-Amin Accounting Software, how to download and install it on your computer, how to crack and activate it (and why you shouldn't), and some alternatives to consider. By the end of this article, you will have a better understanding of what Al-Amin Accounting Software can do for your business and how to get started with it.
-
Features and Benefits of Al-Amin Accounting Software
-
Al-Amin Accounting Software is a comprehensive solution that covers various aspects of your business management. It has four main modules: accounting and financial management, inventory and warehouse management, sales and customer relationship management, and human resources and payroll management. Each module has its own features and benefits that can help you streamline your business processes and improve your productivity and profitability. Here are some of the key features and benefits of each module:
-
Accounting and financial management
-
This module helps you manage your accounts, invoices, payments, budgets, etc. with ease and accuracy. Some of the features and benefits of this module are:
-
-
It supports multiple currencies, languages, branches, companies, etc.
-
It allows you to create unlimited accounts, sub-accounts, cost centers, etc.
-
It enables you to record various types of transactions such as cash receipts, cash payments, bank deposits, bank withdrawals, journal entries, etc.
-
It generates various types of invoices such as sales invoices, purchase invoices, service invoices, proforma invoices, etc.
-
It tracks your receivables and payables and sends reminders to your customers and suppliers.
-
It helps you manage your cash flow and budget by providing cash flow statements, budget reports, variance analysis, etc.
-
It integrates with other modules such as inventory, sales, payroll, etc. to provide accurate financial data.
-
It produces various types of financial reports such as balance sheet, income statement, trial balance, general ledger, etc.
-
-
Inventory and warehouse management
-
This module helps you track your stock, purchases, sales, transfers, etc. with ease and accuracy. Some of the features and benefits of this module are:
-
-
It supports multiple warehouses, locations, units, categories, etc.
-
It allows you to create unlimited items, sub-items, batches, serial numbers, etc.
-
It enables you to record various types of transactions such as purchase orders, purchase receipts, purchase returns, sales orders, sales deliveries, sales returns, stock transfers, stock adjustments, etc.
-
It tracks your inventory levels, costs, prices, margins, etc. and alerts you when your stock is low or high.
-
It helps you manage your inventory valuation by using different methods such as FIFO, LIFO, average cost, standard cost, etc.
-
It integrates with other modules such as accounting, sales, payroll, etc. to provide accurate inventory data.
-
It produces various types of inventory reports such as inventory status, inventory movement, inventory valuation, inventory aging, etc.
-
-
Sales and customer relationship management
-
This module helps you manage your sales orders, quotations, contracts, customers, etc. with ease and efficiency. Some of the features and benefits of this module are:
-
-
It supports multiple sales channels, markets, segments, etc.
-
It allows you to create unlimited customers, sub-customers, contacts, leads, opportunities, etc.
-
It enables you to record various types of transactions such as quotations, sales orders, sales contracts, sales deliveries, sales invoices, sales returns, etc.
-
It tracks your sales performance by providing sales analysis by customer, product, branch, region, etc.
-
It helps you manage your customer relationship by providing customer profile, history, feedback, loyalty, etc.
-
It integrates with other modules such as accounting, inventory, payroll, etc. to provide accurate sales data.
-
It produces various types of sales reports such as sales summary, sales detail, sales commission, sales forecast, etc.
-
-
Human resources and payroll management
-
This module helps you manage your employees, salaries, deductions, leaves, etc. with ease and compliance. Some of the features and benefits of this module are:
-
-
It supports multiple branches, departments, positions, grades, etc.
-
It allows you to create unlimited employees, sub-employees, dependents, beneficiaries, etc.
-
It enables you to record various types of transactions such as attendance, absence, overtime, leave, loan, advance, bonus, penalty, etc.
-
It tracks your payroll costs by providing payroll analysis by employee, branch, department, position, grade, etc.
-
It helps you manage your payroll compliance by providing tax calculation, social security calculation, insurance calculation, wage protection system (WPS), etc.
-
It integrates with other modules such as accounting, inventory, sales, etc. to provide accurate payroll data.
-
It produces various types of payroll reports such as payslip, payroll summary, payroll detail, payroll statement, payroll register, etc.
-
-
How to Download and Install Al-Ameen Accounting Software
-
If you are interested in trying out Al-Ameen Accounting Software for yourself or for your business, you can download it from the official website of SyrianSoft. Here are the steps to download and install Al-Ameen Accounting Software on your computer:
-
System requirements
-
Before downloading Al-Ameen Accounting Software, make sure that your computer meets the minimum or recommended specifications for running the software.
-
According to the developer's website, the minimum and recommended system requirements for Al-Ameen Accounting Software are as follows:
| Software | Minimum | Recommended |
| --- | --- | --- |
| Microsoft SQL Server | 2012 | 2012 or higher |
| Microsoft .NET Framework | 4.5.2 | 4.5.2 or higher |
| Visual C++ Redistributable for Visual Studio | 2015 | 2015 or higher |
| Sentinel Protection Key | Required | Required |
| Internet Explorer | 11 | 11 or higher |
| Platform Update (Windows 7 SP1 and Windows Server 2008 R2 SP1) | Required | Required |

| Hardware | Minimum | Recommended |
| --- | --- | --- |
| Processor | 1 GHz | 2 GHz or higher |
| Memory | 2 GB | 4 GB or higher |
| Hard Disk (Free Space) | 500 MB | 1 GB or higher |
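If you want to verify the Microsoft .NET Framework prerequisite before running the installer, a quick registry check is enough. The sketch below is illustrative Python, not part of the Al-Ameen package; it assumes a Windows machine and uses the registry key Microsoft documents for .NET Framework 4.x, where a Release value of 379893 or higher indicates 4.5.2 or later.

```python
import winreg

# Minimal sketch: check whether .NET Framework 4.5.2+ is installed by
# reading the documented v4 "Release" registry value (379893 = 4.5.2).
KEY_PATH = r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"
try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        release, _ = winreg.QueryValueEx(key, "Release")
    if release >= 379893:
        print(".NET Framework 4.5.2 or later is installed")
    else:
        print(".NET Framework 4.x is installed but older than 4.5.2")
except FileNotFoundError:
    print(".NET Framework 4.x does not appear to be installed")
```

The same approach extends to the other prerequisites (SQL Server, the Visual C++ redistributable) by checking their respective registry keys or installed-programs entries.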
Download links
-
To download Al-Ameen Accounting Software, you need to visit the official website of SyrianSoft and register for an account. After logging in, you can access the download page and choose the version that suits your needs. The latest version of Al-Ameen Accounting Software is 9.0 - (900.11), which was released on May 18, 2017. The download package consists of two files: Release_Notes.pdf and V_9_900_16_11.exe. The total size of the package is about 255 MB.
-
Installation steps
-
To install Al-Ameen Accounting Software on your computer, you need to follow these steps:
-
-
Download the two files from the download page and save them in one folder on your hard disk.
-
Click the file V_9_900_16_11.exe and an extraction window will appear. Click the Extract button and wait for the extraction process to finish.
-
A new file Ameen.exe will appear in the same folder where you saved the downloaded files. Click this file and the installation wizard will start on your computer.
-
Follow the instructions on the screen to complete the installation process. You may need to restart your computer after the installation.
-
After restarting your computer, you can launch Al-Ameen Accounting Software from the Start menu or from the desktop shortcut.
-
-
How to Crack and Activate Al-Ameen Accounting Software
-
If you are wondering how to crack and activate Al-Ameen Accounting Software, we have some bad news for you: it is not possible, and even if it was, it would be illegal and unethical. Here are some reasons why you should not try to crack and activate Al-Ameen Accounting Software:
-
Disclaimer
-
Al-Ameen Accounting Software is a licensed software that requires a valid protection key to run. The protection key is a hardware device that plugs into your computer's USB port and verifies your license with the developer's server. Without the protection key, Al-Ameen Accounting Software will run as a demo version with limited functionality and data entry. Cracking and activating Al-Ameen Accounting Software means bypassing the protection key and using a fake license to run the full version of the software. This is a violation of the terms and conditions of use of Al-Ameen Accounting Software and an infringement of the intellectual property rights of SyrianSoft. By cracking and activating Al-Ameen Accounting Software, you are committing a crime that can result in legal action against you.
-
Risks and consequences
-
Even if you manage to find a way to crack and activate Al-Ameen Accounting Software, you are exposing yourself to various risks and consequences that can harm your computer, your data, and your business. Some of these risks and consequences are:
-
-
You may download malware or viruses that can damage your computer or steal your personal information.
-
You may get a corrupted or outdated version of Al-Ameen Accounting Software that can cause errors or crashes.
-
You may lose your data or compromise its security by using an unverified source of Al-Ameen Accounting Software.
-
You may miss out on important updates, patches, bug fixes, and new features that SyrianSoft provides for its customers.
-
You may face technical issues or compatibility problems that SyrianSoft cannot help you with because you are using an illegal version of Al-Ameen Accounting Software.
-
You may lose your credibility and reputation as a business owner by using a pirated software that does not comply with professional standards and ethics.
-
-
Alternatives
-
If you are looking for alternatives to cracking and activating Al-Ameen Accounting Software, you have some options that are legal and ethical. Some of these options are:
-
-
You can buy a legitimate license of Al-Ameen Accounting Software from SyrianSoft or its authorized dealers. This way, you can enjoy all the features and benefits of Al-Ameen Accounting Software without any risk or consequence.
-
You can request a free trial of Al-Ameen Accounting Software from SyrianSoft or its authorized dealers. This way, you can test Al-Ameen Accounting Software for a limited period of time before deciding whether to buy it or not.
-
You can look for other accounting software that suits your budget and needs. There are many accounting software available in the market that offer different features and prices. You can compare them and choose the one that works best for you.
-
-
Conclusion
-
In conclusion, Al-Ameen Accounting Software is a comprehensive solution for your business needs that offers various features and benefits that can help you manage your accounting, inventory, sales, and payroll processes more effectively and efficiently. It is easy to download and install on your computer, but it requires a valid protection key to run. Cracking and activating Al-Ameen Accounting Software is not possible, and even if it was, it would be illegal and unethical. You should avoid doing so and look for legal and ethical alternatives instead. We hope this article has given you a clear overview of what Al-Ameen Accounting Software can do for your business and how to get started with it. If you have any questions or comments, please feel free to contact us. We would love to hear from you.
-
Frequently Asked Questions
-
Here are some frequently asked questions about Al-Ameen Accounting Software:
-
-
What is the price of Al-Ameen Accounting Software? The price of Al-Ameen Accounting Software depends on the number of users, modules, and features you need. You can contact SyrianSoft or its authorized dealers for a quotation.
-
How can I get support for Al-Ameen Accounting Software? You can get support for Al-Ameen Accounting Software by contacting SyrianSoft or its authorized dealers via phone, email, or online chat. You can also visit their website for online help, tutorials, and FAQs.
-
Can I use Al-Ameen Accounting Software on multiple computers? Yes, you can use Al-Ameen Accounting Software on multiple computers as long as they are connected to the same network. You will need one protection key per computer, however.
-
Can I customize Al-Ameen Accounting Software according to my needs? Yes, you can customize Al-Ameen Accounting Software according to your needs by using its built-in tools such as report designer, form designer, label designer, etc. You can also request SyrianSoft or its authorized dealers for custom development services if you need more advanced customization.
-
Can I integrate Al-Ameen Accounting Software with other software? Yes, you can integrate Al-Ameen Accounting Software with other software by using its built-in tools such as data import/export, data synchronization, web services, etc. You can also request SyrianSoft or its authorized dealers for integration services if you need more complex integration.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EASEUS Partition Master 6.0.1 Server Edition Portable 64 Bit.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EASEUS Partition Master 6.0.1 Server Edition Portable 64 Bit.md
deleted file mode 100644
index 0fdc3ba505ed3d1239bf0df9d3cdef664455af1e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EASEUS Partition Master 6.0.1 Server Edition Portable 64 Bit.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit
-
EASEUS Partition Master is a powerful and easy-to-use partition software that allows you to create, resize, move, merge, split, clone, recover, convert, and manage disk partitions on Windows servers and PCs. It supports various file systems such as FAT32, NTFS, EXT2/EXT3/EXT4, ReFS, exFAT, etc. It also supports MBR and GPT disk styles, dynamic disks and volumes, RAID arrays, SSDs and HDDs, USB drives and memory cards.
-
In this article, we will introduce EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, which is a special version of EASEUS Partition Master that can run directly from a USB flash drive or CD/DVD without installation. We will also show you how to use it to perform some common partition operations on your server or PC.
-
What is EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit is a portable version of EASEUS Partition Master 6.0.1 Server Edition that can run on any Windows server or PC with a 64-bit processor without installation or activation. It has all the features of EASEUS Partition Master 6.0.1 Server Edition, which include:
-
-
Resize/move partition: You can resize or move any partition on your disk without losing data or rebooting your system.
-
Clone disk/partition: You can clone an entire disk or a single partition to another disk or partition for backup or migration purposes.
-
Merge/split partition: You can merge two adjacent partitions into one larger partition or split a large partition into two smaller partitions for better disk space management.
-
Convert disk/partition: You can convert a disk from MBR to GPT or vice versa without deleting partitions or data. You can also convert a partition from one file system to another without formatting or losing data.
-
Recover partition: You can recover deleted or lost partitions from unallocated space or damaged disks with ease.
-
Manage dynamic volume: You can create, delete, format, resize, move, extend, shrink, split, merge, change drive letter, set active/inactive, explore properties of dynamic volumes on your disk.
-
Partition through command prompts: You can execute partition commands through command prompts for advanced users (a generic command-line sketch follows this list).
-
Repair RAID-5 volume: You can repair corrupted RAID-5 volume by reconstructing the data of the failed member disk.
-
-
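The article does not document EASEUS's own command-prompt syntax, so as a generic illustration of scripted partition inspection on Windows, here is a hedged Python sketch that drives the built-in diskpart utility instead; it is not EASEUS's command interface. It only runs the read-only "list disk" command and needs an elevated (administrator) prompt.

```python
import os
import subprocess
import tempfile

# Minimal sketch: run a read-only diskpart script ("list disk") and print
# the output. diskpart requires administrator rights; this uses the
# standard Windows tool, not EASEUS's own command syntax.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("list disk\n")
    script_path = f.name
try:
    result = subprocess.run(
        ["diskpart", "/s", script_path],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
finally:
    os.remove(script_path)
```

Destructive operations (resize, convert, delete) follow the same pattern but should be tested on a non-system disk first.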
EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit is compatible with Windows Server 2003/2008/2012/2016/2019 and Windows XP/Vista/7/8/10 (64-bit only). It supports up to 32 disks and unlimited hard disk capacity.
-
Why use EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit has some advantages over other partition software:
-
-
It is portable: You can run it from a USB flash drive or CD/DVD without installing it on your system. This is convenient and safe, as you don't need to modify your system or registry settings.
-
It is fast: You can perform partition operations in a few minutes or seconds, depending on the size and speed of your disk.
-
It is reliable: You can trust EASEUS Partition Master to handle your disk partitions without causing any data loss or system crash.
-
It is versatile: You can use EASEUS Partition Master to manage not only basic disks, but also dynamic disks, RAID arrays, SSDs, and external devices.
-
It is cost-effective: You can get EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit for free from the official website or some third-party sources. You don't need to pay for a license or subscription fee.
-
-
How to use EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
To use EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:
-
-
Download EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit from the official website or some third-party sources. The file size is about 40 MB.
-
Extract the downloaded file to a USB flash drive or CD/DVD. You can use any compression software such as WinRAR or 7-Zip to do this.
-
Connect the USB flash drive or CD/DVD to the server or PC that you want to manage the disk partitions on.
-
Run the EPM.exe file from the USB flash drive or CD/DVD. You will see the main interface of EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit.
-
Select the disk or partition that you want to operate on from the disk map or the list on the left panel.
-
Right-click on the disk or partition and choose the desired operation from the context menu. You can also use the toolbar buttons or the menu bar options to access the operations.
-
Follow the instructions on the screen to complete the operation. You may need to confirm some actions or restart your system depending on the operation.
-
-
Some common partition operations with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit
-
In this section, we will show you how to perform some common partition operations with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, such as resizing, cloning, merging, splitting, converting, and recovering partitions.
-
How to resize a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
To resize a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:
-
-
Select the partition that you want to resize from the disk map or the list on the left panel.
-
Right-click on the partition and choose Resize/Move from the context menu.
-
In the pop-up window, drag the left or right border of the partition to adjust its size. You can also enter the exact size in MB in the boxes below.
-
Click OK to confirm the changes. You will see a pending operation on the bottom panel.
-
Click Apply on the toolbar to execute the operation. You may need to restart your system if you are resizing a system partition.
-
-
How to clone a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
To clone a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:
-
-
Select the disk or partition that you want to clone from the disk map or the list on the left panel.
-
Right-click on the disk or partition and choose Clone from the context menu.
-
In the pop-up window, select the destination disk or partition that you want to clone to. Make sure it has enough space to hold all the data from the source disk or partition.
-
Click Next to continue. You can choose to clone the disk or partition sector by sector or adjust the partition layout on the destination disk or partition.
-
Click Proceed to start the cloning process. You may need to restart your system if you are cloning a system disk or partition.
-
-
How to merge partitions with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
To merge partitions with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:
-
-
-
Select one of the partitions that you want to merge from the disk map or the list on the left panel.
-
Right-click on the partition and choose Merge from the context menu.
-
In the pop-up window, select another partition that you want to merge with the first one. The two partitions must be adjacent and have the same file system.
-
Click OK to confirm the changes. You will see a pending operation on the bottom panel.
-
Click Apply on the toolbar to execute the operation. You may need to restart your system if you are merging a system partition.
-
-
How to split a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
To split a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:
-
-
Select the partition that you want to split from the disk map or the list on the left panel.
-
Right-click on the partition and choose Split from the context menu.
-
In the pop-up window, drag the slider or enter the size in MB to specify how much space you want to allocate for the new partition.
-
Click OK to confirm the changes. You will see a pending operation on the bottom panel.
-
Click Apply on the toolbar to execute the operation. You may need to restart your system if you are splitting a system partition.
-
-
How to convert a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
To convert a disk/partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:
-
-
Select the disk or partition that you want to convert from the disk map or the list on the left panel.
-
Right-click on the disk or partition and choose Convert from the context menu.
-
In the pop-up window, choose whether you want to convert a disk from MBR to GPT or vice versa, or convert a partition from one file system to another.
-
Click OK to confirm the changes. You will see a pending operation on the bottom panel.
-
Click Apply on the toolbar to execute the operation. You may need to restart your system if you are converting a system disk or partition.
-
-
How to recover a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
To recover a partition with EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit, you need to follow these steps:
-
-
Select an unallocated space or a damaged disk that contains the deleted or lost partition from the disk map or the list on the left panel.
-
Right-click on the unallocated space or the damaged disk and choose Partition Recovery from the context menu.
-
In the pop-up window, choose whether you want to perform a quick scan or a deep scan to search for the deleted or lost partition. A quick scan is faster but may not find all the partitions, while a deep scan is slower but more thorough.
-
Click Next to start the scanning process. You will see a list of found partitions on the right panel.
-
Select the partition that you want to recover and click Proceed to recover it. You can also preview the files on the partition before recovering it.
-
Click Apply on the toolbar to execute the operation. You may need to restart your system if you are recovering a system partition.
-
-
Conclusion
-
EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit is a powerful and easy-to-use partition software that can run directly from a USB flash drive or CD/DVD without installation. It can help you create, resize, move, merge, split, clone, recover, convert, and manage disk partitions on Windows servers and PCs. It supports various file systems, disk styles, dynamic disks and volumes, RAID arrays, SSDs and HDDs, USB drives and memory cards. It is fast, reliable, versatile, and cost-effective. It is a great tool for disk partition management and maintenance.
-
FAQs
-
Q: How can I get EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
A: You can get EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit for free from the official website or some third-party sources. You can also download it from this link:
-
Q: What are the system requirements for EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
A: EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit requires a Windows server or PC with a 64-bit processor, at least 512 MB of RAM, and at least 100 MB of free disk space.
-
Q: What are the limitations of EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
A: EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit has some limitations compared to other versions of EASEUS Partition Master, such as:
-
-
It does not support Windows Server 2000/2003 R2/2008 R2/2012 R2/2016 R2/2019 R2.
-
It does not support Windows XP/Vista/7/8/10 (32-bit only).
-
It does not support Linux partitions such as EXT4/EXT3/EXT2/SWAP/XFS/Btrfs.
-
It does not support BitLocker encrypted partitions.
-
It does not support ReFS file system.
-
It does not support WinPE bootable disk creation.
-
-
Q: How can I update EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit?
-
A: EASEUS Partition Master 6.0.1 Server Edition Portable 64 bit does not support automatic updates. You need to download the latest version from the official website or some third-party sources and replace the old version on your USB flash drive or CD/DVD.
-
Q: How can I contact EASEUS for technical support or feedback?
-
A: You can contact EASEUS by email at support@easeus.com or by phone at +1-800-570-4634 (toll-free in US and Canada) or +86-28-85432479 (international). You can also visit their website for more information and resources.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Full Version The Ultimate Solution for Creating and Presenting Worship Media.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Full Version The Ultimate Solution for Creating and Presenting Worship Media.md
deleted file mode 100644
index fa4ae2cfd50e03dce5fcad2aed38f86bb82312bb..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Full Version The Ultimate Solution for Creating and Presenting Worship Media.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Download and Install EasyWorship 7 Full Version for Free
-
EasyWorship 7 is a powerful and easy-to-use software that allows you to create and present worship slides, lyrics, videos, scriptures, and more. With EasyWorship 7, you can design and customize your own media library, schedule and manage your services, and control your presentation from any device. EasyWorship 7 is a great tool for churches, ministries, and worship teams who want to enhance their worship experience and engage their audience.
However, EasyWorship 7 is not a free software. You need to purchase a license to use it legally and access all its features. The official price of EasyWorship 7 is $499 for the full version and $199 for the upgrade version. This may be too expensive for some users who want to try out the software or use it for personal or non-commercial purposes.
-
Fortunately, there is a way to download and install EasyWorship 7 full version for free and use it without paying anything. In this article, we will show you how to do that step by step. But before we proceed, we want to warn you that downloading and using cracked software is illegal and risky. You may face legal consequences, malware infections, data loss, or other problems if you choose to do so. We do not condone or encourage piracy in any way. This article is for educational purposes only.
-
What is EasyWorship 7 Full Version?
-
A full version of a program is a complete and unlocked version that has all the features and functions of the original software. It usually requires a license key or activation code to be used legally and properly.
-
-
EasyWorship 7 full version is a complete and unlocked version of EasyWorship 7 that has all the features and functions of the original software. It does not require a license key or activation code to use it. It also has some additional features or functions that are not available in the official release. For example, some users claim that the full version has more themes, backgrounds, fonts, and transitions than the original one.
-
However, using EasyWorship 7 full version also has some drawbacks and risks. For one thing, it is illegal and violates the terms and conditions of Softouch Development Inc., the developer of EasyWorship. You may face legal actions or penalties if you are caught using it. For another thing, it is unsafe and unreliable. You may download malware or viruses along with the full version that can harm your computer or steal your data. You may also experience errors, crashes, bugs, or compatibility issues that can affect your work quality and efficiency.
-
How to Download and Install EasyWorship 7 Full Version for Free?
-
If you still want to download and install EasyWorship 7 full version for free despite the risks and consequences, here are the steps you need to follow:
-
-
Go to a website that offers EasyWorship 7 full version for free download. There are many websites that claim to provide this service, but not all of them are trustworthy or legitimate. Some of them may contain malware or viruses that can infect your computer or redirect you to other unwanted sites. To avoid this, you should do some research and check the reviews and ratings of the website before downloading anything from it.
-
Select the download link or button and wait for the download process to start. Depending on the size of the file and your internet speed, this may take some time. You may also need to complete some surveys or offers before you can access the download link.
-
Once the download is complete, locate the file on your computer and extract it using a file extractor program such as WinRAR or 7-Zip. You should see a folder containing the setup file and the crack file.
-
Run the setup file and follow the instructions to install EasyWorship 7 on your computer. You may need to enter some information such as your name, email address, or country during the installation process.
-
After the installation is done, do not run or open EasyWorship 7 yet. Instead, go to the folder where you extracted the crack file and copy it.
-
Paste the crack file into the installation directory of EasyWorship 7, which is usually on the C: drive.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Chess Titans Free _HOT_ Download Full Version For Pc.md b/spaces/1gistliPinn/ChatGPT4/Examples/Chess Titans Free _HOT_ Download Full Version For Pc.md
deleted file mode 100644
index 97c1a7d013112dc50a0a42cfbd6516aff23563d8..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Chess Titans Free _HOT_ Download Full Version For Pc.md
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-You can play chess against the computer and see your progress. There is also a friendly ranking system to see who is the best player of the tournament. With a single click you can take a snapshot, add new pieces or save the game.
-
-Easy to play, easy to learn
-
-Simple three-dimensional graphics, to keep it as clear and easy to learn as possible. Simply drag and drop your pieces into the game to play. Want to play chess with the computer? You can even set the computer to play for you.
-
-A traditional look
-
-Choose your colors and set the background and playing pieces. You can even change the background and use hex colors. The game is classic in its look, but there is a lot of detail.
-
-Play against the computer
-
-Play against the computer in a friendly competition. You can choose the level of difficulty or play a friend’s game. The computer knows the standard moves and pieces, so you don’t have to tell it. Create your own board or play against the computer in a three-dimensional board.
-
-Chess Titans for Windows lets you play three different board sizes, with three levels of difficulty. It also comes with eight unique game boards to choose from. It is also a friendly competition between friends, as there are 10,000 different boards available.
-
-The new version of Chess Titans has been completely redesigned. It uses new chess engines, HyperChess and Chess King, to play the game. The game is better than ever and has a completely new user interface.
-
-Use the 10,000 boards available
-
-Play a friend's game or play against the computer
-
-Create your own board or play against the computer
-
-Controls:
-
-Move your pieces: left and right arrow keys
-
-Drag a piece to a new square: W
-
-Drag a piece to open the piece menu: A
-
-Drag a piece to select a piece: S
-
-Switch a piece with another piece: B
-
-Take a snapshot: Ctrl+F
-
-List the pieces on the board: Space bar
-
-Save the game: Ctrl+S
-
-Chess Titans for Windows is a classic chess game, but with a twist. After starting the game, you can play with or against the computer. You can choose the type of game, board size, and level of difficulty.
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dominos Pizza - Food Delivery APK A Must-Have App for Pizza Lovers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dominos Pizza - Food Delivery APK A Must-Have App for Pizza Lovers.md
deleted file mode 100644
index 8660baa869f9261225cc52fd5dffcafd964cc238..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dominos Pizza - Food Delivery APK A Must-Have App for Pizza Lovers.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
Domino's APK: How to Order Pizza Online with Ease
-
Do you love pizza? Do you want to order it online from the comfort of your home or office? Do you want to enjoy delicious pizza at affordable prices and fast delivery? If you answered yes to any of these questions, then you need to download Domino's APK on your Android device.
-
What is Domino's APK?
-
A brief introduction to the app and its features
-
Domino's APK is the official app of Domino's Pizza, one of the most popular pizza chains in the world. With this app, you can order pizza online from your nearest Domino's outlet and get it delivered to your doorstep in no time. You can also customize your pizza with your choice of crust, toppings, cheese, and sauces. You can also order other items from the menu, such as pasta, sandwiches, salads, desserts, drinks, and more.
How to download and install the app on your device
-
Downloading and installing Domino's APK is very easy and simple. All you have to do is follow these steps:
-
-
Search for Domino's APK or a pizza delivery app on the Google Play Store and tap Install.
-
Wait for the app to download and install on your device.
-
Open the app and grant the necessary permissions for location, camera, storage, etc.
-
You are ready to order pizza online with Domino's APK.
-
-
How to Use Domino's APK to Order Pizza Online
-
How to create an account and log in
-
To use Domino's APK, you need to create an account and log in with your email address or phone number. You can also sign up with your Facebook or Google account. Creating an account will help you save your preferences, address, payment details, and order history. You can also earn rewards points for every order you place with Domino's APK.
-
How to browse the menu and customize your order
-
Once you log in, you can browse the menu by tapping on the categories or using the search bar. You can also filter the menu by price, popularity, or ratings. You can tap on any item you like and see its details, such as ingredients, calories, price, etc. You can also customize your order by adding or removing toppings, cheese, sauces, etc. You can also choose the size and quantity of your order.
-
How to apply coupons and offers
-
Domino's APK offers various coupons and offers that can help you save money on your order. You can find them on the home screen or under the deals section. You can also enter a coupon code manually if you have one. To apply a coupon or offer, simply select it and add it to your cart. You will see the discounted price on your checkout screen.
-
How to track your order and enjoy contactless delivery
-
After placing your order, you can track its status and progress on the app or on the website. You can also call the store or the delivery person if you have any queries or issues. Domino's APK also offers contactless delivery, which means you can get your order delivered without any physical contact with the delivery person. You can choose this option on the app or on the website and pay online. You can also instruct the delivery person to leave your order at a safe place, such as your doorstep, lobby, or gate.
-
Why Choose Domino's APK for Pizza Delivery?
-
The benefits of ordering from Domino's
-
There are many reasons why you should choose Domino's APK for pizza delivery. Here are some of them:
-
-
Domino's offers a wide variety of pizzas and other items to suit your taste and budget.
-
Domino's guarantees fast and fresh delivery of your order within 30 minutes or less.
-
Domino's has a 100% satisfaction guarantee, which means you can get a free replacement or refund if you are not happy with your order.
-
Domino's has a loyalty program called Piece of the Pie Rewards, which allows you to earn points for every order and redeem them for free pizza and other rewards.
-
Domino's has a user-friendly and convenient app that makes ordering pizza online a breeze.
-
-
The customer reviews and ratings of the app
-
Domino's APK has received positive feedback and ratings from its users. The app has a 4.5-star rating on the Google Play Store and a 4.7-star rating on the Apple App Store. Here are some of the reviews from the users:
-
-
"I love this app. It's easy to use and I can order pizza anytime I want. The delivery is fast and the pizza is always hot and delicious. I also like the coupons and offers that they have. I highly recommend this app to anyone who loves pizza."
-
"This app is awesome. It has everything I need to order pizza online. I can customize my pizza, apply coupons, track my order, and enjoy contactless delivery. The app is also very secure and reliable. I have never had any issues with it."
-
"This app is amazing. It saves me time and money when I order pizza online. The app is very simple and intuitive to use. I can also earn rewards points for every order and get free pizza and other perks. The app is a must-have for pizza lovers."
-
-
The comparison with other pizza delivery apps
-
Domino's APK is not the only pizza delivery app available in the market. There are other apps that offer similar services, such as Pizza Hut, Papa John's, Little Caesars, etc. However, Domino's APK stands out from the rest in terms of quality, speed, convenience, and value. Here is a table that compares Domino's APK with other pizza delivery apps:
-
-
| Pizza Delivery App | Menu Variety | Delivery Time | Customer Satisfaction | Loyalty Program |
| --- | --- | --- | --- | --- |
| Domino's APK | High | 30 minutes or less | 100% guarantee | Piece of the Pie Rewards |
| Pizza Hut | Medium | 40 minutes or more | No guarantee | Hut Rewards |
| Papa John's | Low | 45 minutes or more | No guarantee | Papa Rewards |
| Little Caesars | Low | No delivery option | No guarantee | No loyalty program |
-
-
Conclusion
-
To sum up, Domino's APK is the best pizza delivery app that you can use to order pizza online with ease. It has a wide range of pizzas and other items to choose from, fast and fresh delivery, 100% satisfaction guarantee, and a rewarding loyalty program. It also has a user-friendly and convenient app that makes ordering pizza online a breeze. So, what are you waiting for? Download Domino's APK today and enjoy delicious pizza at your doorstep.
-
FAQs
-
Q1. Is Domino's APK safe and secure?
-
A1. Yes, Domino's APK is safe and secure to use. It uses encryption and other security measures to protect your personal and payment information. It also complies with all the privacy policies and regulations.
-Q2. What are the payment options available on Domino's APK?
-
A2. Domino's APK offers various payment options for your convenience. You can pay online with your credit card, debit card, net banking, UPI, or wallet. You can also pay cash on delivery or use a gift card or voucher.
-
Q3. How can I contact Domino's customer service?
-
A3. Domino's customer service is always ready to help you with any queries or issues you may have. You can contact them by calling the toll-free number 1800-103-6888 or by emailing them at guestcaredominos@jublfood.com. You can also chat with them on the app or on the website.
-
Q4. What are the minimum requirements for Domino's APK?
-
A4. Domino's APK requires an Android device running Android 4.4 or higher with at least 50 MB of free space. It also requires an internet connection and GPS access to function properly.
-
Q5. Can I order from Domino's APK in other countries?
-
A5. No, Domino's APK is only available for ordering pizza online in India. If you are in another country, you can use the website or the app of the local Domino's franchise to order pizza online.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Frozen City Mod APK 1.0.6 for Android - Free Purchase.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Frozen City Mod APK 1.0.6 for Android - Free Purchase.md
deleted file mode 100644
index 6fcd411db4bf92d58cc6831434682cf6c5d87ce1..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Frozen City Mod APK 1.0.6 for Android - Free Purchase.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
Frozen City Mod APK 1.0.6: A Survival Game in a Post-Apocalyptic World
-
Do you love survival games that challenge your skills and creativity? Do you want to experience a thrilling adventure in a frozen city where zombies and mutants roam? If yes, then you should try Frozen City mod APK 1.0.6, a modified version of the original game that gives you unlimited resources, free purchase, and no ads. In this article, we will tell you everything you need to know about this amazing game and how to download and install it on your Android device.
Frozen City is a survival game developed by Century Games Pte Ltd, where you have to build your shelter, scavenge for resources, craft weapons and tools, and fight against zombies and mutants in a post-apocalyptic world. The game is set in a city that has been frozen by a mysterious disaster, and you are one of the few survivors who have to struggle to survive. You can explore the city, find other survivors, join clans, trade items, and complete quests. The game has a realistic physics system, dynamic weather, day and night cycle, and stunning graphics.
-
What is a mod APK?
-
A mod APK is a modified version of an original APK (Android Package Kit) file, which is the format used to distribute and install applications on Android devices. A mod APK can have extra features, unlocked items, unlimited resources, or other advantages that are not available in the original version of the game or app. A mod APK can be created by anyone who has the skills and tools to modify the original APK file.
-
Why download Frozen City mod APK 1.0.6?
-
If you are a fan of Frozen City, you might want to download Frozen City mod APK 1.0.6 because it offers some benefits that can enhance your gaming experience. For example, you can enjoy free purchase, which means you can buy anything in the game without spending real money. You can also have unlimited resources, such as wood, metal, food, water, and energy, which are essential for building your shelter and crafting items. Moreover, you can play the game without any annoying ads that can interrupt your gameplay or consume your data.
-
-
Features of Frozen City mod APK 1.0.6
-
Free purchase
-
With Frozen City mod APK 1.0.6, you can buy anything in the game for free, such as weapons, armor, vehicles, furniture, decorations, and more. You don't need to worry about running out of money or gems, as you can have unlimited amounts of them with this mod.
-
Unlimited resources
-
Another feature of Frozen City mod APK 1.0.6 is that it gives you unlimited resources that you need to survive in the frozen city. You can have unlimited wood, metal, food, water, and energy with this mod, which means you don't need to scavenge for them or wait for them to regenerate. You can use them to build your shelter, craft items, cook food, and power your devices.
-
No ads
-
Frozen City mod APK 1.0.6 also removes all the ads that can appear in the game from time to time. Ads can be annoying and distracting when you are playing a survival game that requires your attention and data. With Frozen City mod APK 1.0.6, you can enjoy the game without any interruptions or distractions.
-
High-quality graphics and sound
-
Frozen City mod APK 1.0.6 does not compromise the quality of the graphics and sound of the game. In fact, it enhances them by making them more realistic and immersive. You can admire the details of the frozen city, the weather effects, the lighting, and the shadows. You can also hear the sounds of the zombies, the mutants, the weapons, and the environment. Frozen City mod APK 1.0.6 will make you feel like you are in a real post-apocalyptic world.
-
How to download and install Frozen City mod APK 1.0.6
-
Step 1: Enable unknown sources
-
Before you can download and install Frozen City mod APK 1.0.6, you need to enable unknown sources on your Android device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.
-
Step 2: Download the mod APK file
-
Next, you need to download the mod APK file of Frozen City from a reliable source. You can use this link to download it: [Frozen City mod APK 1.0.6]. Make sure you have enough storage space on your device before downloading it.
-
Step 3: Install the mod APK file
-
After downloading the mod APK file, you need to install it on your device. To do this, locate the file in your downloads folder or wherever you saved it, and tap on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for it to finish.
-
Step 4: Enjoy the game
-
Once the installation is done, you can launch the game from your app drawer or home screen. You will see a new icon with the name Frozen City mod APK 1.0.6. Tap on it and enjoy the game with all its features.
-
Conclusion
-
Frozen City mod APK 1.0.6 is a great way to enjoy a survival game in a frozen city where zombies and mutants are your enemies. You can have free purchase, unlimited resources, no ads, and high-quality graphics and sound with this mod. You can also explore the city, find other survivors, join clans, trade items, and complete quests with this mod. If you want to download and install Frozen City mod APK 1.0.6 on your Android device, just follow the steps we have provided in this article.
-
FAQs
-
Here are some frequently asked questions about Frozen City mod APK 1.0.6:
-
-
Is Frozen City mod APK 1.0.6 safe to use?
-
Yes, Frozen City mod APK 1.0.6 is safe to use as long as you download it from a trusted source and scan it with an antivirus before installing it.
-
Does Frozen City mod APK 1.0.6 require root access?
-
No, Frozen City mod APK 1.0.6 does not require root access to work on your device.
-
Can I play Frozen City mod APK 1.0.6 online with other players?
-
Yes, you can play Frozen City mod APK 1.0.6 online with other players who have the same version of the game.
-
Can I update Frozen City mod APK 1.0.6 when a new version is released?
-
No, you cannot update Frozen City mod APK 1.0.6 when a new version is released because it will overwrite the mod features and restore the original version of the game.
-
Can I uninstall Frozen City mod APK 1.0.6 if I don't like it?
-
Yes, you can uninstall Frozen City mod APK 1.0.6 if you don't like it or if it causes any problems on your device.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Clash Royale Bluestacks Play the Best Strategy Game on Your PC for Free.md b/spaces/1phancelerku/anime-remove-background/Clash Royale Bluestacks Play the Best Strategy Game on Your PC for Free.md
deleted file mode 100644
index 9a48139ea75e5bb20b76f7b1da4ad47813cd958e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Clash Royale Bluestacks Play the Best Strategy Game on Your PC for Free.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
How to Download and Play Clash Royale on Bluestacks
-
Clash Royale is one of the most popular and addictive mobile games in the world. It is a real-time strategy game where you collect cards, build decks, and battle other players online. You can join clans, chat with friends, unlock new cards, and earn chests full of rewards. But what if you want to play Clash Royale on a bigger screen, with better graphics, faster performance, and more control? That's where Bluestacks comes in.
-
Bluestacks is the best mobile gaming platform for PC and Mac. It lets you play thousands of Android games on your computer, with full keyboard and mouse support, custom settings, and advanced features. You can also stream your gameplay to Facebook or Twitch, record your screen, take screenshots, and more. With Bluestacks, you can enjoy playing Clash Royale on your PC or Mac like never before.
In this article, we will show you how to download and install Bluestacks on your PC or Mac, and how to play Clash Royale on it. Follow these simple steps and get ready to clash!
-
Step 1: Download and install Bluestacks on your PC or Mac
-
The first thing you need to do is to download Bluestacks from its official website. You can choose from different versions of Bluestacks, depending on your operating system and Android preference. For example, you can download Bluestacks 5 for Windows 10 with Android 11, or Bluestacks 5 Nougat 64-bit for Mac. Make sure your PC or Mac meets the minimum system requirements for Bluestacks before downloading it.
-
Once you have downloaded the Bluestacks installer, run it and follow the instructions to install it on your PC or Mac. You can choose the default location for installation or change it to a different drive. The installation process may take a few minutes, depending on your internet speed and computer performance.
-
Step 2: Launch Bluestacks and sign in with your Google account
-
After installing Bluestacks, launch it from your desktop or start menu and wait for the home screen to load.
-
-
-
Here, you need to sign in with your Google account to access the Google Play Store and other Google services. If you don't have a Google account yet, you can create one here. Signing in with your Google account will also sync your game progress and purchases across devices.
-
Step 3: Search for Clash Royale in the Google Play Store and install it
-
Now that you have signed in with your Google account, you can search for Clash Royale in the Google Play Store app on Bluestacks. You can find the app icon on the home screen or in the app center. Click on it to open it.
-
In the Google Play Store app, type "Clash Royale" in the search bar and hit enter to bring up the list of results.
-
-
Click on the first result that says "Clash Royale" by Supercell. This will take you to the game's page in the Google Play Store. Here, you can see more information about the game, such as its description, screenshots, reviews, ratings, etc.
-
To install Clash Royale on Bluestacks, click on the green "Install" button. This will start downloading and installing the game on your PC or Mac. The process may take a few minutes, depending on your internet speed.
-
Step 4: Enjoy playing Clash Royale on your PC or Mac with Bluestacks
-
Congratulations! You have successfully installed Clash Royale on Bluestacks. Now you can enjoy playing the game on your PC or Mac with a bigger screen, better graphics, faster performance, and more control. You can also use the Bluestacks features to enhance your gaming experience, such as:
-
-
Customize your keyboard and mouse controls to suit your play style. You can use the game guide to see the default controls or change them as you wish.
-
Use the multi-instance feature to play multiple accounts of Clash Royale at the same time. You can also switch between different instances easily with the multi-instance manager.
-
Use the macro feature to record and execute complex actions with a single keystroke. You can also edit and share your macros with other players.
-
Use the eco mode to reduce CPU and RAM usage and improve battery life. You can also enable or disable notifications, sound, and background apps.
-
-
With Bluestacks, you can take your Clash Royale gameplay to the next level. You can also explore other games in the Bluestacks app center, such as Clash of Clans, Brawl Stars, PUBG Mobile, and more.
-
Conclusion
-
In this article, we have shown you how to download and play Clash Royale on Bluestacks, the best mobile gaming platform for PC and Mac. We have also explained the benefits of playing Clash Royale on Bluestacks and how to use its features to enhance your gaming experience. We hope you found this article helpful and informative.
-
If you are a fan of Clash Royale or any other mobile game, we highly recommend you to try out Bluestacks. It is free, easy, and fun to use. You can download it from here and start playing your favorite games on your PC or Mac today.
-
Thank you for reading this article. If you have any questions or feedback, please leave them in the comments section below. We would love to hear from you. Happy clashing!
-
FAQs
-
Q: Is Bluestacks safe to use?
-
A: Yes, Bluestacks is safe to use. It is a legitimate software that has been downloaded by millions of users worldwide. It does not contain any malware, viruses, or spyware. It also does not access or modify any of your personal data or files.
-
Q: Is Bluestacks free to use?
-
A: Yes, Bluestacks is free to use. You can download and install it on your PC or Mac without paying anything. You can also play any game on it without any limitations or restrictions. However, some games may have in-app purchases or ads that require real money.
-
Q: How do I update Clash Royale on Bluestacks?
-
A: To update Clash Royale on Bluestacks, you need to follow these steps:
-
-
Open the Google Play Store app on Bluestacks.
-
Click on the menu icon (three horizontal lines) on the top left corner.
-
Select "My apps & games" from the menu.
-
Find Clash Royale in the list of installed apps and click on "Update".
-
Wait for the update to finish and launch the game.
-
-
Q: How do I transfer my Clash Royale account from my phone to Bluestacks?
-
A: To transfer your Clash Royale account from your phone to Bluestacks, you need to follow these steps:
-
-
On your phone, open Clash Royale and go to the settings menu (gear icon).
-
Select "Link a device" and then "This is the old device".
-
Select "I want to link to another device" and then "Android device".
-
You will see a code that is valid for two minutes.
-
On Bluestacks, open Clash Royale and go to the settings menu (gear icon).
-
Select "Link a device" and then "This is the new device".
-
Enter the code from your phone and confirm.
-
Your account will be transferred to Bluestacks.
-
-
Q: How do I contact Bluestacks support?
-
A: If you have any issues or problems with Bluestacks, you can contact their support team by following these steps:
-
-
Open Bluestacks and click on the menu icon (three horizontal lines) on the top right corner.
Select "Help and Support" from the menu.
-
You will see a list of topics and articles that may help you solve your issue.
-
If you still need assistance, click on the "Report a Problem" button at the bottom of the page.
-
Fill out the form with your name, email, description of the problem, and any attachments.
-
Click on the "Submit" button and wait for a response from the Bluestacks support team.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Downloader How to Boost Your Download Speeds and Manage Your Files.md b/spaces/1phancelerku/anime-remove-background/Download Downloader How to Boost Your Download Speeds and Manage Your Files.md
deleted file mode 100644
index f986a6aa849d33608e5c824006656739b8638f2f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Downloader How to Boost Your Download Speeds and Manage Your Files.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
Download Downloader: What Is It and Why Do You Need It?
-
If you frequently download files from the internet, you know how frustrating it can be to deal with slow speeds, broken links, timeouts, and other issues. That's why you need a download manager, also known as a download downloader. A download manager is a software tool that helps you manage your downloads more efficiently and effectively. It can boost your download speed, resume interrupted downloads, organize your files, convert formats, and more. In this article, we will show you how to choose the best download manager for your needs, review the top 5 free download managers of 2023, and give you some tips on how to use them effectively.
How to Choose the Best Download Manager for Your Needs
-
There are many download managers available on the market, but not all of them are created equal. Some may have more features than others, some may be more compatible with your device or browser, some may be more secure or user-friendly. Here are some factors to consider when selecting a download manager:
-
-
Speed: One of the main reasons to use a download manager is to increase your download speed. A good download manager should be able to accelerate your downloads by using multiple connections, splitting files into smaller chunks, and optimizing your bandwidth (see the sketch after this list for how segmented downloading works in practice).
-
Features: Another reason to use a download manager is to access more features than your browser's default downloader. A good download manager should be able to support various file types, protocols, and sources, such as HTTP, FTP, BitTorrent, YouTube, etc. It should also be able to preview files before downloading them, resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, convert formats if needed, and integrate with your browser or antivirus software.
-
Compatibility: A good download manager should be compatible with your device and browser. It should be able to run smoothly on your operating system (Windows, Mac OS X, Linux), whether it's desktop or mobile. It should also be able to work with your preferred browser (Chrome, Firefox, Edge), whether it's through an extension or a standalone app.
-
Security: A good download manager should be secure and reliable. It should be able to scan files for viruses or malware before downloading them. It should also be able to protect your privacy by encrypting your data or using proxy servers if needed.
-
-
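To make the speed point above concrete, here is a minimal, illustrative Python sketch of segmented downloading, the technique a download manager uses to pull one file over several connections at once. It is only a sketch: it assumes the server reports a Content-Length header and supports HTTP Range requests, that the requests library is installed, and that the URL and output filename are placeholders rather than a real link.

```python
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/big-file.zip"   # placeholder URL, not a real download link
OUT = "big-file.zip"                        # placeholder output filename
PARTS = 4                                   # number of parallel connections

# Ask the server how large the file is (assumes Content-Length is reported).
size = int(requests.head(URL, allow_redirects=True).headers["Content-Length"])
chunk = size // PARTS                       # assumes the file is larger than PARTS bytes

def fetch(i: int) -> bytes:
    """Download one segment of the file using an HTTP Range request."""
    start = i * chunk
    end = size - 1 if i == PARTS - 1 else start + chunk - 1
    resp = requests.get(URL, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
    resp.raise_for_status()
    return resp.content

# Fetch all segments in parallel, keeping them in order.
with ThreadPoolExecutor(max_workers=PARTS) as pool:
    segments = list(pool.map(fetch, range(PARTS)))

# Reassemble the segments into the final file.
with open(OUT, "wb") as f:
    for segment in segments:
        f.write(segment)
```

A real download manager adds retries, disk-backed partial segments, and checks that the server actually honored the Range header; this sketch only shows the core idea behind the speed gains.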
The Top 5 Free Download Managers of 2023
-
Download Accelerator Plus
-
Download Accelerator Plus (DAP) is one of the most popular download managers on the market. It has over 300 million users worldwide and boasts impressive speeds up to 400% faster than regular downloads. It also has a built-in media file previewer that lets you watch videos or listen to music before downloading them. DAP supports various protocols and sources, such as HTTP, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and antivirus software for seamless downloading. DAP is free to use, but you can upgrade to a premium version for more features and benefits.
-
Ninja Download Manager
-
Ninja Download Manager (NDM) is a powerful and well-designed download manager for media files. It has a sleek and intuitive interface that lets you manage your downloads easily and efficiently. NDM can accelerate your downloads by using multiple connections and smart logic. It can also resume broken downloads, schedule downloads for later times, organize downloads into categories, and convert formats if needed. NDM supports various protocols and sources, such as HTTP, HTTPS, FTP, YouTube, etc. It also integrates with your browser and clipboard for convenient downloading. NDM is free to use, but you can upgrade to a pro version for more features and benefits.
-
-
Free Download Manager
-
Free Download Manager (FDM) is a versatile and user-friendly download manager with BitTorrent support. It has a simple and clean interface that lets you manage your downloads easily and efficiently. FDM can accelerate your downloads by using multiple connections and splitting files into smaller chunks. It can also resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, and convert formats if needed. FDM supports various protocols and sources, such as HTTP, HTTPS, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and antivirus software for seamless downloading. FDM is free and open-source, but you can donate to support the developers.
-
JDownloader
-
JDownloader is a feature-rich and customizable download manager with remote control. It has a complex and advanced interface that lets you manage your downloads in detail and with flexibility. JDownloader can accelerate your downloads by using multiple connections and splitting files into smaller chunks. It can also resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, and convert formats if needed. JDownloader supports various protocols and sources, such as HTTP, HTTPS, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and clipboard for convenient downloading. JDownloader is free and open-source, but you can buy a premium account for more features and benefits.
-
Internet Download Manager
-
Internet Download Manager (IDM) is a fast and reliable download manager with browser integration. It has a simple and classic interface that lets you manage your downloads easily and efficiently. IDM can accelerate your downloads by using multiple connections and dynamic file segmentation. It can also resume broken downloads, schedule downloads for later times, organize downloads into folders or categories, and convert formats if needed. IDM supports various protocols and sources, such as HTTP, HTTPS, FTP, BitTorrent, YouTube, etc. It also integrates with your browser and antivirus software for seamless downloading. IDM is not free to use, but you can try it for 30 days before buying it.
-
How to Use a Download Manager Effectively
-
Now that you have learned about the best download managers of 2023, you may wonder how to use them effectively to optimize your download experience. Here are some tips and tricks on how to do that:
-
-
Schedule your downloads: If you have a lot of files to download or if you want to save bandwidth or battery life, you can schedule your downloads for later times when you are not using your device or when the internet connection is better.
-
Organize your downloads: If you have a lot of files to download or if you want to find them easily later on, you can organize your downloads into folders or categories based on their type, source, date, etc.
-
Resume your downloads: If your download is interrupted by an error or a power outage, or if you want to pause it for some reason, you can resume it from where it left off without losing any data or time (a minimal sketch of how resuming works appears after this list).
-
Convert your downloads: If your download is in a format that is not compatible with your device or player or if you want to reduce its size or quality, you can convert it to another format that suits your needs.
-
-
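As a companion to the resume tip above, here is a minimal Python sketch of what resuming actually does under the hood: the client asks the server for only the bytes it does not already have. It assumes the server honors HTTP Range requests and that the requests library is installed; the URL and filename are placeholders, and an already-complete file would make the server answer with HTTP 416 here.

```python
import os
import requests

URL = "https://example.com/big-file.zip"   # placeholder URL
DEST = "big-file.zip"                       # placeholder local filename

# Figure out how much of the file we already have on disk.
done = os.path.getsize(DEST) if os.path.exists(DEST) else 0

# Ask the server to send only the remaining bytes.
headers = {"Range": f"bytes={done}-"}

with requests.get(URL, headers=headers, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(DEST, "ab") as f:             # append to the partial file
        for block in resp.iter_content(chunk_size=1 << 16):
            f.write(block)
```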
Conclusion
-
A download manager is a software tool that helps you manage your downloads more efficiently and effectively. It can boost your download speed, resume interrupted downloads, organize your files, convert formats, and more. In this article, we have shown you how to choose the best download manager for your needs, reviewed the top 5 free download managers of 2023, and given you some tips on how to use them effectively. We hope you have found this article helpful and informative. If you want to try out a download manager for yourself, you can download one of the options we have mentioned above or search for other alternatives online. You will be amazed by how much easier and faster your download experience will be with a download manager. Happy downloading!
-
FAQs
-
Here are some frequently asked questions about download managers:
-
-
What is the difference between a download manager and a torrent client? A download manager is a software tool that helps you download files from various sources and protocols, such as HTTP, FTP, YouTube, etc. A torrent client is a software tool that helps you download files from BitTorrent, a peer-to-peer protocol that uses a network of users to share files.
-
Are download managers safe to use? Download managers are generally safe to use, as long as you download them from reputable sources and scan them for viruses or malware before installing them. However, you should also be careful about the files you download with them, as some of them may contain harmful or illegal content. Always check the file name, size, type, and source before downloading it.
-
Do download managers work with all browsers? Most download managers work with all major browsers, such as Chrome, Firefox, Edge, etc. However, some of them may require an extension or a plugin to integrate with your browser. You can check the compatibility of your download manager with your browser on its official website or in its settings.
-
Do download managers use more bandwidth or data? Download managers may use more bandwidth or data than regular downloads, as they use multiple connections and split files into smaller chunks to accelerate your downloads. However, this also depends on your internet speed, file size, and source. You can limit the bandwidth or data usage of your download manager in its settings if needed.
-
How can I uninstall a download manager? You can uninstall a download manager like any other software on your device. You can go to your control panel or settings and look for the option to uninstall programs or apps. You can then select your download manager and follow the instructions to remove it from your device.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Table No. 21 Full Movie in 720p HD Quality from Filmyzilla.md b/spaces/1phancelerku/anime-remove-background/Download Table No. 21 Full Movie in 720p HD Quality from Filmyzilla.md
deleted file mode 100644
index cd180cc76ad8b0eea5c88330d2ccc8a9b383b640..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Table No. 21 Full Movie in 720p HD Quality from Filmyzilla.md
+++ /dev/null
@@ -1,337 +0,0 @@
-
-
Table No. 21 Full Movie Download Filmyzilla 720p: A Thrilling and Illegal Adventure
-
If you are looking for a movie that will keep you on the edge of your seat, you might be tempted to download Table No. 21 full movie from Filmyzilla, a website that offers free downloads of pirated movies and shows. But before you do that, you should know what you are getting into and why it is not a good idea.
-
What is Table No. 21?
-
Table No. 21 is a 2013 Hindi thriller movie starring Paresh Rawal, Rajeev Khandelwal and Tina Desai. It is named after Article 21 of the Indian Constitution, which talks about the protection of life and personal liberty. The movie touches upon the pertinent social issue of ragging, or bullying in college campuses.
-
A brief summary of the plot
-
The movie follows Vivaan and Siya, a married couple who struggle to make ends meet. They win a trip to Fiji in a lucky draw, where they meet Mr. Khan, a mysterious and charming man who invites them to participate in a live game show called Table No. 21. He tells them that the winner of the game will get a whopping amount of ₹210 million as prize money. The rules are simple: they have to answer eight personal questions truthfully and complete a task related to each question. However, as the game progresses, the questions and tasks become increasingly horrific and reveal dark secrets from their past. They soon realize that they are trapped in a deadly game of survival with no escape.
-
The cast and crew of the movie
-
The movie is directed by Aditya Datt and produced by Eros International. The screenplay is written by Shantanu Ray Chhibber and Sheershak Anand, based on their own story. The music is composed by Gajendra Verma, Neeraj Shridhar and Sachin Gupta.
-
-
The main cast of the movie are:
-
-
Paresh Rawal as Abdul Razaq Khan, the host of the game show
-
Rajeev Khandelwal as Vivaan Agasthi, one of the contestants
-
Tina Desai as Siya Agasthi, Vivaan's wife and another contestant
-
Dhruv Ganesh as Akram Khan, Mr. Khan's son who was ragged by Vivaan and his friends in college
-
Asheesh Kapur as Bittoo, one of Vivaan's friends
-
Sana Amin Sheikh as Neeti, one of Siya's friends
-
Hanif Hilal as Ghouse, Mr. Khan's bodyguard
-
-
The critical reception and box office performance
-
The movie received mixed to positive reviews from critics and audiences alike. It was praised for its gripping plot, suspenseful twists, powerful performances, and social message. However, it was also criticized for its violence, implausible scenarios, and lack of originality.
-
The movie performed above average at the box office, earning ₹177.95 million against a budget of ₹85 million.
-
What is Filmyzilla?
-
Filmyzilla is a notorious website that provides free downloads of pirated movies and shows from Bollywood, Hollywood, Tollywood, and other regional film industries. It is one of the most popular and visited websites for movie piracy in India and across the world.
-
A notorious website for pirating movies and shows
-
Filmyzilla has been operating for several years and has a huge collection of movies and shows in various languages, genres, and formats. It uploads the latest releases within hours or days of their theatrical or digital premiere, often in high quality. It also offers old and classic movies, as well as dubbed and subbed versions of foreign movies.
-
Filmyzilla is an illegal website that violates the Indian and international laws on copyright and intellectual property rights. It hosts and distributes the pirated content without the permission or consent of the original creators or owners. It also generates revenue from advertisements and pop-ups that may contain malware or viruses.
-
The categories and formats of movies available on Filmyzilla
-
Filmyzilla has a user-friendly interface that allows users to browse and download movies and shows according to their preferences. It has various categories such as:
-
-
Bollywood Movies
-
Hollywood Movies
-
Hollywood Hindi Dubbed Movies
-
South Indian Hindi Dubbed Movies
-
Punjabi Movies
-
Bengali Movies
-
Tamil Movies
-
Telugu Movies
-
Malayalam Movies
-
Marathi Movies
-
Gujarati Movies
-
Kannada Movies
-
Urdu Movies
-
Pakistani Movies
-
Nepali Movies
-
Bhojpuri Movies
-
Web Series
-
TV Shows
-
Awards Shows
-
Documentaries
-
Anime
-
Cartoons
-
-
Filmyzilla also offers different formats and qualities of movies and shows such as:
-
-
MP4
-
MKV
-
AVI
-
WEBM
-
3GP
-
360p
-
480p
-
720p
-
1080p
-
HDRip
-
DVDRip
-
BluRay
-
DVDScr
CamRip
-
PreDVDRip
-
-
The latest movies leaked by Filmyzilla
-
Filmyzilla is notorious for leaking the latest movies and shows from various film industries. Some of the recent movies that have been leaked by Filmyzilla are:
-
-
Bell Bottom
-
Shershaah
-
Bhuj: The Pride of India
-
Mimi
-
Fast and Furious 9
-
Black Widow
-
The Suicide Squad
-
Jungle Cruise
-
Loki
-
The Family Man Season 2
-
Mirzapur Season 2
-
Scam 1992
-
Money Heist Season 4
-
Extraction
-
Tenet
-
-
How to download Table No. 21 full movie from Filmyzilla?
-
If you are still interested in downloading Table No. 21 full movie from Filmyzilla, you should know that it is not an easy or safe process. You will have to face many risks and challenges along the way, and you may also face legal consequences for your actions. Here are the steps to download the movie from Filmyzilla:
-
The steps to access and download the movie
-
-
First, you will need a VPN (Virtual Private Network) service to bypass the geo-restrictions and access the Filmyzilla website. A VPN will also protect your online identity and privacy from hackers and trackers.
-
Next, you will need to find a working domain name of Filmyzilla, as the website keeps changing its domain name to avoid detection and blocking by the authorities. Some of the common domain names of Filmyzilla are filmyzilla.com, filmyzilla.in, filmyzilla.net, filmyzilla.vip, filmyzilla.pro, filmyzilla.me, filmyzilla.co.in, filmyzilla.live, etc.
-
Once you find a working domain name, you will need to enter it in your browser and access the Filmyzilla website. You will see a lot of advertisements and pop-ups on the website, which may redirect you to other websites or download unwanted software on your device. You will have to close them or avoid clicking on them.
-
Then, you will need to search for Table No. 21 full movie on the website using the search bar or the categories. You will see a list of results with different formats and qualities of the movie. You will have to choose the one that suits your preference and device compatibility.
-
After that, you will need to click on the download link or button of the movie. You may have to go through some verification processes or captcha tests before you can start the download. You may also see some fake download links or buttons that may lead you to other websites or download malware on your device. You will have to be careful and avoid them.
-
Finally, you will need to wait for the download to complete and then enjoy watching Table No. 21 full movie on your device.
-
The risks and challenges of downloading from Filmyzilla
-
Downloading Table No. 21 full movie from Filmyzilla may seem like a convenient and cost-effective option, but it comes with many risks and challenges that may ruin your experience and cause you trouble. Some of the risks and challenges are:
-
-
You may download a corrupted or incomplete file that may not play properly or damage your device.
-
You may download a file that contains malware or viruses that may infect your device and compromise your data and security.
-
You may face slow download speeds, frequent interruptions, or low-quality videos due to the high traffic and low bandwidth of the website.
-
You may expose your online activity and identity to hackers and trackers who may monitor your browsing history, IP address, location, and personal information.
-
You may violate the terms and conditions of your internet service provider (ISP) and face penalties such as throttling, suspension, or termination of your service.
-
-
The legal consequences of movie piracy in India
-
Downloading Table No. 21 full movie from Filmyzilla is not only risky and challenging, but also illegal and punishable by law. Movie piracy is a serious crime in India that violates the Cinematograph Act of 1952, the Information Technology Act of 2000, and the Indian Penal Code of 1860. According to these laws, anyone who downloads, uploads, streams, distributes, or exhibits pirated movies or shows without the authorization of the rightful owners can face the following legal consequences:
-
-
A fine of up to ₹10 lakh or three times the value of the pirated content, whichever is higher.
-
A jail term of up to three years.
-
A civil lawsuit by the original creators or owners for damages and compensation.
-
A criminal case by the government for violating the national interest and security.
-
-
Why you should avoid downloading Table No. 21 from Filmyzilla?
-
By now, you should have realized that downloading Table No. 21 full movie from Filmyzilla is not worth it. It is a bad idea that will not only harm you, but also the film industry and the artists who work hard to create quality content for you. Here are some reasons why you should avoid downloading Table No. 21 from Filmyzilla:
-
The ethical and moral issues of supporting piracy
-
When you download Table No. 21 full movie from Filmyzilla, you are supporting piracy, which is an unethical and immoral act. Piracy is a form of theft that deprives the original creators and owners of their rightful earnings and recognition. It also disrespects their artistic vision and hard work. By downloading pirated movies, you are encouraging more piracy and discouraging more creativity. You are also depriving yourself of the authentic and enjoyable experience of watching movies in theatres or on legal platforms.
The impact of piracy on the film industry and the artists
-
When you download Table No. 21 full movie from Filmyzilla, you are also affecting the film industry and the artists who depend on it for their livelihood. Piracy causes huge losses to the producers, distributors, exhibitors, and other stakeholders of the film industry. According to a report by Ernst & Young, the Indian film industry lost ₹189.5 billion in 2018 due to piracy. Piracy also affects the quality and quantity of movies that are made, as it reduces the incentive and resources for filmmakers to invest in new projects. Piracy also deprives the artists of their fair share of revenue and appreciation, which may demotivate them and affect their career prospects.
-
The alternatives to watch Table No. 21 legally and safely
-
Instead of downloading Table No. 21 full movie from Filmyzilla, you should opt for legal and safe alternatives to watch the movie. There are many platforms that offer Table No. 21 for online streaming or download at a reasonable price. Some of them are:
-
-
Eros Now: This is the official platform of Eros International, the producer of Table No. 21. You can watch the movie on Eros Now with a subscription plan that starts from ₹49 per month. You can also download the movie for offline viewing on your device.
-
YouTube: This is the most popular and accessible platform for watching movies and shows online. You can rent or buy Table No. 21 on YouTube for ₹25 or ₹50 respectively. You can also download the movie for offline viewing on your device.
-
Google Play Movies: This is another platform that allows you to rent or buy movies and shows online. You can rent or buy Table No. 21 on Google Play Movies for ₹25 or ₹50 respectively. You can also download the movie for offline viewing on your device.
-
Amazon Prime Video: This is one of the leading platforms for streaming movies and shows online. You can watch Table No. 21 on Amazon Prime Video with a subscription plan that starts from ₹129 per month or ₹999 per year. You can also download the movie for offline viewing on your device.
-
-
By choosing these alternatives, you will not only enjoy watching Table No. 21 in high quality and without any interruptions, but also support the film industry and the artists who deserve your respect and admiration.
-
Conclusion
-
Table No. 21 is a thrilling and engaging movie that will keep you hooked till the end. It is a movie that deserves to be watched legally and safely, not illegally and riskily. Downloading Table No. 21 full movie from Filmyzilla is a bad idea that will expose you to many dangers and troubles, as well as harm the film industry and the artists who work hard to entertain you. Therefore, you should avoid downloading Table No. 21 from Filmyzilla and opt for legal and safe alternatives to watch the movie.
-
FAQs
-
Here are some frequently asked questions about Table No. 21 and Filmyzilla:
-
-
Is Table No. 21 based on a true story?
-
No, Table No. 21 is not based on a true story, but it is inspired by Article 21 of the Indian Constitution, which talks about the protection of life and personal liberty.
-
What is the meaning of Table No. 21?
-
Table No. 21 is the name of the game show that Mr. Khan hosts in the movie. It is also a reference to Article 21 of the Indian Constitution, which is violated by Mr. Khan in his quest for revenge.
-
What is ragging and why is it an issue in India?
-
Ragging is a form of bullying that involves physical, mental, or sexual abuse of new or junior students by senior students in educational institutions. It is an issue in India because it causes many cases of harassment, humiliation, injury, suicide, and murder among students every year.
-
How does Filmyzilla get access to new movies?
-
Filmyzilla gets access to new movies by using various sources such as camcorders, screen recorders, hacked servers, leaked copies, etc. It then uploads them on its website or shares them with other websites.
-
How can I report or block Filmyzilla?
-
You can report or block Filmyzilla by contacting your ISP, cybercrime cell, or anti-piracy cell and providing them with the details of the website. You can also use software or extensions that block access to pirated websites.
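If you want to act on that last answer at the operating-system level, here is a minimal, hypothetical Python sketch that blocks a domain by adding hosts-file entries pointing it at 0.0.0.0. It has to be run with administrator or root privileges, the domains listed are only examples, and browser extensions or ISP-level blocking achieve the same result without editing system files.

```python
import platform

# Standard hosts-file locations; adjust if your system differs.
HOSTS = (r"C:\Windows\System32\drivers\etc\hosts"
         if platform.system() == "Windows" else "/etc/hosts")

# Example domains to block; extend the list as needed.
BLOCKED = ["filmyzilla.com", "www.filmyzilla.com"]

# Append one blocking entry per domain (requires elevated privileges).
with open(HOSTS, "a", encoding="utf-8") as f:
    for domain in BLOCKED:
        f.write(f"\n0.0.0.0 {domain}")
```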
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Treasure Mathstorm and Join the Super Solvers in an Amazing Adventure.md b/spaces/1phancelerku/anime-remove-background/Download Treasure Mathstorm and Join the Super Solvers in an Amazing Adventure.md
deleted file mode 100644
index 386139dc37e49f3eff19d131b24763c7600ce76e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Treasure Mathstorm and Join the Super Solvers in an Amazing Adventure.md
+++ /dev/null
@@ -1,152 +0,0 @@
-
-
How to Download Treasure Mathstorm: A Fun and Educational Game for Kids
-
Do you want to help your kids learn math in a fun and engaging way? Do you want to introduce them to a classic educational game that has entertained and challenged millions of children around the world? If you answered yes, then you should download Treasure Mathstorm, a game that combines math, adventure, and humor in a delightful way.
Treasure Mathstorm is an educational game designed for kids ages 6 to 8. It was developed by The Learning Company in 1992 and it is part of the Super Solvers series. In this game, you have to help the elves restore Treasure Mountain by solving math problems and finding treasures. Along the way, you will encounter various obstacles, puzzles, and surprises that will make your journey more exciting.
-
In this article, we will tell you everything you need to know about Treasure Mathstorm, including what it is, how to download it, and how to play it. We will also share some tips and tricks to help you get the most out of this game. So, let's get started!
-
What is Treasure Mathstorm?
-
Treasure Mathstorm is an educational game that teaches kids various math skills and concepts in a fun and interactive way. It is suitable for kids who are in grades 1 to 3 or who have a basic knowledge of arithmetic. The game covers topics such as addition, subtraction, multiplication, division, fractions, decimals, time, money, measurement, geometry, logic, and problem-solving.
-
The story and the goal of the game
-
The story of Treasure Mathstorm is that the Master of Mischief, a villain who likes to cause trouble, has invented a machine that changes the weather and freezes Treasure Mountain. He has also hidden all the treasures on the mountain and locked them with math problems. Your goal is to restore the mountain by locating different treasures on the mountain and returning them to the castle at the top. When all the treasures have been restored, the king will have his power back and all of the ice will melt.
-
-
The math skills and concepts covered in the game
-
The math skills and concepts covered in Treasure Mathstorm are divided into three levels of difficulty: easy, medium, and hard. You can choose which level you want to play at any time during the game. The math skills and concepts covered in each level are as follows:
-
-
Easy: addition and subtraction up to 18, telling time by hours and half-hours, counting money up to $1.00, identifying shapes and colors.
-
Medium: addition and subtraction up to 99, telling time by quarter-hours, counting money up to $5.00, identifying fractions (halves, thirds, fourths), measuring length with inches.
-
Hard: addition and subtraction up to 999, telling time by minutes, counting money up to $10.00, identifying fractions (sixths, eighths), measuring length with feet.
-
-
The features and benefits of the game
-
Treasure Mathstorm has many features and benefits that make it a great educational game for kids. Some of them are:
-
-
It adapts to your child's skill level and progress. The game automatically adjusts the difficulty of the math problems based on your child's performance. It also keeps track of your child's scores and achievements.
-
It provides feedback and encouragement. The game gives your child immediate feedback on whether they answered a math problem correctly, provides hints and explanations when needed, and praises your child for their efforts and achievements.
-
It offers variety and fun. The game has different types of math problems and activities that keep your child engaged and motivated. It also has colorful graphics, animations, sound effects, and music that make the game more enjoyable.
-
It fosters creativity and exploration. The game allows your child to explore the mountain and discover different treasures and surprises. It also lets your child customize their character and their backpack with different items and accessories.
-
-
How to download Treasure Mathstorm?
-
If you want to download Treasure Mathstorm, you need to make sure that your computer meets the system requirements and compatibility of the game. You also need to find a reliable source and link to download the game. Finally, you need to follow the steps and tips to install and run the game on your computer.
-
The system requirements and compatibility of the game
-
Treasure Mathstorm is an old game that was originally designed for DOS and Windows 3.x operating systems. Therefore, it may not run smoothly on modern computers with newer operating systems such as Windows 10, Mac OS, or Linux. However, there are ways to make the game compatible with your computer by using emulators or virtual machines.
-
An emulator is software that mimics an old operating system or device on your computer. A virtual machine is software that creates a separate environment on your computer in which an old operating system can run. Both approaches let you run old games and programs without affecting your main system.
-
Some of the popular emulators and virtual machines that you can use to run Treasure Mathstorm are:
-
-
DOSBox: an emulator that runs DOS games and programs on Windows, Mac OS, Linux, and other platforms.
-
ScummVM: an interpreter for classic adventure games built on the SCUMM engine and similar engines. Treasure Mathstorm is a DOS title, so DOSBox is usually the more suitable option for it.
-
VirtualBox: a virtual machine that runs various operating systems such as Windows 3.x, Windows 95, Windows 98, etc.
-
VMware: another virtual machine that runs various operating systems such as Windows 3.x, Windows 95, Windows 98, etc.
-
-
You can download these emulators and virtual machines from their official websites or from other trusted sources. You can also find tutorials and guides on how to use them online.
-
The sources and links to download the game
-
Once you have chosen an emulator or a virtual machine to run Treasure Mathstorm, you need a source from which to download the game. Many websites offer old games for free or for a small fee, but not all of them are safe and legal. Some distribute files that contain viruses, malware, or spyware that can harm your computer or steal your personal information, and some violate the copyright or the terms of service of the game's original developers or publishers.
-
Therefore, be careful and selective when choosing where to download Treasure Mathstorm. Check the reputation and reviews of a website before downloading anything from it, scan the downloaded files with an antivirus program before opening them, and respect the rights and wishes of the original developers and publishers.
-
Some of the reputable and legal sources and links to download Treasure Mathstorm are:
-
-
The Learning Company: the original developer and publisher of Treasure Mathstorm. They offer a digital download of the game for $9.99 on their website.
-
GOG.com: a digital distribution platform that sells old games that are DRM-free (no copy protection) and compatible with modern systems. They offer Treasure Mathstorm for $5.99 on their website.
-
Abandonia: a website that hosts old games that are abandoned by their developers or publishers. They offer Treasure Mathstorm for free on their website.
-
-
The steps and tips to install and run the game
-
After you have downloaded Treasure Mathstorm from a source and a link of your choice, you need to follow these steps and tips to install and run the game on your computer:
-
-
Extract the downloaded files from the ZIP or RAR archive using a program such as WinZip or WinRAR.
-
Create a folder on your computer where you want to store the game files.
-
Copy or move the extracted files to the folder you created in step 2.
-
Open the emulator or the virtual machine of your choice and configure it according to the instructions and the system requirements of the game.
-
Mount or load the game folder or the game file (usually an .exe or a .bat file) in the emulator or virtual machine and start the game; a sample DOSBox session is sketched after this list.
-
Enjoy playing Treasure Mathstorm!
-
-
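For example, if you use DOSBox, a typical session might look like the sketch below. The folder and executable names (C:\OldGames\TMSTORM and TMSTORM.EXE) are only placeholders; substitute whatever names your extracted copy actually uses.

```
Z:\> mount c C:\OldGames
Z:\> c:
C:\> cd TMSTORM
C:\TMSTORM> TMSTORM.EXE
```

If the game then runs too fast or too slow, you can lower or raise the emulated CPU speed in DOSBox with Ctrl+F11 and Ctrl+F12.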
Some tips and tricks to help you install and run the game are:
-
-
If you encounter errors or problems while installing or running the game, try changing the emulator or virtual machine settings, such as memory, sound, or graphics.
-
If you want to save your progress and your scores in the game, you need to create a save file on the emulator or the virtual machine. You can also backup your save file on your computer or on a cloud service.
-
If you want to play Treasure Mathstorm with other players online, you can use a program such as DOSBox Daum or DOSBox-X that supports multiplayer mode. You can also use a program such as Hamachi or Tunngle that creates a virtual network for online gaming.
-
-
How to play Treasure Mathstorm?
-
Now that you have installed and run Treasure Mathstorm on your computer, you are ready to play it. In this section, we will explain how to play Treasure Mathstorm, including the main screen and the menu options of the game, the levels and the challenges of the game, and the rewards and the achievements of the game.
-
The main screen and the menu options of the game
-
The main screen of Treasure Mathstorm is where you can see your character, your backpack, your score, your level, and your time. You can also see the mountain and the castle in the background. You can use your mouse or your keyboard to move your character around and interact with different objects and characters on the screen.
-
The menu options of Treasure Mathstorm are located at the top of the screen. You can access them by clicking on them with your mouse or by pressing a key on your keyboard. The menu options are:
-
-
File: where you can start a new game, load a saved game, save your current game, quit the game, or change your player name.
-
Options: where you can change the difficulty level of the math problems, turn on or off the music and sound effects, adjust the volume, or view the credits.
-
Help: where you can get help on how to play Treasure Mathstorm, how to use DOSBox or ScummVM, or how to contact The Learning Company.
-
-
The levels and the challenges of the game
-
Treasure Mathstorm has three levels of difficulty: easy, medium, and hard. You can choose which level you want to play at any time during the game, and the level you choose affects the type and the number of math problems you have to solve. The game also has 10 levels of challenges that you have to complete in order to restore the mountain. Each level has a different theme and a different number of treasures to find:

- Level 1: Snowy Slopes (10 treasures)
- Level 2: Icy Caves (15 treasures)
- Level 3: Frozen Forest (20 treasures)
- Level 4: Snowman Village (25 treasures)
- Level 5: Ice Castle (30 treasures)
- Level 6: Crystal Caverns (35 treasures)
- Level 7: Blizzard Bluffs (40 treasures)
- Level 8: Polar Peak (45 treasures)
- Level 9: Cloud City (50 treasures)
- Level 10: Treasure Mountain (55 treasures)

To complete a level, you have to find all the treasures on that level and return them to the castle at the top of the mountain. To find a treasure, you have to solve the math problem that is attached to it; to return a treasure, you carry it to the castle and drop it in the correct bin.

The math problems in Treasure Mathstorm are varied and fun. They include:

- Addition and subtraction problems that involve snowballs, snowflakes, icicles, etc.
- Multiplication and division problems that involve snowmen, penguins, polar bears, etc.
- Fraction problems that involve pies, pizzas, cakes, etc.
- Decimal problems that involve thermometers, clocks, scales, etc.
- Time problems that involve clocks, watches, calendars, etc.
- Money problems that involve coins, bills, wallets, etc.
- Measurement problems that involve rulers, tapes, scales, etc.
- Geometry problems that involve shapes, angles, lines, etc.
- Logic problems that involve patterns, sequences, puzzles, etc.
- Problem-solving problems that involve word problems, equations, graphs, etc.

The math problems are not only educational but also entertaining. They have humorous scenarios and characters that make the game more enjoyable. For example:

- You have to help a snowman find his missing nose by solving a fraction problem.
- You have to help a penguin buy a hat by solving a money problem.
- You have to help a polar bear catch a fish by solving a geometry problem.
- You have to help a cloud fairy make a rainbow by solving a logic problem.
The rewards and the achievements of the game
-
Treasure Mathstorm has many rewards and achievements that motivate you to play the game and improve your math skills. Some of them are:
-
-
You can earn stars for each math problem you solve correctly. The more stars you earn, the higher your score will be.
-
You can earn medals for each level you complete. The medals are bronze, silver, gold, and platinum. The higher the medal, the better your performance on that level.
-
You can earn trophies for each level of difficulty you complete. The trophies are easy, medium, and hard. The higher the trophy, the more challenging the math problems you solved.
-
You can earn badges for special achievements in the game. The badges are explorer, adventurer, mastermind, super solver, etc. The more badges you earn, the more skills and concepts you mastered.
-
You can customize your character and your backpack with different items and accessories that you find or buy in the game. You can also change your character's name and appearance.
-
-
Conclusion
-
Treasure Mathstorm is an educational game that teaches kids math skills and concepts in a fun and interactive way. It is suitable for kids in grades 1 to 3 or anyone with a basic knowledge of arithmetic, and it covers topics such as addition, subtraction, multiplication, division, fractions, decimals, time, money, measurement, geometry, logic, and problem-solving.

It is also an engaging game that combines math, adventure, and humor. Colorful graphics, animations, sound effects, and music make it enjoyable, while obstacles, puzzles, surprises, and a wide variety of math problems and activities keep it exciting and varied.

Treasure Mathstorm is an old game that was originally designed for DOS and Windows 3.x, so it may not run smoothly on modern computers running Windows 10, Mac OS, or Linux. However, you can make it work by using an emulator or a virtual machine.

You can download Treasure Mathstorm from various sources online, but be careful and selective: check a website's reputation and reviews before downloading anything, scan the downloaded files with an antivirus program before opening them, and respect the rights and wishes of the game's original developers and publishers.

Once you have downloaded the game, follow the steps and tips above to install and run it, and then enjoy playing and learning math at the same time.

We hope that this article has helped you learn how to download Treasure Mathstorm: a fun and educational game for kids. If you have any questions or comments about Treasure Mathstorm or this article, please feel free to contact us or leave a comment below. Thank you for reading!
FAQs
-
Here are some frequently asked questions about Treasure Mathstorm:
-
-
Q: How long does it take to complete Treasure Mathstorm?
-
A: It depends on your skill level and your speed. However, it usually takes about 10 to 15 hours to complete all 10 levels of Treasure Mathstorm.
-
Q: How can I get more stars, medals, trophies, and badges in Treasure Mathstorm?
-
A: You can get more stars by solving more math problems correctly. You can get more medals by completing more levels with higher scores. You can get more trophies by completing more levels of difficulty. You can get more badges by achieving special goals in the game.
-
Q: How can I save my progress and my scores in Treasure Mathstorm?
-
A: You can save your progress and your scores in Treasure Mathstorm by creating a save file on the emulator or the virtual machine that you are using. You can also backup your save file on your computer or on a cloud service.
-
Q: How can I play Treasure Mathstorm with other players online?
-
A: You can play Treasure Mathstorm with other players online by using a program such as DOSBox Daum or DOSBox-X that supports multiplayer mode. You can also use a program such as Hamachi or Tunngle that creates a virtual network for online gaming.
-
Q: Where can I find more information and resources about Treasure Mathstorm?
-
A: You can find more information and resources about Treasure Mathstorm on these websites:
-
-
The Learning Company: the original developer and publisher of Treasure Mathstorm. They offer a digital download of the game for $9.99 on their website.
-
GOG.com: a digital distribution platform that sells old games that are DRM-free (no copy protection) and compatible with modern systems. They offer Treasure Mathstorm for $5.99 on their website.
-
Abandonia: a website that hosts old games that are abandoned by their developers or publishers. They offer Treasure Mathstorm for free on their website.
-
MobyGames: a website that provides information and reviews about old games. They have a page dedicated to Treasure Mathstorm on their website.
-
Wikipedia: a free online encyclopedia that provides information about various topics. They have an article about Treasure Mathstorm on their website.
-
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/30SecondsToMoon/30SecondsToMoon/README.md b/spaces/30SecondsToMoon/30SecondsToMoon/README.md
deleted file mode 100644
index 32bfc53b203454ed16de26d490b66119e5c8043e..0000000000000000000000000000000000000000
--- a/spaces/30SecondsToMoon/30SecondsToMoon/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 30SecondsToMoon
-emoji: 📉
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/visualization_utils.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/visualization_utils.py
deleted file mode 100644
index bab02be31a6ca44486f98d57de4ab4bfa89394b7..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/visualization_utils.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from PIL import ImageDraw
-
-
-def show_bboxes(img, bounding_boxes, facial_landmarks=[]):
- """Draw bounding boxes and facial landmarks.
-
- Arguments:
- img: an instance of PIL.Image.
- bounding_boxes: a float numpy array of shape [n, 5].
- facial_landmarks: a float numpy array of shape [n, 10].
-
- Returns:
- an instance of PIL.Image.
- """
-
- img_copy = img.copy()
- draw = ImageDraw.Draw(img_copy)
-
- for b in bounding_boxes:
- draw.rectangle([
- (b[0], b[1]), (b[2], b[3])
- ], outline='white')
-
- for p in facial_landmarks:
- for i in range(5):
- draw.ellipse([
- (p[i] - 1.0, p[i + 5] - 1.0),
- (p[i] + 1.0, p[i + 5] + 1.0)
- ], outline='blue')
-
- return img_copy
diff --git a/spaces/AIWaves/Software_Company/app.py b/spaces/AIWaves/Software_Company/app.py
deleted file mode 100644
index f61d5b5befa277298e7d06e657e5cdef0e14066f..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Software_Company/app.py
+++ /dev/null
@@ -1,254 +0,0 @@
-import sys
-
-import os
-from gradio_base import WebUI, UIHelper, PORT, HOST, Client
-from gradio_config import GradioConfig as gc
-from typing import List, Tuple, Any
-import gradio as gr
-import time
-
-class CodeUI(WebUI):
-
- def render_and_register_ui(self):
- self.agent_name:list = [self.cache["agents_name"]] if isinstance(self.cache["agents_name"], str) else self.cache['agents_name']
- gc.add_agent(self.agent_name)
-
- def __init__(
- self,
- client_cmd: list,
- socket_host: str = HOST,
- socket_port: int = PORT,
- bufsize: int = 1024,
- ui_name: str = "CodeUI"
- ):
- super(CodeUI, self).__init__(client_cmd, socket_host, socket_port, bufsize, ui_name)
- self.first_recieve_from_client()
- self.data_history = list()
- self.caller = 0
-
- def construct_ui(self):
- with gr.Blocks(css=gc.CSS) as demo:
- gr.Markdown("""# Agents""")
-            gr.Markdown("""**Agents** is an open-source library/framework for building autonomous language agents. If you want to know more about **Agents**, please check our 📄 Paper and 📦 GitHub. Here is a demo of **Agents**.""")
-            gr.Markdown("""If an error occurs or the queue is too long, please create your own demo by clicking Duplicate This Space in the upper right corner. Building takes about 3-4 minutes, so please be patient. Thank you!""")
- with gr.Row():
- with gr.Column():
- self.text_api = gr.Textbox(
- value = self.cache["api_key"],
- placeholder="openai key",
- label="Please input valid openai key for gpt-3.5-turbo-16k."
- )
- self.radio_mode = gr.Radio(
- [Client.SINGLE_MODE],
- value=Client.SINGLE_MODE,
- interactive=True,
- label = Client.MODE_LABEL,
- info = Client.MODE_INFO
- )
- self.chatbot = gr.Chatbot(
- elem_id="chatbot1"
- )
- self.btn_next = gr.Button(
- value="Next Agent",
- visible=False, elem_id="btn"
- )
- with gr.Row():
- self.text_requirement = gr.Textbox(
- value=self.cache['requirement'],
- placeholder="Please enter your content",
- scale=9,
- )
- self.btn_start = gr.Button(
- value="Start!",
- scale=1
- )
- self.btn_reset = gr.Button(
- value="Restart",
- visible=False
- )
-
- with gr.Column():
- self.file = gr.File(visible=False)
- self.chat_code_show = gr.Chatbot(
- elem_id="chatbot1",
- visible=False
- )
-
- self.btn_start.click(
- fn=self.btn_send_when_click,
- inputs=[self.chatbot, self.text_requirement, self.radio_mode, self.text_api],
- outputs=[self.chatbot, self.btn_start, self.text_requirement, self.btn_reset]
- ).then(
- fn=self.btn_send_after_click,
- inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement],
- outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next]
- )
- self.text_requirement.submit(
- fn=self.btn_send_when_click,
-                inputs=[self.chatbot, self.text_requirement, self.radio_mode, self.text_api],  # include the mode radio so the inputs match btn_send_when_click(chatbot, text_requirement, mode, api)
- outputs=[self.chatbot, self.btn_start, self.text_requirement, self.btn_reset]
- ).then(
- fn=self.btn_send_after_click,
- inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement],
- outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next]
- )
- self.btn_reset.click(
- fn=self.btn_reset_when_click,
- inputs=[],
- outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next]
- ).then(
- fn=self.btn_reset_after_click,
- inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement],
- outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next]
- )
- self.file.select(
- fn=self.file_when_select,
- inputs=[self.file],
- outputs=[self.chat_code_show]
- )
- self.btn_next.click(
- fn = self.btn_next_when_click,
- inputs=[],
- outputs=[self.btn_next]
- ).then(
- fn=self.btn_send_after_click,
- inputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement],
- outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next]
- )
-
- self.demo = demo
-
-
- def handle_message(self, history:list, state, agent_name, token, node_name):
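-        # state % 10 encodes the streaming phase: 0 = start a new entry for this agent in the current bubble, 1 = append the token to the current entry, 2 = open a new chat bubble.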
- if state % 10 == 0:
- self.data_history.append({agent_name: token})
- elif state % 10 == 1:
-            # Same state: append the streamed token to the current bubble.
- if len(self.data_history) == 0:
- self.data_history.append({agent_name:""})
- self.data_history[-1][agent_name] += token
- elif state % 10 == 2:
- # New state. Need to add new bubble.
- history.append([None, ""])
- self.data_history.clear()
- self.data_history.append({agent_name: token})
- else:
- assert False, "Invalid state."
- render_data = self.render_bubble(history, self.data_history, node_name, render_node_name=True)
- return render_data
-
- def btn_send_when_click(self, chatbot, text_requirement, mode, api):
- """
- inputs=[self.chatbot, self.text_requirement, radio, text_api],
- outputs=[self.chatbot, self.btn_start, self.text_requirement, self.btn_reset]
- """
- chatbot = [[UIHelper.wrap_css(content=text_requirement, name="User"), None]]
- yield chatbot,\
- gr.Button.update(visible=True, interactive=False, value="Running"),\
- gr.Textbox.update(visible=True, interactive=False, value=""),\
- gr.Button.update(visible=False, interactive=False)
- self.send_start_cmd({'requirement': text_requirement, "mode": mode, "api_key": api})
- return
-
- def btn_send_after_click(
- self,
- file,
- history,
- show_code,
- btn_send,
- btn_reset,
- text_requirement
- ):
- """
- outputs=[self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next]
- """
- if self.caller == 0:
- self.data_history = list()
- self.caller = 0
- receive_server = self.receive_server
- while True:
- data_list: List = receive_server.send(None)
- for item in data_list:
- data = eval(item)
- assert isinstance(data, list)
- state, agent_name, token, node_name = data
- assert isinstance(state, int)
- assert state in [10, 11, 12, 99, 98]
- if state == 99:
- # finish
- fs = [self.cache['pwd']+'/output_code/'+_ for _ in os.listdir(self.cache['pwd']+'/output_code')]
- yield gr.File.update(value=fs, visible=True, interactive=True),\
- history, \
- gr.Chatbot.update(visible=True),\
- gr.Button.update(visible=True, interactive=True, value="Start"),\
- gr.Button.update(visible=True, interactive=True),\
- gr.Textbox.update(visible=True, interactive=True, placeholder="Please input your requirement", value=""),\
- gr.Button.update(visible=False)
- return
- elif state == 98:
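-                    # 98: pause and show the "Next Agent" button; the run resumes when the user clicks it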
- yield gr.File.update(visible=False),\
- history, \
- gr.Chatbot.update(visible=False),\
- gr.Button.update(visible=True, interactive=False),\
- gr.Button.update(visible=True, interactive=True),\
- gr.Textbox.update(visible=True, interactive=False),\
- gr.Button.update(visible=True, value=f"Next Agent: 🤖{agent_name} | Next Node: ⭕{node_name}")
- return
- history = self.handle_message(history, state, agent_name, token, node_name)
- yield gr.File.update(visible=False),\
- history, \
- gr.Chatbot.update(visible=False),\
- gr.Button.update(visible=True, interactive=False),\
- gr.Button.update(visible=False, interactive=False),\
- gr.Textbox.update(visible=True, interactive=False),\
- gr.Button.update(visible=False)
-
- def btn_reset_when_click(self):
- """
- inputs = []
- outputs = [self.file, self.chatbot, self.chat_code_show, self.btn_start, self.btn_reset, self.text_requirement, self.btn_next]
- """
- return gr.File.update(visible=False),\
- None, None, gr.Button.update(value="Restarting...", interactive=False),\
- gr.Button.update(value="Restarting...", interactive=False),\
- gr.Textbox.update(value="Restarting", interactive=False),\
- gr.Button.update(visible=False)
-
- def btn_reset_after_click(
- self,
- file,
- chatbot,
- show_code,
- btn_send,
- btn_reset,
- text_requirement
- ):
- self.reset()
- self.first_recieve_from_client(reset_mode=True)
- return gr.File.update(value=None, visible=False),\
- gr.Chatbot.update(value=None, visible=True),\
- gr.Chatbot.update(value=None, visible=False),\
- gr.Button.update(value="Start", visible=True, interactive=True),\
- gr.Button.update(value="Restart", interactive=False, visible=False),\
- gr.Textbox.update(value=self.cache['requirement'], interactive=True, visible=True),\
- gr.Button.update(visible=False)
-
- def file_when_select(self, file):
- CODE_PREFIX = "```python\n{}\n```"
- with open(file.name, "r", encoding='utf-8') as f:
- contents = f.readlines()
- codes = "".join(contents)
- return [[CODE_PREFIX.format(codes),None]]
-
- def btn_next_when_click(self):
-        self.caller = 1  # keep the accumulated contents of self.data_history for the next round
- self.send_message("nothing")
- time.sleep(0.5)
- yield gr.Button.update(visible=False)
- return
-
-
-if __name__ == '__main__':
- ui = CodeUI(client_cmd=["python","gradio_backend.py"])
- ui.construct_ui()
- ui.run()
\ No newline at end of file
diff --git a/spaces/AIZero2Hero4Health/4-ImageSimilaritySearch-SL/app.py b/spaces/AIZero2Hero4Health/4-ImageSimilaritySearch-SL/app.py
deleted file mode 100644
index 9b287e491115a6952e8577523bef64c2cb57686b..0000000000000000000000000000000000000000
--- a/spaces/AIZero2Hero4Health/4-ImageSimilaritySearch-SL/app.py
+++ /dev/null
@@ -1,186 +0,0 @@
-from html import escape
-import re
-import streamlit as st
-import pandas as pd, numpy as np
-from transformers import CLIPProcessor, CLIPModel
-from st_clickable_images import clickable_images
-
-@st.cache(
- show_spinner=False,
- hash_funcs={
- CLIPModel: lambda _: None,
- CLIPProcessor: lambda _: None,
- dict: lambda _: None,
- },
-)
-def load():
- model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
- processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
- df = {0: pd.read_csv("data.csv"), 1: pd.read_csv("data2.csv")}
- embeddings = {0: np.load("embeddings.npy"), 1: np.load("embeddings2.npy")}
- for k in [0, 1]:
- embeddings[k] = embeddings[k] / np.linalg.norm(
- embeddings[k], axis=1, keepdims=True
- )
- return model, processor, df, embeddings
-
-
-model, processor, df, embeddings = load()
-source = {0: "\nSource: Unsplash", 1: "\nSource: The Movie Database (TMDB)"}
-
-
-def compute_text_embeddings(list_of_strings):
- inputs = processor(text=list_of_strings, return_tensors="pt", padding=True)
- result = model.get_text_features(**inputs).detach().numpy()
- return result / np.linalg.norm(result, axis=1, keepdims=True)
-
-
-def image_search(query, corpus, n_results=24):
- positive_embeddings = None
-
- def concatenate_embeddings(e1, e2):
- if e1 is None:
- return e2
- else:
- return np.concatenate((e1, e2), axis=0)
-
- splitted_query = query.split("EXCLUDING ")
- dot_product = 0
- k = 0 if corpus == "Unsplash" else 1
- if len(splitted_query[0]) > 0:
- positive_queries = splitted_query[0].split(";")
- for positive_query in positive_queries:
- match = re.match(r"\[(Movies|Unsplash):(\d{1,5})\](.*)", positive_query)
- if match:
- corpus2, idx, remainder = match.groups()
- idx, remainder = int(idx), remainder.strip()
- k2 = 0 if corpus2 == "Unsplash" else 1
- positive_embeddings = concatenate_embeddings(
- positive_embeddings, embeddings[k2][idx : idx + 1, :]
- )
- if len(remainder) > 0:
- positive_embeddings = concatenate_embeddings(
- positive_embeddings, compute_text_embeddings([remainder])
- )
- else:
- positive_embeddings = concatenate_embeddings(
- positive_embeddings, compute_text_embeddings([positive_query])
- )
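-        # Score every image against each positive query: centre each query's scores on its median,
-        # rescale by its maximum, then keep the per-image minimum so results must match all queries.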
- dot_product = embeddings[k] @ positive_embeddings.T
- dot_product = dot_product - np.median(dot_product, axis=0)
- dot_product = dot_product / np.max(dot_product, axis=0, keepdims=True)
- dot_product = np.min(dot_product, axis=1)
-
- if len(splitted_query) > 1:
- negative_queries = (" ".join(splitted_query[1:])).split(";")
- negative_embeddings = compute_text_embeddings(negative_queries)
- dot_product2 = embeddings[k] @ negative_embeddings.T
- dot_product2 = dot_product2 - np.median(dot_product2, axis=0)
- dot_product2 = dot_product2 / np.max(dot_product2, axis=0, keepdims=True)
- dot_product -= np.max(np.maximum(dot_product2, 0), axis=1)
-
- results = np.argsort(dot_product)[-1 : -n_results - 1 : -1]
- return [
- (
- df[k].iloc[i]["path"],
- df[k].iloc[i]["tooltip"] + source[k],
- i,
- )
- for i in results
- ]
-
-
-description = """
-# Semantic image search
-**Enter your query and hit enter**
-"""
-
-howto = """
-- Click image to find similar images
-- Use "**;**" to combine multiple queries
-- Use "**EXCLUDING**" to exclude a query
-"""
-
-
-def main():
- st.markdown(
- """
- """,
- unsafe_allow_html=True,
- )
- st.sidebar.markdown(description)
- with st.sidebar.expander("Advanced use"):
- st.markdown(howto)
-
-
- st.sidebar.markdown(f"Try these test prompts: orange, blue, beach, lighthouse, mountain, sunset, parade")
- st.sidebar.markdown(f"Unsplash has categories that match: backgrounds, photos, nature, iphone, etc")
- st.sidebar.markdown(f"Unsplash images contain animals, apps, events, feelings, food, travel, nature, people, religion, sports, things, stock")
- st.sidebar.markdown(f"Unsplash things include flag, tree, clock, money, tattoo, arrow, book, car, fireworks, ghost, health, kiss, dance, balloon, crown, eye, house, music, airplane, lighthouse, typewriter, toys")
- st.sidebar.markdown(f"unsplash feelings include funny, heart, love, cool, congratulations, love, scary, cute, friendship, inspirational, hug, sad, cursed, beautiful, crazy, respect, transformation, peaceful, happy")
- st.sidebar.markdown(f"unsplash people contain baby, life, women, family, girls, pregnancy, society, old people, musician, attractive, bohemian")
- st.sidebar.markdown(f"imagenet queries include: photo of, photo of many, sculpture of, rendering of, graffiti of, tattoo of, embroidered, drawing of, plastic, black and white, painting, video game, doodle, origami, sketch, etc")
-
-
- _, c, _ = st.columns((1, 3, 1))
- if "query" in st.session_state:
- query = c.text_input("", value=st.session_state["query"])
- else:
-
- query = c.text_input("", value="lighthouse")
- corpus = st.radio("", ["Unsplash"])
- #corpus = st.radio("", ["Unsplash", "Movies"])
- if len(query) > 0:
- results = image_search(query, corpus)
- clicked = clickable_images(
- [result[0] for result in results],
- titles=[result[1] for result in results],
- div_style={
- "display": "flex",
- "justify-content": "center",
- "flex-wrap": "wrap",
- },
- img_style={"margin": "2px", "height": "200px"},
- )
- if clicked >= 0:
- change_query = False
- if "last_clicked" not in st.session_state:
- change_query = True
- else:
- if clicked != st.session_state["last_clicked"]:
- change_query = True
- if change_query:
- st.session_state["query"] = f"[{corpus}:{results[clicked][2]}]"
- st.experimental_rerun()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/ASJMO/freegpt/client/css/settings.css b/spaces/ASJMO/freegpt/client/css/settings.css
deleted file mode 100644
index 0a409f27d6d185c90ae76d95f64b457e140ae8d9..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/client/css/settings.css
+++ /dev/null
@@ -1,44 +0,0 @@
-.settings-container {
- color: var(--colour-2);
- margin: 24px 0px 8px 0px;
- justify-content: center;
-}
-
-.settings-container span {
- font-size: 0.875rem;
- margin: 0;
-}
-
-.settings-container label {
- width: 24px;
- height: 16px;
-}
-
-.settings-container .field {
- justify-content: space-between;
-}
-
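-/* Checkbox toggle: checking the input swaps the track and knob colours and slides the knob to the right edge */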
-.settings-container .checkbox input + label,
-.settings-container .checkbox input:checked + label:after {
- background: var(--colour-1);
-}
-
-.settings-container .checkbox input + label:after,
-.settings-container .checkbox input:checked + label {
- background: var(--colour-3);
-}
-
-.settings-container .checkbox label:after {
- left: 2px;
- width: 10px;
- height: 10px;
-}
-
-.settings-container .checkbox input:checked + label:after {
- left: calc(100% - 2px - 10px);
-}
-
-.settings-container .dropdown {
- padding: 4px 8px;
- font-size: 0.75rem;
-}
diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/message.css b/spaces/AchyuthGamer/OpenGPT/client/css/message.css
deleted file mode 100644
index 64e04147ee4d1e76dda4f39c4f756c9da63e3874..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/client/css/message.css
+++ /dev/null
@@ -1,65 +0,0 @@
-.message {
- width: 100%;
- overflow-wrap: break-word;
- display: flex;
- gap: var(--section-gap);
- padding: var(--section-gap);
- padding-bottom: 0;
-}
-
-.message:last-child {
- animation: 0.6s show_message;
-}
-
-@keyframes show_message {
- from {
- transform: translateY(10px);
- opacity: 0;
- }
-}
-
-.message .avatar-container img {
- max-width: 48px;
- max-height: 48px;
- box-shadow: 0.4px 0.5px 0.7px -2px rgba(0, 0, 0, 0.08), 1.1px 1.3px 2px -2px rgba(0, 0, 0, 0.041),
- 2.7px 3px 4.8px -2px rgba(0, 0, 0, 0.029), 9px 10px 16px -2px rgba(0, 0, 0, 0.022);
-}
-
-.message .content {
- display: flex;
- flex-direction: column;
- width: 90%;
- gap: 18px;
-}
-
-.message .content p,
-.message .content li,
-.message .content code {
- font-size: 1rem;
- line-height: 1.3;
-}
-
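-/* Compact layout for short viewports: tighter padding, smaller avatars and text */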
-@media screen and (max-height: 720px) {
- .message {
- padding: 12px;
- gap: 0;
- }
-
- .message .content {
- margin-left: 8px;
- width: 80%;
- }
-
- .message .avatar-container img {
- max-width: 32px;
- max-height: 32px;
- }
-
- .message .content,
- .message .content p,
- .message .content li,
- .message .content code {
- font-size: 0.875rem;
- line-height: 1.3;
- }
-}
diff --git a/spaces/Adapter/T2I-Adapter/ldm/data/utils.py b/spaces/Adapter/T2I-Adapter/ldm/data/utils.py
deleted file mode 100644
index 7ece8c92b4aca12d6c65908900460cc4beaf522e..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/data/utils.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import cv2
-import numpy as np
-from torchvision.transforms import transforms
-from torchvision.transforms.functional import to_tensor
-from transformers import CLIPProcessor
-
-from basicsr.utils import img2tensor
-
-
-class AddCannyFreezeThreshold(object):
-
- def __init__(self, low_threshold=100, high_threshold=200):
- self.low_threshold = low_threshold
- self.high_threshold = high_threshold
-
- def __call__(self, sample):
- # sample['jpg'] is PIL image
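-        # Add a single-channel Canny edge map (scaled to [0, 1]) to the sample under the 'canny' key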
- x = sample['jpg']
- img = cv2.cvtColor(np.array(x), cv2.COLOR_RGB2BGR)
- canny = cv2.Canny(img, self.low_threshold, self.high_threshold)[..., None]
- sample['canny'] = img2tensor(canny, bgr2rgb=True, float32=True) / 255.
- sample['jpg'] = to_tensor(x)
- return sample
-
-
-class AddStyle(object):
-
- def __init__(self, version):
- self.processor = CLIPProcessor.from_pretrained(version)
- self.pil_to_tensor = transforms.ToTensor()
-
- def __call__(self, sample):
- # sample['jpg'] is PIL image
- x = sample['jpg']
- style = self.processor(images=x, return_tensors="pt")['pixel_values'][0]
- sample['style'] = style
- sample['jpg'] = to_tensor(x)
- return sample
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/spiralcurve-plugin.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/spiralcurve-plugin.d.ts
deleted file mode 100644
index 1f1e5ce088c41839d7c859168f5ee7628dc0f161..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/spiralcurve-plugin.d.ts
+++ /dev/null
@@ -1,15 +0,0 @@
-import SpiralCurve from './spiralcurve';
-
-export default class SpiralCurvePlugin extends Phaser.Plugins.BasePlugin {
- add(
- config?: SpiralCurve.IConfig
- ): SpiralCurve;
-
- add(
- x?: number, y?: number,
- startRadius?: number, endRadius?: number,
- startAngle?: number, endAngle?: number,
- rotation?: number
- ): SpiralCurve
-
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.js
deleted file mode 100644
index c991ae0f45c43e96fec5f19c36cef550be8d0d1a..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import Checkbox from '../../../plugins/checkbox.js';
-export default Checkbox;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/SetTransitCallbackMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/SetTransitCallbackMethods.js
deleted file mode 100644
index 570583b7c218737745351150e75375a9bb003854..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/SetTransitCallbackMethods.js
+++ /dev/null
@@ -1,32 +0,0 @@
-import GetEaseConfig from './GetEaseConfig.js';
-
-var PopUp = function (menu, duration) {
- menu.popUp(GetEaseConfig(menu.root.easeIn, menu))
-}
-
-var ScaleDown = function (menu, duration) {
- // Don't destroy here
- menu.scaleDown(GetEaseConfig(menu.root.easeOut, menu));
-}
-
-export default {
- setTransitInCallback(callback) {
- if (callback === undefined) {
- callback = PopUp;
- }
-
- this.transitInCallback = callback;
- // callback = function(gameObject, duration) {}
- return this;
- },
-
- setTransitOutCallback(callback) {
- if (callback === undefined) {
- callback = ScaleDown;
- }
-
- this.transitOutCallback = callback;
- // callback = function(gameObject, duration) {}
- return this;
- }
-}
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/inception.py b/spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/inception.py
deleted file mode 100644
index e9bd0863b457aaa40c770eaa4acbb142b18fc18b..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/inception.py
+++ /dev/null
@@ -1,323 +0,0 @@
-import logging
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchvision import models
-
-try:
- from torchvision.models.utils import load_state_dict_from_url
-except ImportError:
- from torch.utils.model_zoo import load_url as load_state_dict_from_url
-
-# Inception weights ported to Pytorch from
-# http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz
-FID_WEIGHTS_URL = 'https://github.com/mseitzer/pytorch-fid/releases/download/fid_weights/pt_inception-2015-12-05-6726825d.pth'
-
-
-LOGGER = logging.getLogger(__name__)
-
-
-class InceptionV3(nn.Module):
- """Pretrained InceptionV3 network returning feature maps"""
-
- # Index of default block of inception to return,
- # corresponds to output of final average pooling
- DEFAULT_BLOCK_INDEX = 3
-
- # Maps feature dimensionality to their output blocks indices
- BLOCK_INDEX_BY_DIM = {
- 64: 0, # First max pooling features
-        192: 1,  # Second max pooling features
- 768: 2, # Pre-aux classifier features
- 2048: 3 # Final average pooling features
- }
-
- def __init__(self,
- output_blocks=[DEFAULT_BLOCK_INDEX],
- resize_input=True,
- normalize_input=True,
- requires_grad=False,
- use_fid_inception=True):
- """Build pretrained InceptionV3
-
- Parameters
- ----------
- output_blocks : list of int
- Indices of blocks to return features of. Possible values are:
- - 0: corresponds to output of first max pooling
- - 1: corresponds to output of second max pooling
- - 2: corresponds to output which is fed to aux classifier
- - 3: corresponds to output of final average pooling
- resize_input : bool
- If true, bilinearly resizes input to width and height 299 before
- feeding input to model. As the network without fully connected
- layers is fully convolutional, it should be able to handle inputs
- of arbitrary size, so resizing might not be strictly needed
- normalize_input : bool
- If true, scales the input from range (0, 1) to the range the
- pretrained Inception network expects, namely (-1, 1)
- requires_grad : bool
- If true, parameters of the model require gradients. Possibly useful
- for finetuning the network
- use_fid_inception : bool
- If true, uses the pretrained Inception model used in Tensorflow's
- FID implementation. If false, uses the pretrained Inception model
- available in torchvision. The FID Inception model has different
- weights and a slightly different structure from torchvision's
- Inception model. If you want to compute FID scores, you are
- strongly advised to set this parameter to true to get comparable
- results.
- """
- super(InceptionV3, self).__init__()
-
- self.resize_input = resize_input
- self.normalize_input = normalize_input
- self.output_blocks = sorted(output_blocks)
- self.last_needed_block = max(output_blocks)
-
- assert self.last_needed_block <= 3, \
- 'Last possible output block index is 3'
-
- self.blocks = nn.ModuleList()
-
- if use_fid_inception:
- inception = fid_inception_v3()
- else:
- inception = models.inception_v3(pretrained=True)
-
- # Block 0: input to maxpool1
- block0 = [
- inception.Conv2d_1a_3x3,
- inception.Conv2d_2a_3x3,
- inception.Conv2d_2b_3x3,
- nn.MaxPool2d(kernel_size=3, stride=2)
- ]
- self.blocks.append(nn.Sequential(*block0))
-
- # Block 1: maxpool1 to maxpool2
- if self.last_needed_block >= 1:
- block1 = [
- inception.Conv2d_3b_1x1,
- inception.Conv2d_4a_3x3,
- nn.MaxPool2d(kernel_size=3, stride=2)
- ]
- self.blocks.append(nn.Sequential(*block1))
-
- # Block 2: maxpool2 to aux classifier
- if self.last_needed_block >= 2:
- block2 = [
- inception.Mixed_5b,
- inception.Mixed_5c,
- inception.Mixed_5d,
- inception.Mixed_6a,
- inception.Mixed_6b,
- inception.Mixed_6c,
- inception.Mixed_6d,
- inception.Mixed_6e,
- ]
- self.blocks.append(nn.Sequential(*block2))
-
- # Block 3: aux classifier to final avgpool
- if self.last_needed_block >= 3:
- block3 = [
- inception.Mixed_7a,
- inception.Mixed_7b,
- inception.Mixed_7c,
- nn.AdaptiveAvgPool2d(output_size=(1, 1))
- ]
- self.blocks.append(nn.Sequential(*block3))
-
- for param in self.parameters():
- param.requires_grad = requires_grad
-
- def forward(self, inp):
- """Get Inception feature maps
-
- Parameters
- ----------
- inp : torch.autograd.Variable
- Input tensor of shape Bx3xHxW. Values are expected to be in
- range (0, 1)
-
- Returns
- -------
- List of torch.autograd.Variable, corresponding to the selected output
- block, sorted ascending by index
- """
- outp = []
- x = inp
-
- if self.resize_input:
- x = F.interpolate(x,
- size=(299, 299),
- mode='bilinear',
- align_corners=False)
-
- if self.normalize_input:
- x = 2 * x - 1 # Scale from range (0, 1) to range (-1, 1)
-
- for idx, block in enumerate(self.blocks):
- x = block(x)
- if idx in self.output_blocks:
- outp.append(x)
-
- if idx == self.last_needed_block:
- break
-
- return outp
-
-
-def fid_inception_v3():
- """Build pretrained Inception model for FID computation
-
- The Inception model for FID computation uses a different set of weights
- and has a slightly different structure than torchvision's Inception.
-
- This method first constructs torchvision's Inception and then patches the
- necessary parts that are different in the FID Inception model.
- """
- LOGGER.info('fid_inception_v3 called')
- inception = models.inception_v3(num_classes=1008,
- aux_logits=False,
- pretrained=False)
- LOGGER.info('models.inception_v3 done')
- inception.Mixed_5b = FIDInceptionA(192, pool_features=32)
- inception.Mixed_5c = FIDInceptionA(256, pool_features=64)
- inception.Mixed_5d = FIDInceptionA(288, pool_features=64)
- inception.Mixed_6b = FIDInceptionC(768, channels_7x7=128)
- inception.Mixed_6c = FIDInceptionC(768, channels_7x7=160)
- inception.Mixed_6d = FIDInceptionC(768, channels_7x7=160)
- inception.Mixed_6e = FIDInceptionC(768, channels_7x7=192)
- inception.Mixed_7b = FIDInceptionE_1(1280)
- inception.Mixed_7c = FIDInceptionE_2(2048)
-
- LOGGER.info('fid_inception_v3 patching done')
-
- state_dict = load_state_dict_from_url(FID_WEIGHTS_URL, progress=True)
- LOGGER.info('fid_inception_v3 weights downloaded')
-
- inception.load_state_dict(state_dict)
- LOGGER.info('fid_inception_v3 weights loaded into model')
-
- return inception
-
-
-class FIDInceptionA(models.inception.InceptionA):
- """InceptionA block patched for FID computation"""
- def __init__(self, in_channels, pool_features):
- super(FIDInceptionA, self).__init__(in_channels, pool_features)
-
- def forward(self, x):
- branch1x1 = self.branch1x1(x)
-
- branch5x5 = self.branch5x5_1(x)
- branch5x5 = self.branch5x5_2(branch5x5)
-
- branch3x3dbl = self.branch3x3dbl_1(x)
- branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
- branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)
-
-        # Patch: Tensorflow's average pool does not use the padded zeros in
- # its average calculation
- branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1,
- count_include_pad=False)
- branch_pool = self.branch_pool(branch_pool)
-
- outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
- return torch.cat(outputs, 1)
-
-
-class FIDInceptionC(models.inception.InceptionC):
- """InceptionC block patched for FID computation"""
- def __init__(self, in_channels, channels_7x7):
- super(FIDInceptionC, self).__init__(in_channels, channels_7x7)
-
- def forward(self, x):
- branch1x1 = self.branch1x1(x)
-
- branch7x7 = self.branch7x7_1(x)
- branch7x7 = self.branch7x7_2(branch7x7)
- branch7x7 = self.branch7x7_3(branch7x7)
-
- branch7x7dbl = self.branch7x7dbl_1(x)
- branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl)
- branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl)
- branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl)
- branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl)
-
-        # Patch: Tensorflow's average pool does not use the padded zeros in
- # its average calculation
- branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1,
- count_include_pad=False)
- branch_pool = self.branch_pool(branch_pool)
-
- outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool]
- return torch.cat(outputs, 1)
-
-
-class FIDInceptionE_1(models.inception.InceptionE):
- """First InceptionE block patched for FID computation"""
- def __init__(self, in_channels):
- super(FIDInceptionE_1, self).__init__(in_channels)
-
- def forward(self, x):
- branch1x1 = self.branch1x1(x)
-
- branch3x3 = self.branch3x3_1(x)
- branch3x3 = [
- self.branch3x3_2a(branch3x3),
- self.branch3x3_2b(branch3x3),
- ]
- branch3x3 = torch.cat(branch3x3, 1)
-
- branch3x3dbl = self.branch3x3dbl_1(x)
- branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
- branch3x3dbl = [
- self.branch3x3dbl_3a(branch3x3dbl),
- self.branch3x3dbl_3b(branch3x3dbl),
- ]
- branch3x3dbl = torch.cat(branch3x3dbl, 1)
-
-        # Patch: Tensorflow's average pool does not use the padded zeros in
- # its average calculation
- branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1,
- count_include_pad=False)
- branch_pool = self.branch_pool(branch_pool)
-
- outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool]
- return torch.cat(outputs, 1)
-
-
-class FIDInceptionE_2(models.inception.InceptionE):
- """Second InceptionE block patched for FID computation"""
- def __init__(self, in_channels):
- super(FIDInceptionE_2, self).__init__(in_channels)
-
- def forward(self, x):
- branch1x1 = self.branch1x1(x)
-
- branch3x3 = self.branch3x3_1(x)
- branch3x3 = [
- self.branch3x3_2a(branch3x3),
- self.branch3x3_2b(branch3x3),
- ]
- branch3x3 = torch.cat(branch3x3, 1)
-
- branch3x3dbl = self.branch3x3dbl_1(x)
- branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
- branch3x3dbl = [
- self.branch3x3dbl_3a(branch3x3dbl),
- self.branch3x3dbl_3b(branch3x3dbl),
- ]
- branch3x3dbl = torch.cat(branch3x3dbl, 1)
-
- # Patch: The FID Inception model uses max pooling instead of average
- # pooling. This is likely an error in this specific Inception
- # implementation, as other Inception models use average pooling here
- # (which matches the description in the paper).
- branch_pool = F.max_pool2d(x, kernel_size=3, stride=1, padding=1)
- branch_pool = self.branch_pool(branch_pool)
-
- outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool]
- return torch.cat(outputs, 1)
diff --git a/spaces/AlexWang/lama/saicinpainting/training/visualizers/directory.py b/spaces/AlexWang/lama/saicinpainting/training/visualizers/directory.py
deleted file mode 100644
index bc42e00500c7a5b70b2cef83b03e45b5bb471ff8..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/training/visualizers/directory.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import os
-
-import cv2
-import numpy as np
-
-from saicinpainting.training.visualizers.base import BaseVisualizer, visualize_mask_and_images_batch
-from saicinpainting.utils import check_and_warn_input_range
-
-
-class DirectoryVisualizer(BaseVisualizer):
- DEFAULT_KEY_ORDER = 'image predicted_image inpainted'.split(' ')
-
- def __init__(self, outdir, key_order=DEFAULT_KEY_ORDER, max_items_in_batch=10,
- last_without_mask=True, rescale_keys=None):
- self.outdir = outdir
- os.makedirs(self.outdir, exist_ok=True)
- self.key_order = key_order
- self.max_items_in_batch = max_items_in_batch
- self.last_without_mask = last_without_mask
- self.rescale_keys = rescale_keys
-
- def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None):
- check_and_warn_input_range(batch['image'], 0, 1, 'DirectoryVisualizer target image')
- vis_img = visualize_mask_and_images_batch(batch, self.key_order, max_items=self.max_items_in_batch,
- last_without_mask=self.last_without_mask,
- rescale_keys=self.rescale_keys)
-
- vis_img = np.clip(vis_img * 255, 0, 255).astype('uint8')
-
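-        # Save the visualisation to <outdir>/epoch<NNNN><suffix>/batch<NNNNNNN>[_r<rank>].jpg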
- curoutdir = os.path.join(self.outdir, f'epoch{epoch_i:04d}{suffix}')
- os.makedirs(curoutdir, exist_ok=True)
- rank_suffix = f'_r{rank}' if rank is not None else ''
- out_fname = os.path.join(curoutdir, f'batch{batch_i:07d}{rank_suffix}.jpg')
-
- vis_img = cv2.cvtColor(vis_img, cv2.COLOR_RGB2BGR)
- cv2.imwrite(out_fname, vis_img)
diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/numbers.py b/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/numbers.py
deleted file mode 100644
index abe5738fba1f11e21b2c44df0712128090ddfdfb..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/numbers.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# number expansion is not that easy
-import re
-
-import inflect
-
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
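-    # Numbers between 1001 and 2999 are read as years (e.g. 1984 -> 'nineteen eighty-four'), with special cases for 2000-2009 and even hundreds.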
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(
- num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- """ Normalize numbers in English text.
- """
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
diff --git a/spaces/Andres99/Tune-A-Video-Training-UI/app.py b/spaces/Andres99/Tune-A-Video-Training-UI/app.py
deleted file mode 100644
index 3e0b9a282fc42c71e6c0f8d7f238a79a9c53c697..0000000000000000000000000000000000000000
--- a/spaces/Andres99/Tune-A-Video-Training-UI/app.py
+++ /dev/null
@@ -1,84 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-from subprocess import getoutput
-
-import gradio as gr
-import torch
-
-from app_inference import create_inference_demo
-from app_training import create_training_demo
-from app_upload import create_upload_demo
-from inference import InferencePipeline
-from trainer import Trainer
-
-TITLE = '# [Tune-A-Video](https://tuneavideo.github.io/) UI'
-
-ORIGINAL_SPACE_ID = 'Tune-A-Video-library/Tune-A-Video-Training-UI'
-SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)
-GPU_DATA = getoutput('nvidia-smi')
-SHARED_UI_WARNING = f'''## Attention - Training doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU.
-
-
-'''
-
-if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID:
- SETTINGS = f'Settings'
-else:
- SETTINGS = 'Settings'
-
-INVALID_GPU_WARNING = f'''## Attention - the specified GPU is invalid. Training may not work. Make sure you have selected a `T4 GPU` for this task.'''
-
-CUDA_NOT_AVAILABLE_WARNING = f'''## Attention - Running on CPU.
-
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-You can use "T4 small/medium" to run this demo.
-
-'''
-
-HF_TOKEN_NOT_SPECIFIED_WARNING = f'''The environment variable `HF_TOKEN` is not specified. Feel free to specify your Hugging Face token with write permission if you don't want to manually provide it for every run.
-
-You can check and create your Hugging Face tokens here.
-You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
-
-'''
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-pipe = InferencePipeline(HF_TOKEN)
-trainer = Trainer(HF_TOKEN)
-
-with gr.Blocks(css='style.css') as demo:
- if SPACE_ID == ORIGINAL_SPACE_ID:
- show_warning(SHARED_UI_WARNING)
- elif not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
-    elif 'T4' not in GPU_DATA:
- show_warning(INVALID_GPU_WARNING)
-
- gr.Markdown(TITLE)
- with gr.Tabs():
- with gr.TabItem('Train'):
- create_training_demo(trainer, pipe)
- with gr.TabItem('Run'):
- create_inference_demo(pipe, HF_TOKEN)
- with gr.TabItem('Upload'):
- gr.Markdown('''
-            - You can use this tab to upload models later if you chose not to upload them during training, or if the upload failed during training.
- ''')
- create_upload_demo(HF_TOKEN)
-
- if not HF_TOKEN:
- show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
-
-demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_text_interpolation.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_text_interpolation.py
deleted file mode 100644
index 290f45317004182a6aeb0701c42d0fa65899c1ed..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_text_interpolation.py
+++ /dev/null
@@ -1,573 +0,0 @@
-import inspect
-from typing import List, Optional, Tuple, Union
-
-import torch
-from torch.nn import functional as F
-from transformers import CLIPTextModelWithProjection, CLIPTokenizer
-from transformers.models.clip.modeling_clip import CLIPTextModelOutput
-
-from diffusers import (
- DiffusionPipeline,
- ImagePipelineOutput,
- PriorTransformer,
- UnCLIPScheduler,
- UNet2DConditionModel,
- UNet2DModel,
-)
-from diffusers.pipelines.unclip import UnCLIPTextProjModel
-from diffusers.utils import is_accelerate_available, logging, randn_tensor
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def slerp(val, low, high):
- """
- Find the interpolation point between the 'low' and 'high' values for the given 'val'. See https://en.wikipedia.org/wiki/Slerp for more details on the topic.
- """
- low_norm = low / torch.norm(low)
- high_norm = high / torch.norm(high)
- omega = torch.acos((low_norm * high_norm))
- so = torch.sin(omega)
- res = (torch.sin((1.0 - val) * omega) / so) * low + (torch.sin(val * omega) / so) * high
- return res
-
-
-class UnCLIPTextInterpolationPipeline(DiffusionPipeline):
-
- """
- Pipeline for prompt-to-prompt interpolation on CLIP text embeddings and using the UnCLIP / Dall-E to decode them to images.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- text_encoder ([`CLIPTextModelWithProjection`]):
- Frozen text-encoder.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- prior ([`PriorTransformer`]):
-            The canonical unCLIP prior to approximate the image embedding from the text embedding.
- text_proj ([`UnCLIPTextProjModel`]):
- Utility class to prepare and combine the embeddings before they are passed to the decoder.
- decoder ([`UNet2DConditionModel`]):
- The decoder to invert the image embedding into an image.
- super_res_first ([`UNet2DModel`]):
- Super resolution unet. Used in all but the last step of the super resolution diffusion process.
- super_res_last ([`UNet2DModel`]):
- Super resolution unet. Used in the last step of the super resolution diffusion process.
- prior_scheduler ([`UnCLIPScheduler`]):
- Scheduler used in the prior denoising process. Just a modified DDPMScheduler.
- decoder_scheduler ([`UnCLIPScheduler`]):
- Scheduler used in the decoder denoising process. Just a modified DDPMScheduler.
- super_res_scheduler ([`UnCLIPScheduler`]):
- Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler.
-
- """
-
- prior: PriorTransformer
- decoder: UNet2DConditionModel
- text_proj: UnCLIPTextProjModel
- text_encoder: CLIPTextModelWithProjection
- tokenizer: CLIPTokenizer
- super_res_first: UNet2DModel
- super_res_last: UNet2DModel
-
- prior_scheduler: UnCLIPScheduler
- decoder_scheduler: UnCLIPScheduler
- super_res_scheduler: UnCLIPScheduler
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.__init__
- def __init__(
- self,
- prior: PriorTransformer,
- decoder: UNet2DConditionModel,
- text_encoder: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- text_proj: UnCLIPTextProjModel,
- super_res_first: UNet2DModel,
- super_res_last: UNet2DModel,
- prior_scheduler: UnCLIPScheduler,
- decoder_scheduler: UnCLIPScheduler,
- super_res_scheduler: UnCLIPScheduler,
- ):
- super().__init__()
-
- self.register_modules(
- prior=prior,
- decoder=decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- text_proj=text_proj,
- super_res_first=super_res_first,
- super_res_last=super_res_last,
- prior_scheduler=prior_scheduler,
- decoder_scheduler=decoder_scheduler,
- super_res_scheduler=super_res_scheduler,
- )
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
- def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
- latents = latents.to(device)
-
- latents = latents * scheduler.init_noise_sigma
- return latents
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._encode_prompt
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None,
- text_attention_mask: Optional[torch.Tensor] = None,
- ):
- if text_model_output is None:
- batch_size = len(prompt) if isinstance(prompt, list) else 1
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- text_mask = text_inputs.attention_mask.bool().to(device)
-
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
-
- text_encoder_output = self.text_encoder(text_input_ids.to(device))
-
- prompt_embeds = text_encoder_output.text_embeds
- text_encoder_hidden_states = text_encoder_output.last_hidden_state
-
- else:
- batch_size = text_model_output[0].shape[0]
- prompt_embeds, text_encoder_hidden_states = text_model_output[0], text_model_output[1]
- text_mask = text_attention_mask
-
- prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
- text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
- text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- if do_classifier_free_guidance:
- uncond_tokens = [""] * batch_size
-
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_text_mask = uncond_input.attention_mask.bool().to(device)
- negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
-
- negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
- uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
-
- seq_len = uncond_text_encoder_hidden_states.shape[1]
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
- batch_size * num_images_per_prompt, seq_len, -1
- )
- uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- # done duplicates
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
- text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
-
- text_mask = torch.cat([uncond_text_mask, text_mask])
-
- return prompt_embeds, text_encoder_hidden_states, text_mask
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.enable_sequential_cpu_offload
- def enable_sequential_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's
-        models have their state dicts saved to CPU and then are moved to a `torch.device('meta')` and loaded to GPU only
- when their specific submodule has its `forward` method called.
- """
- if is_accelerate_available():
- from accelerate import cpu_offload
- else:
- raise ImportError("Please install accelerate via `pip install accelerate`")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- # TODO: self.prior.post_process_latents is not covered by the offload hooks, so it fails if added to the list
- models = [
- self.decoder,
- self.text_proj,
- self.text_encoder,
- self.super_res_first,
- self.super_res_last,
- ]
- for cpu_offloaded_model in models:
- if cpu_offloaded_model is not None:
- cpu_offload(cpu_offloaded_model, device)
-
- @property
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._execution_device
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if self.device != torch.device("meta") or not hasattr(self.decoder, "_hf_hook"):
- return self.device
- for module in self.decoder.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- @torch.no_grad()
- def __call__(
- self,
- start_prompt: str,
- end_prompt: str,
- steps: int = 5,
- prior_num_inference_steps: int = 25,
- decoder_num_inference_steps: int = 25,
- super_res_num_inference_steps: int = 7,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- prior_guidance_scale: float = 4.0,
- decoder_guidance_scale: float = 8.0,
- enable_sequential_cpu_offload=True,
- gpu_id=0,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- ):
- """
- Function invoked when calling the pipeline for generation.
-
- Args:
- start_prompt (`str`):
- The prompt to start the image generation interpolation from.
- end_prompt (`str`):
- The prompt to end the image generation interpolation at.
- steps (`int`, *optional*, defaults to 5):
- The number of steps over which to interpolate from start_prompt to end_prompt. The pipeline returns
- the same number of images as this value.
- prior_num_inference_steps (`int`, *optional*, defaults to 25):
- The number of denoising steps for the prior. More denoising steps usually lead to a higher quality
- image at the expense of slower inference.
- decoder_num_inference_steps (`int`, *optional*, defaults to 25):
- The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
- image at the expense of slower inference.
- super_res_num_inference_steps (`int`, *optional*, defaults to 7):
- The number of denoising steps for super resolution. More denoising steps usually lead to a higher
- quality image at the expense of slower inference.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- prior_guidance_scale (`float`, *optional*, defaults to 4.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
-            decoder_guidance_scale (`float`, *optional*, defaults to 8.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- enable_sequential_cpu_offload (`bool`, *optional*, defaults to `True`):
- If True, offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's
-                models have their state dicts saved to CPU and then are moved to a `torch.device('meta')` and loaded to GPU only
- when their specific submodule has its `forward` method called.
- gpu_id (`int`, *optional*, defaults to `0`):
- The gpu_id to be passed to enable_sequential_cpu_offload. Only works when enable_sequential_cpu_offload is set to True.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
- """
-
- if not isinstance(start_prompt, str) or not isinstance(end_prompt, str):
- raise ValueError(
- f"`start_prompt` and `end_prompt` should be of type `str` but got {type(start_prompt)} and"
- f" {type(end_prompt)} instead"
- )
-
- if enable_sequential_cpu_offload:
- self.enable_sequential_cpu_offload(gpu_id=gpu_id)
-
- device = self._execution_device
-
- # Turn the prompts into embeddings.
- inputs = self.tokenizer(
- [start_prompt, end_prompt],
- padding="max_length",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
- inputs.to(device)
- text_model_output = self.text_encoder(**inputs)
-
- text_attention_mask = torch.max(inputs.attention_mask[0], inputs.attention_mask[1])
- text_attention_mask = torch.cat([text_attention_mask.unsqueeze(0)] * steps).to(device)
-
- # Interpolate from the start to end prompt using slerp and add the generated images to an image output pipeline
- batch_text_embeds = []
- batch_last_hidden_state = []
-
- for interp_val in torch.linspace(0, 1, steps):
- text_embeds = slerp(interp_val, text_model_output.text_embeds[0], text_model_output.text_embeds[1])
- last_hidden_state = slerp(
- interp_val, text_model_output.last_hidden_state[0], text_model_output.last_hidden_state[1]
- )
- batch_text_embeds.append(text_embeds.unsqueeze(0))
- batch_last_hidden_state.append(last_hidden_state.unsqueeze(0))
-
- batch_text_embeds = torch.cat(batch_text_embeds)
- batch_last_hidden_state = torch.cat(batch_last_hidden_state)
-
- text_model_output = CLIPTextModelOutput(
- text_embeds=batch_text_embeds, last_hidden_state=batch_last_hidden_state
- )
-
- batch_size = text_model_output[0].shape[0]
-
- do_classifier_free_guidance = prior_guidance_scale > 1.0 or decoder_guidance_scale > 1.0
-
- prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
- prompt=None,
- device=device,
- num_images_per_prompt=1,
- do_classifier_free_guidance=do_classifier_free_guidance,
- text_model_output=text_model_output,
- text_attention_mask=text_attention_mask,
- )
-
- # prior
-
- self.prior_scheduler.set_timesteps(prior_num_inference_steps, device=device)
- prior_timesteps_tensor = self.prior_scheduler.timesteps
-
- embedding_dim = self.prior.config.embedding_dim
-
- prior_latents = self.prepare_latents(
- (batch_size, embedding_dim),
- prompt_embeds.dtype,
- device,
- generator,
- None,
- self.prior_scheduler,
- )
-
- for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([prior_latents] * 2) if do_classifier_free_guidance else prior_latents
-
- predicted_image_embedding = self.prior(
- latent_model_input,
- timestep=t,
- proj_embedding=prompt_embeds,
- encoder_hidden_states=text_encoder_hidden_states,
- attention_mask=text_mask,
- ).predicted_image_embedding
-
- if do_classifier_free_guidance:
- predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2)
- predicted_image_embedding = predicted_image_embedding_uncond + prior_guidance_scale * (
- predicted_image_embedding_text - predicted_image_embedding_uncond
- )
-
- if i + 1 == prior_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = prior_timesteps_tensor[i + 1]
-
- prior_latents = self.prior_scheduler.step(
- predicted_image_embedding,
- timestep=t,
- sample=prior_latents,
- generator=generator,
- prev_timestep=prev_timestep,
- ).prev_sample
-
- prior_latents = self.prior.post_process_latents(prior_latents)
-
- image_embeddings = prior_latents
-
- # done prior
-
- # decoder
-
- text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj(
- image_embeddings=image_embeddings,
- prompt_embeds=prompt_embeds,
- text_encoder_hidden_states=text_encoder_hidden_states,
- do_classifier_free_guidance=do_classifier_free_guidance,
- )
-
- if device.type == "mps":
- # HACK: MPS: There is a panic when padding bool tensors,
- # so cast to int tensor for the pad and back to bool afterwards
- text_mask = text_mask.type(torch.int)
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1)
- decoder_text_mask = decoder_text_mask.type(torch.bool)
- else:
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True)
-
- self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device)
- decoder_timesteps_tensor = self.decoder_scheduler.timesteps
-
- num_channels_latents = self.decoder.config.in_channels
- height = self.decoder.config.sample_size
- width = self.decoder.config.sample_size
-
- decoder_latents = self.prepare_latents(
- (batch_size, num_channels_latents, height, width),
- text_encoder_hidden_states.dtype,
- device,
- generator,
- None,
- self.decoder_scheduler,
- )
-
- for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents
-
- noise_pred = self.decoder(
- sample=latent_model_input,
- timestep=t,
- encoder_hidden_states=text_encoder_hidden_states,
- class_labels=additive_clip_time_embeddings,
- attention_mask=decoder_text_mask,
- ).sample
-
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1)
- noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1)
- noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond)
- noise_pred = torch.cat([noise_pred, predicted_variance], dim=1)
-
- if i + 1 == decoder_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = decoder_timesteps_tensor[i + 1]
-
- # compute the previous noisy sample x_t -> x_t-1
- decoder_latents = self.decoder_scheduler.step(
- noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator
- ).prev_sample
-
- decoder_latents = decoder_latents.clamp(-1, 1)
-
- image_small = decoder_latents
-
- # done decoder
-
- # super res
-
- self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device)
- super_res_timesteps_tensor = self.super_res_scheduler.timesteps
-
- channels = self.super_res_first.config.in_channels // 2
- height = self.super_res_first.config.sample_size
- width = self.super_res_first.config.sample_size
-
- super_res_latents = self.prepare_latents(
- (batch_size, channels, height, width),
- image_small.dtype,
- device,
- generator,
- None,
- self.super_res_scheduler,
- )
-
- if device.type == "mps":
- # MPS does not support many interpolations
- image_upscaled = F.interpolate(image_small, size=[height, width])
- else:
- interpolate_antialias = {}
- if "antialias" in inspect.signature(F.interpolate).parameters:
- interpolate_antialias["antialias"] = True
-
- image_upscaled = F.interpolate(
- image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias
- )
-
- for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)):
- # no classifier free guidance
-
- if i == super_res_timesteps_tensor.shape[0] - 1:
- unet = self.super_res_last
- else:
- unet = self.super_res_first
-
- latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1)
-
- noise_pred = unet(
- sample=latent_model_input,
- timestep=t,
- ).sample
-
- if i + 1 == super_res_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = super_res_timesteps_tensor[i + 1]
-
- # compute the previous noisy sample x_t -> x_t-1
- super_res_latents = self.super_res_scheduler.step(
- noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator
- ).prev_sample
-
- image = super_res_latents
- # done super res
-
- # post processing
-
- image = image * 0.5 + 0.5
- image = image.clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/detr.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/detr.py
deleted file mode 100644
index 5ff82a280daa0a015f662bdf2509fa11542d46d4..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/detr.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from mmdet.core import bbox2result
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class DETR(SingleStageDetector):
- r"""Implementation of `DETR: End-to-End Object Detection with
-    Transformers <https://arxiv.org/abs/2005.12872>`_"""
-
- def __init__(self,
- backbone,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(DETR, self).__init__(backbone, None, bbox_head, train_cfg,
- test_cfg, pretrained)
-
- def simple_test(self, img, img_metas, rescale=False):
- """Test function without test time augmentation.
-
- Args:
- imgs (list[torch.Tensor]): List of multiple images
- img_metas (list[dict]): List of image information.
- rescale (bool, optional): Whether to rescale the results.
- Defaults to False.
-
- Returns:
- list[list[np.ndarray]]: BBox results of each image and classes.
- The outer list corresponds to each image. The inner list
- corresponds to each class.
- """
- batch_size = len(img_metas)
- assert batch_size == 1, 'Currently only batch_size 1 for inference ' \
- f'mode is supported. Found batch_size {batch_size}.'
- x = self.extract_feat(img)
- outs = self.bbox_head(x, img_metas)
- bbox_list = self.bbox_head.get_bboxes(
- *outs, img_metas, rescale=rescale)
-
- bbox_results = [
- bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
- for det_bboxes, det_labels in bbox_list
- ]
- return bbox_results
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 5a1d29e480cb46a763cb17d2105b3f040153d417..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './fcn_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://resnet18_v1c',
- backbone=dict(depth=18),
- decode_head=dict(
- in_channels=512,
- channels=128,
- ),
- auxiliary_head=dict(in_channels=256, channels=64))
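-# Hedged usage note (not part of the original config): with standard
-# MMSegmentation tooling this config would typically be launched as, e.g.,
-#   python tools/train.py configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py
-# with everything not overridden here inherited from the ResNet-50 base config.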
diff --git a/spaces/Andy1621/uniformerv2_demo/uniformerv2.py b/spaces/Andy1621/uniformerv2_demo/uniformerv2.py
deleted file mode 100644
index 5ca7c3d511f4e3c2c8c6e89ace89e2ad8680d34f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformerv2_demo/uniformerv2.py
+++ /dev/null
@@ -1,510 +0,0 @@
-#!/usr/bin/env python
-import os
-from collections import OrderedDict
-
-from timm.models.layers import DropPath
-import torch
-from torch import nn
-from torch.nn import MultiheadAttention
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-
-
-MODEL_PATH = './'
-_MODELS = {
- "ViT-B/16": os.path.join(MODEL_PATH, "vit_b16.pth"),
- "ViT-L/14": os.path.join(MODEL_PATH, "vit_l14.pth"),
- "ViT-L/14_336": os.path.join(MODEL_PATH, "vit_l14_336.pth"),
-}
-
-
-class LayerNorm(nn.LayerNorm):
- """Subclass torch's LayerNorm to handle fp16."""
-
- def forward(self, x):
- orig_type = x.dtype
- ret = super().forward(x.type(torch.float32))
- return ret.type(orig_type)
-
-
-class QuickGELU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(1.702 * x)
-
-
-class Local_MHRA(nn.Module):
- def __init__(self, d_model, dw_reduction=1.5, pos_kernel_size=3):
- super().__init__()
-
- padding = pos_kernel_size // 2
- re_d_model = int(d_model // dw_reduction)
- self.pos_embed = nn.Sequential(
- nn.BatchNorm3d(d_model),
- nn.Conv3d(d_model, re_d_model, kernel_size=1, stride=1, padding=0),
- nn.Conv3d(re_d_model, re_d_model, kernel_size=(pos_kernel_size, 1, 1), stride=(1, 1, 1), padding=(padding, 0, 0), groups=re_d_model),
- nn.Conv3d(re_d_model, d_model, kernel_size=1, stride=1, padding=0),
- )
-
- # init zero
- print('Init zero for Conv in pos_emb')
- nn.init.constant_(self.pos_embed[3].weight, 0)
- nn.init.constant_(self.pos_embed[3].bias, 0)
-
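-    # The forward pass below expects a 5-D video tensor of shape (N, C, T, H, W);
-    # the depthwise temporal convolution yields a local relation term that the
-    # calling block adds back to its input residually.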
- def forward(self, x):
- return self.pos_embed(x)
-
-
-class ResidualAttentionBlock(nn.Module):
- def __init__(
- self, d_model, n_head, attn_mask=None, drop_path=0.0,
- dw_reduction=1.5, no_lmhra=False, double_lmhra=True
- ):
- super().__init__()
-
- self.n_head = n_head
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- print(f'Drop path rate: {drop_path}')
-
- self.no_lmhra = no_lmhra
- self.double_lmhra = double_lmhra
- print(f'No L_MHRA: {no_lmhra}')
- print(f'Double L_MHRA: {double_lmhra}')
- if not no_lmhra:
- self.lmhra1 = Local_MHRA(d_model, dw_reduction=dw_reduction)
- if double_lmhra:
- self.lmhra2 = Local_MHRA(d_model, dw_reduction=dw_reduction)
-
- # spatial
- self.attn = MultiheadAttention(d_model, n_head)
- self.ln_1 = LayerNorm(d_model)
- self.mlp = nn.Sequential(OrderedDict([
- ("c_fc", nn.Linear(d_model, d_model * 4)),
- ("gelu", QuickGELU()),
- ("c_proj", nn.Linear(d_model * 4, d_model))
- ]))
- self.ln_2 = LayerNorm(d_model)
- self.attn_mask = attn_mask
-
- def attention(self, x):
- self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
- return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
-
- def forward(self, x, T=8, use_checkpoint=False):
- # x: 1+HW, NT, C
- if not self.no_lmhra:
- # Local MHRA
- tmp_x = x[1:, :, :]
- L, NT, C = tmp_x.shape
- N = NT // T
- H = W = int(L ** 0.5)
- tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous()
- tmp_x = tmp_x + self.drop_path(self.lmhra1(tmp_x))
- tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C)
- x = torch.cat([x[:1, :, :], tmp_x], dim=0)
- # MHSA
- if use_checkpoint:
- attn_out = checkpoint.checkpoint(self.attention, self.ln_1(x))
- x = x + self.drop_path(attn_out)
- else:
- x = x + self.drop_path(self.attention(self.ln_1(x)))
- # Local MHRA
- if not self.no_lmhra and self.double_lmhra:
- tmp_x = x[1:, :, :]
- tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous()
- tmp_x = tmp_x + self.drop_path(self.lmhra2(tmp_x))
- tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C)
- x = torch.cat([x[:1, :, :], tmp_x], dim=0)
- # FFN
- if use_checkpoint:
- mlp_out = checkpoint.checkpoint(self.mlp, self.ln_2(x))
- x = x + self.drop_path(mlp_out)
- else:
- x = x + self.drop_path(self.mlp(self.ln_2(x)))
- return x
-
-
-class Extractor(nn.Module):
- def __init__(
- self, d_model, n_head, attn_mask=None,
- mlp_factor=4.0, dropout=0.0, drop_path=0.0,
- ):
- super().__init__()
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- print(f'Drop path rate: {drop_path}')
- self.attn = nn.MultiheadAttention(d_model, n_head)
- self.ln_1 = nn.LayerNorm(d_model)
- d_mlp = round(mlp_factor * d_model)
- self.mlp = nn.Sequential(OrderedDict([
- ("c_fc", nn.Linear(d_model, d_mlp)),
- ("gelu", QuickGELU()),
- ("dropout", nn.Dropout(dropout)),
- ("c_proj", nn.Linear(d_mlp, d_model))
- ]))
- self.ln_2 = nn.LayerNorm(d_model)
- self.ln_3 = nn.LayerNorm(d_model)
- self.attn_mask = attn_mask
-
- # zero init
- nn.init.xavier_uniform_(self.attn.in_proj_weight)
- nn.init.constant_(self.attn.out_proj.weight, 0.)
- nn.init.constant_(self.attn.out_proj.bias, 0.)
- nn.init.xavier_uniform_(self.mlp[0].weight)
- nn.init.constant_(self.mlp[-1].weight, 0.)
- nn.init.constant_(self.mlp[-1].bias, 0.)
-
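-    # Cross-attention unpacked by hand from nn.MultiheadAttention's packed
-    # in_proj parameters: queries come from x (the learned class token) and
-    # keys/values from y (the backbone tokens), so the extractor lets the
-    # class token aggregate information from the video features.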
- def attention(self, x, y):
- d_model = self.ln_1.weight.size(0)
- q = (x @ self.attn.in_proj_weight[:d_model].T) + self.attn.in_proj_bias[:d_model]
-
- k = (y @ self.attn.in_proj_weight[d_model:-d_model].T) + self.attn.in_proj_bias[d_model:-d_model]
- v = (y @ self.attn.in_proj_weight[-d_model:].T) + self.attn.in_proj_bias[-d_model:]
- Tx, Ty, N = q.size(0), k.size(0), q.size(1)
- q = q.view(Tx, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3)
- k = k.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3)
- v = v.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3)
- aff = (q @ k.transpose(-2, -1) / (self.attn.head_dim ** 0.5))
-
- aff = aff.softmax(dim=-1)
- out = aff @ v
- out = out.permute(2, 0, 1, 3).flatten(2)
- out = self.attn.out_proj(out)
- return out
-
- def forward(self, x, y):
- x = x + self.drop_path(self.attention(self.ln_1(x), self.ln_3(y)))
- x = x + self.drop_path(self.mlp(self.ln_2(x)))
- return x
-
-
-class Transformer(nn.Module):
- def __init__(
- self, width, layers, heads, attn_mask=None, backbone_drop_path_rate=0.,
- use_checkpoint=False, checkpoint_num=[0], t_size=8, dw_reduction=2,
- no_lmhra=False, double_lmhra=True,
- return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
- n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0.,
- mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
- cls_dropout=0.5, num_classes=400,
- ):
- super().__init__()
- self.T = t_size
- self.return_list = return_list
- # backbone
- b_dpr = [x.item() for x in torch.linspace(0, backbone_drop_path_rate, layers)]
- self.resblocks = nn.ModuleList([
- ResidualAttentionBlock(
- width, heads, attn_mask,
- drop_path=b_dpr[i],
- dw_reduction=dw_reduction,
- no_lmhra=no_lmhra,
- double_lmhra=double_lmhra,
- ) for i in range(layers)
- ])
- # checkpoint
- self.use_checkpoint = use_checkpoint
- self.checkpoint_num = checkpoint_num
- self.n_layers = n_layers
- print(f'Use checkpoint: {self.use_checkpoint}')
- print(f'Checkpoint number: {self.checkpoint_num}')
-
- # global block
- assert n_layers == len(return_list)
- if n_layers > 0:
- self.temporal_cls_token = nn.Parameter(torch.zeros(1, 1, n_dim))
- self.dpe = nn.ModuleList([
- nn.Conv3d(n_dim, n_dim, kernel_size=3, stride=1, padding=1, bias=True, groups=n_dim)
- for i in range(n_layers)
- ])
- for m in self.dpe:
- nn.init.constant_(m.bias, 0.)
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, n_layers)]
- self.dec = nn.ModuleList([
- Extractor(
- n_dim, n_head, mlp_factor=mlp_factor,
- dropout=mlp_dropout[i], drop_path=dpr[i],
- ) for i in range(n_layers)
- ])
- self.balance = nn.Parameter(torch.zeros((n_dim)))
- self.sigmoid = nn.Sigmoid()
- # projection
- self.proj = nn.Sequential(
- nn.LayerNorm(n_dim),
- nn.Dropout(cls_dropout),
- nn.Linear(n_dim, num_classes),
- )
-
- def forward(self, x):
- T_down = self.T
- L, NT, C = x.shape
- N = NT // T_down
- H = W = int((L - 1) ** 0.5)
-
- if self.n_layers > 0:
- cls_token = self.temporal_cls_token.repeat(1, N, 1)
-
- j = -1
- for i, resblock in enumerate(self.resblocks):
- if self.use_checkpoint and i < self.checkpoint_num[0]:
- x = resblock(x, self.T, use_checkpoint=True)
- else:
- x = resblock(x, T_down)
- if i in self.return_list:
- j += 1
- tmp_x = x.clone()
- tmp_x = tmp_x.view(L, N, T_down, C)
- # dpe
- _, tmp_feats = tmp_x[:1], tmp_x[1:]
- tmp_feats = tmp_feats.permute(1, 3, 2, 0).reshape(N, C, T_down, H, W)
- tmp_feats = self.dpe[j](tmp_feats).view(N, C, T_down, L - 1).permute(3, 0, 2, 1).contiguous()
- tmp_x[1:] = tmp_x[1:] + tmp_feats
- # global block
- tmp_x = tmp_x.permute(2, 0, 1, 3).flatten(0, 1) # T * L, N, C
- cls_token = self.dec[j](cls_token, tmp_x)
-
- if self.n_layers > 0:
- weight = self.sigmoid(self.balance)
- residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C
- return self.proj((1 - weight) * cls_token[0, :, :] + weight * residual)
- else:
- residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C
- return self.proj(residual)
-
-
-class VisionTransformer(nn.Module):
- def __init__(
- self,
- # backbone
- input_resolution, patch_size, width, layers, heads, output_dim, backbone_drop_path_rate=0.,
- use_checkpoint=False, checkpoint_num=[0], t_size=8, kernel_size=3, dw_reduction=1.5,
- temporal_downsample=True,
-        no_lmhra=False, double_lmhra=True,
- # global block
- return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
- n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0.,
- mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
- cls_dropout=0.5, num_classes=400,
- ):
- super().__init__()
- self.input_resolution = input_resolution
- self.output_dim = output_dim
- padding = (kernel_size - 1) // 2
- if temporal_downsample:
- self.conv1 = nn.Conv3d(3, width, (kernel_size, patch_size, patch_size), (2, patch_size, patch_size), (padding, 0, 0), bias=False)
- t_size = t_size // 2
- else:
- self.conv1 = nn.Conv3d(3, width, (1, patch_size, patch_size), (1, patch_size, patch_size), (0, 0, 0), bias=False)
-
- scale = width ** -0.5
- self.class_embedding = nn.Parameter(scale * torch.randn(width))
- self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
- self.ln_pre = LayerNorm(width)
-
- self.transformer = Transformer(
- width, layers, heads, dw_reduction=dw_reduction,
- backbone_drop_path_rate=backbone_drop_path_rate,
- use_checkpoint=use_checkpoint, checkpoint_num=checkpoint_num, t_size=t_size,
- no_lmhra=no_lmhra, double_lmhra=double_lmhra,
- return_list=return_list, n_layers=n_layers, n_dim=n_dim, n_head=n_head,
- mlp_factor=mlp_factor, drop_path_rate=drop_path_rate, mlp_dropout=mlp_dropout,
- cls_dropout=cls_dropout, num_classes=num_classes,
- )
-
- def forward(self, x):
- x = self.conv1(x) # shape = [*, width, grid, grid]
- N, C, T, H, W = x.shape
- x = x.permute(0, 2, 3, 4, 1).reshape(N * T, H * W, C)
-
- x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
- x = x + self.positional_embedding.to(x.dtype)
- x = self.ln_pre(x)
-
- x = x.permute(1, 0, 2) # NLD -> LND
- out = self.transformer(x)
- return out
-
-
-def inflate_weight(weight_2d, time_dim, center=True):
- print(f'Init center: {center}')
- if center:
- weight_3d = torch.zeros(*weight_2d.shape)
- weight_3d = weight_3d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)
- middle_idx = time_dim // 2
- weight_3d[:, :, middle_idx, :, :] = weight_2d
- else:
- weight_3d = weight_2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)
- weight_3d = weight_3d / time_dim
- return weight_3d
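-# Illustrative example (not from the original file): a 2-D patch-embedding
-# kernel of shape (768, 3, 16, 16) inflated with time_dim=3 and center=True
-# becomes (768, 3, 3, 16, 16), with the 2-D weights placed at the middle
-# temporal index and zeros elsewhere.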
-
-
-def load_state_dict(model, state_dict):
- state_dict_3d = model.state_dict()
- for k in state_dict.keys():
- if state_dict[k].shape != state_dict_3d[k].shape:
- if len(state_dict_3d[k].shape) <= 2:
- print(f'Ignore: {k}')
- continue
- print(f'Inflate: {k}, {state_dict[k].shape} => {state_dict_3d[k].shape}')
- time_dim = state_dict_3d[k].shape[2]
- state_dict[k] = inflate_weight(state_dict[k], time_dim)
- model.load_state_dict(state_dict, strict=False)
-
-
-def uniformerv2_b16(
- pretrained=True, use_checkpoint=False, checkpoint_num=[0],
- t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0.,
- temporal_downsample=True,
- no_lmhra=False, double_lmhra=True,
- return_list=[8, 9, 10, 11],
- n_layers=4, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0.,
- mlp_dropout=[0.5, 0.5, 0.5, 0.5],
- cls_dropout=0.5, num_classes=400,
-):
- model = VisionTransformer(
- input_resolution=224,
- patch_size=16,
- width=768,
- layers=12,
- heads=12,
- output_dim=512,
- use_checkpoint=use_checkpoint,
- checkpoint_num=checkpoint_num,
- t_size=t_size,
- dw_reduction=dw_reduction,
- backbone_drop_path_rate=backbone_drop_path_rate,
- temporal_downsample=temporal_downsample,
- no_lmhra=no_lmhra,
- double_lmhra=double_lmhra,
- return_list=return_list,
- n_layers=n_layers,
- n_dim=n_dim,
- n_head=n_head,
- mlp_factor=mlp_factor,
- drop_path_rate=drop_path_rate,
- mlp_dropout=mlp_dropout,
- cls_dropout=cls_dropout,
- num_classes=num_classes,
- )
-
- if pretrained:
- print('load pretrained weights')
- state_dict = torch.load(_MODELS["ViT-B/16"], map_location='cpu')
- load_state_dict(model, state_dict)
- return model.eval()
-
-
-def uniformerv2_l14(
- pretrained=True, use_checkpoint=False, checkpoint_num=[0],
- t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0.,
- temporal_downsample=True,
- no_lmhra=False, double_lmhra=True,
- return_list=[20, 21, 22, 23],
- n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0.,
- mlp_dropout=[0.5, 0.5, 0.5, 0.5],
- cls_dropout=0.5, num_classes=400,
-):
- model = VisionTransformer(
- input_resolution=224,
- patch_size=14,
- width=1024,
- layers=24,
- heads=16,
- output_dim=768,
- use_checkpoint=use_checkpoint,
- checkpoint_num=checkpoint_num,
- t_size=t_size,
- dw_reduction=dw_reduction,
- backbone_drop_path_rate=backbone_drop_path_rate,
- temporal_downsample=temporal_downsample,
- no_lmhra=no_lmhra,
- double_lmhra=double_lmhra,
- return_list=return_list,
- n_layers=n_layers,
- n_dim=n_dim,
- n_head=n_head,
- mlp_factor=mlp_factor,
- drop_path_rate=drop_path_rate,
- mlp_dropout=mlp_dropout,
- cls_dropout=cls_dropout,
- num_classes=num_classes,
- )
-
- if pretrained:
- print('load pretrained weights')
- state_dict = torch.load(_MODELS["ViT-L/14"], map_location='cpu')
- load_state_dict(model, state_dict)
- return model.eval()
-
-
-def uniformerv2_l14_336(
- pretrained=True, use_checkpoint=False, checkpoint_num=[0],
- t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0.,
- no_temporal_downsample=True,
- no_lmhra=False, double_lmhra=True,
- return_list=[20, 21, 22, 23],
- n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0.,
- mlp_dropout=[0.5, 0.5, 0.5, 0.5],
- cls_dropout=0.5, num_classes=400,
-):
- model = VisionTransformer(
- input_resolution=336,
- patch_size=14,
- width=1024,
- layers=24,
- heads=16,
- output_dim=768,
- use_checkpoint=use_checkpoint,
- checkpoint_num=checkpoint_num,
- t_size=t_size,
- dw_reduction=dw_reduction,
- backbone_drop_path_rate=backbone_drop_path_rate,
-        temporal_downsample=not no_temporal_downsample,  # VisionTransformer takes temporal_downsample
- no_lmhra=no_lmhra,
- double_lmhra=double_lmhra,
- return_list=return_list,
- n_layers=n_layers,
- n_dim=n_dim,
- n_head=n_head,
- mlp_factor=mlp_factor,
- drop_path_rate=drop_path_rate,
- mlp_dropout=mlp_dropout,
- cls_dropout=cls_dropout,
- num_classes=num_classes,
- )
-
- if pretrained:
- print('load pretrained weights')
- state_dict = torch.load(_MODELS["ViT-L/14_336"], map_location='cpu')
- load_state_dict(model, state_dict)
- return model.eval()
-
-
-if __name__ == '__main__':
- import time
- from fvcore.nn import FlopCountAnalysis
- from fvcore.nn import flop_count_table
- import numpy as np
-
- seed = 4217
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
- num_frames = 16
-
- model = uniformerv2_l14(
- pretrained=False,
- t_size=num_frames, backbone_drop_path_rate=0., drop_path_rate=0.,
- dw_reduction=1.5,
- no_lmhra=False,
- temporal_downsample=True,
- return_list=[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
- mlp_dropout=[0.5]*16,
- n_layers=16
- )
- print(model)
-
- flops = FlopCountAnalysis(model, torch.rand(1, 3, num_frames, 224, 224))
- s = time.time()
- print(flop_count_table(flops, max_depth=1))
- print(time.time()-s)
\ No newline at end of file
diff --git a/spaces/AntiUser/DeepDanbooru_string/app.py b/spaces/AntiUser/DeepDanbooru_string/app.py
deleted file mode 100644
index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000
--- a/spaces/AntiUser/DeepDanbooru_string/app.py
+++ /dev/null
@@ -1,185 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import argparse
-import functools
-import os
-import html
-import pathlib
-import tarfile
-
-import deepdanbooru as dd
-import gradio as gr
-import huggingface_hub
-import numpy as np
-import PIL.Image
-import tensorflow as tf
-import piexif
-import piexif.helper
-
-TITLE = 'DeepDanbooru String'
-
-TOKEN = os.environ['TOKEN']
-MODEL_REPO = 'CikeyQI/DeepDanbooru_string'
-MODEL_FILENAME = 'model-resnet_custom_v3.h5'
-LABEL_FILENAME = 'tags.txt'
-
-
-def parse_args() -> argparse.Namespace:
- parser = argparse.ArgumentParser()
- parser.add_argument('--score-slider-step', type=float, default=0.05)
- parser.add_argument('--score-threshold', type=float, default=0.5)
- parser.add_argument('--theme', type=str, default='dark-grass')
- parser.add_argument('--live', action='store_true')
- parser.add_argument('--share', action='store_true')
- parser.add_argument('--port', type=int)
- parser.add_argument('--disable-queue',
- dest='enable_queue',
- action='store_false')
- parser.add_argument('--allow-flagging', type=str, default='never')
- return parser.parse_args()
-
-
-def load_sample_image_paths() -> list[pathlib.Path]:
- image_dir = pathlib.Path('images')
- if not image_dir.exists():
- dataset_repo = 'hysts/sample-images-TADNE'
- path = huggingface_hub.hf_hub_download(dataset_repo,
- 'images.tar.gz',
- repo_type='dataset',
- use_auth_token=TOKEN)
- with tarfile.open(path) as f:
- f.extractall()
- return sorted(image_dir.glob('*'))
-
-
-def load_model() -> tf.keras.Model:
- path = huggingface_hub.hf_hub_download(MODEL_REPO,
- MODEL_FILENAME,
- use_auth_token=TOKEN)
- model = tf.keras.models.load_model(path)
- return model
-
-
-def load_labels() -> list[str]:
- path = huggingface_hub.hf_hub_download(MODEL_REPO,
- LABEL_FILENAME,
- use_auth_token=TOKEN)
- with open(path) as f:
- labels = [line.strip() for line in f.readlines()]
- return labels
-
-def plaintext_to_html(text):
-    text = "<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>"
-    return text
-
-
-    for key, text in items.items():
-        info += f"""
-<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()+"\n"
-
-    if len(info) == 0:
-        message = "Nothing found in the image."
-        info = f"<div><p>{message}</p></div>"
-
-    return (a, c, res, info)
-
-
-def main():
- args = parse_args()
- model = load_model()
- labels = load_labels()
-
- func = functools.partial(predict, model=model, labels=labels)
- func = functools.update_wrapper(func, predict)
-
- gr.Interface(
- func,
- [
- gr.inputs.Image(type='pil', label='Input'),
- gr.inputs.Slider(0,
- 1,
- step=args.score_slider_step,
- default=args.score_threshold,
- label='Score Threshold'),
- ],
- [
- gr.outputs.Textbox(label='Output (string)'),
- gr.outputs.Textbox(label='Output (raw string)'),
- gr.outputs.Label(label='Output (label)'),
- gr.outputs.HTML()
- ],
- examples=[
- ['miku.jpg',0.5],
- ['miku2.jpg',0.5]
- ],
- title=TITLE,
- description='''
-Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer.
-
-Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru)
-
-PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
- ''',
- theme=args.theme,
- allow_flagging=args.allow_flagging,
- live=args.live,
- ).launch(
- enable_queue=args.enable_queue,
- server_port=args.port,
- share=args.share,
- )
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/wandb_utils.py b/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/wandb_utils.py
deleted file mode 100644
index 238f4edbf2a0ddf34c024fbb6775c71dd19e18aa..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/wandb_utils.py
+++ /dev/null
@@ -1,589 +0,0 @@
-"""Utilities and tools for tracking runs with Weights & Biases."""
-
-import logging
-import os
-import sys
-from contextlib import contextmanager
-from pathlib import Path
-from typing import Dict
-
-import yaml
-from tqdm import tqdm
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[3] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-from utils.dataloaders import LoadImagesAndLabels, img2label_paths
-from utils.general import LOGGER, check_dataset, check_file
-
-try:
- import wandb
-
- assert hasattr(wandb, '__version__') # verify package import not local dir
-except (ImportError, AssertionError):
- wandb = None
-
-RANK = int(os.getenv('RANK', -1))
-WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
-
-
-def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):
- return from_string[len(prefix):]
-
-
-def check_wandb_config_file(data_config_file):
- wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path
- if Path(wandb_config).is_file():
- return wandb_config
- return data_config_file
-
-
-def check_wandb_dataset(data_file):
- is_trainset_wandb_artifact = False
- is_valset_wandb_artifact = False
- if isinstance(data_file, dict):
- # In that case another dataset manager has already processed it and we don't have to
- return data_file
- if check_file(data_file) and data_file.endswith('.yaml'):
- with open(data_file, errors='ignore') as f:
- data_dict = yaml.safe_load(f)
- is_trainset_wandb_artifact = isinstance(data_dict['train'],
- str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX)
- is_valset_wandb_artifact = isinstance(data_dict['val'],
- str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX)
- if is_trainset_wandb_artifact or is_valset_wandb_artifact:
- return data_dict
- else:
- return check_dataset(data_file)
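-# Illustrative data.yaml entries this helper looks for (hypothetical values):
-#   train: wandb-artifact://my-team/YOLOv5/train_dataset
-#   val: wandb-artifact://my-team/YOLOv5/val_dataset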
-
-
-def get_run_info(run_path):
- run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX))
- run_id = run_path.stem
- project = run_path.parent.stem
- entity = run_path.parent.parent.stem
- model_artifact_name = 'run_' + run_id + '_model'
- return entity, project, run_id, model_artifact_name
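-# e.g. (hypothetical values) get_run_info('wandb-artifact://my-team/YOLOv5/3abc2e1')
-# returns ('my-team', 'YOLOv5', '3abc2e1', 'run_3abc2e1_model').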
-
-
-def check_wandb_resume(opt):
- process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None
- if isinstance(opt.resume, str):
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- if RANK not in [-1, 0]: # For resuming DDP runs
- entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
- api = wandb.Api()
- artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest')
- modeldir = artifact.download()
- opt.weights = str(Path(modeldir) / "last.pt")
- return True
- return None
-
-
-def process_wandb_config_ddp_mode(opt):
- with open(check_file(opt.data), errors='ignore') as f:
- data_dict = yaml.safe_load(f) # data dict
- train_dir, val_dir = None, None
- if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias)
- train_dir = train_artifact.download()
- train_path = Path(train_dir) / 'data/images/'
- data_dict['train'] = str(train_path)
-
- if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias)
- val_dir = val_artifact.download()
- val_path = Path(val_dir) / 'data/images/'
- data_dict['val'] = str(val_path)
- if train_dir or val_dir:
- ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml')
- with open(ddp_data_path, 'w') as f:
- yaml.safe_dump(data_dict, f)
- opt.data = ddp_data_path
-
-
-class WandbLogger():
- """Log training runs, datasets, models, and predictions to Weights & Biases.
-
- This logger sends information to W&B at wandb.ai. By default, this information
- includes hyperparameters, system configuration and metrics, model metrics,
- and basic data metrics and analyses.
-
- By providing additional command line arguments to train.py, datasets,
- models and predictions can also be logged.
-
- For more on how this logger is used, see the Weights & Biases documentation:
- https://docs.wandb.com/guides/integrations/yolov5
- """
-
- def __init__(self, opt, run_id=None, job_type='Training'):
- """
- - Initialize WandbLogger instance
- - Upload dataset if opt.upload_dataset is True
- - Setup training processes if job_type is 'Training'
-
- arguments:
- opt (namespace) -- Commandline arguments for this run
- run_id (str) -- Run ID of W&B run to be resumed
- job_type (str) -- To set the job_type for this run
-
- """
- # Temporary-fix
- if opt.upload_dataset:
- opt.upload_dataset = False
- # LOGGER.info("Uploading Dataset functionality is not being supported temporarily due to a bug.")
-
- # Pre-training routine --
- self.job_type = job_type
- self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run
- self.val_artifact, self.train_artifact = None, None
- self.train_artifact_path, self.val_artifact_path = None, None
- self.result_artifact = None
- self.val_table, self.result_table = None, None
- self.bbox_media_panel_images = []
- self.val_table_path_map = None
- self.max_imgs_to_log = 16
- self.wandb_artifact_data_dict = None
- self.data_dict = None
- # It's more elegant to stick to 1 wandb.init call,
- # but useful config data is overwritten in the WandbLogger's wandb.init call
- if isinstance(opt.resume, str): # checks resume from artifact
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
- model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name
- assert wandb, 'install wandb to resume wandb runs'
- # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config
- self.wandb_run = wandb.init(id=run_id,
- project=project,
- entity=entity,
- resume='allow',
- allow_val_change=True)
- opt.resume = model_artifact_name
- elif self.wandb:
- self.wandb_run = wandb.init(config=opt,
- resume="allow",
- project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem,
- entity=opt.entity,
- name=opt.name if opt.name != 'exp' else None,
- job_type=job_type,
- id=run_id,
- allow_val_change=True) if not wandb.run else wandb.run
- if self.wandb_run:
- if self.job_type == 'Training':
- if opt.upload_dataset:
- if not opt.resume:
- self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt)
-
- if isinstance(opt.data, dict):
- # This means another dataset manager has already processed the dataset info (e.g. ClearML)
- # and they will have stored the already processed dict in opt.data
- self.data_dict = opt.data
- elif opt.resume:
- # resume from artifact
- if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- self.data_dict = dict(self.wandb_run.config.data_dict)
- else: # local resume
- self.data_dict = check_wandb_dataset(opt.data)
- else:
- self.data_dict = check_wandb_dataset(opt.data)
- self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict
-
- # write data_dict to config. useful for resuming from artifacts. Do this only when not resuming.
- self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict}, allow_val_change=True)
- self.setup_training(opt)
-
- if self.job_type == 'Dataset Creation':
- self.wandb_run.config.update({"upload_dataset": True})
- self.data_dict = self.check_and_upload_dataset(opt)
-
- def check_and_upload_dataset(self, opt):
- """
- Check if the dataset format is compatible and upload it as W&B artifact
-
- arguments:
- opt (namespace)-- Commandline arguments for current run
-
- returns:
-            Updated dataset info dictionary where local dataset paths are replaced by WANDB_ARTIFACT_PREFIX links.
- """
- assert wandb, 'Install wandb to upload dataset'
- config_path = self.log_dataset_artifact(opt.data, opt.single_cls,
- 'YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem)
- with open(config_path, errors='ignore') as f:
- wandb_data_dict = yaml.safe_load(f)
- return wandb_data_dict
-
- def setup_training(self, opt):
- """
- Setup the necessary processes for training YOLO models:
-        - Attempt to download model checkpoint and dataset artifacts if opt.resume starts with WANDB_ARTIFACT_PREFIX
-        - Update data_dict to contain info of the previous run if resumed, and the paths of the dataset artifact if downloaded
- - Setup log_dict, initialize bbox_interval
-
- arguments:
- opt (namespace) -- commandline arguments for this run
-
- """
- self.log_dict, self.current_epoch = {}, 0
- self.bbox_interval = opt.bbox_interval
- if isinstance(opt.resume, str):
- modeldir, _ = self.download_model_artifact(opt)
- if modeldir:
- self.weights = Path(modeldir) / "last.pt"
- config = self.wandb_run.config
- opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp, opt.imgsz = str(
- self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs,\
- config.hyp, config.imgsz
- data_dict = self.data_dict
- if self.val_artifact is None: # If --upload_dataset is set, use the existing artifact, don't download
- self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(
- data_dict.get('train'), opt.artifact_alias)
- self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(
- data_dict.get('val'), opt.artifact_alias)
-
- if self.train_artifact_path is not None:
- train_path = Path(self.train_artifact_path) / 'data/images/'
- data_dict['train'] = str(train_path)
- if self.val_artifact_path is not None:
- val_path = Path(self.val_artifact_path) / 'data/images/'
- data_dict['val'] = str(val_path)
-
- if self.val_artifact is not None:
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
- columns = ["epoch", "id", "ground truth", "prediction"]
- columns.extend(self.data_dict['names'])
- self.result_table = wandb.Table(columns)
- self.val_table = self.val_artifact.get("val")
- if self.val_table_path_map is None:
- self.map_val_table_path()
- if opt.bbox_interval == -1:
- self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1
- if opt.evolve or opt.noplots:
- self.bbox_interval = opt.bbox_interval = opt.epochs + 1 # disable bbox_interval
- train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None
-        # Update the data_dict to point to the local artifacts dir
- if train_from_artifact:
- self.data_dict = data_dict
-
- def download_dataset_artifact(self, path, alias):
- """
-        Download the dataset artifact if the path starts with WANDB_ARTIFACT_PREFIX
-
- arguments:
- path -- path of the dataset to be used for training
-        alias (str) -- alias of the artifact to be downloaded/used for training
-
- returns:
-        (str, wandb.Artifact) -- path of the downloaded dataset and its corresponding artifact object if the dataset
-        is found, otherwise returns (None, None)
- """
- if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX):
- artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias)
- dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\", "/"))
-            assert dataset_artifact is not None, "Error: W&B dataset artifact doesn't exist"
- datadir = dataset_artifact.download()
- return datadir, dataset_artifact
- return None, None
-
- def download_model_artifact(self, opt):
- """
-        Download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX
-
- arguments:
- opt (namespace) -- Commandline arguments for this run
- """
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest")
- assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist'
- modeldir = model_artifact.download()
- # epochs_trained = model_artifact.metadata.get('epochs_trained')
- total_epochs = model_artifact.metadata.get('total_epochs')
- is_finished = total_epochs is None
- assert not is_finished, 'training is finished, can only resume incomplete runs.'
- return modeldir, model_artifact
- return None, None
-
- def log_model(self, path, opt, epoch, fitness_score, best_model=False):
- """
- Log the model checkpoint as W&B artifact
-
- arguments:
- path (Path) -- Path of directory containing the checkpoints
- opt (namespace) -- Command line arguments for this run
- epoch (int) -- Current epoch number
- fitness_score (float) -- fitness score for current epoch
- best_model (boolean) -- Boolean representing if the current checkpoint is the best yet.
- """
- model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model',
- type='model',
- metadata={
- 'original_url': str(path),
- 'epochs_trained': epoch + 1,
- 'save period': opt.save_period,
- 'project': opt.project,
- 'total_epochs': opt.epochs,
- 'fitness_score': fitness_score})
- model_artifact.add_file(str(path / 'last.pt'), name='last.pt')
- wandb.log_artifact(model_artifact,
- aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else ''])
- LOGGER.info(f"Saving model artifact on epoch {epoch + 1}")
-
- def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False):
- """
- Log the dataset as W&B artifact and return the new data file with W&B links
-
- arguments:
- data_file (str) -- the .yaml file with information about the dataset like - path, classes etc.
-        single_cls (boolean) -- train multi-class data as single-class
- project (str) -- project name. Used to construct the artifact path
-        overwrite_config (boolean) -- overwrites the data.yaml file if set to True, otherwise creates a new
-        file with the _wandb postfix, e.g. data_wandb.yaml
-
- returns:
-        the new .yaml file with artifact links. It can be used to start training directly from artifacts.
- """
- upload_dataset = self.wandb_run.config.upload_dataset
- log_val_only = isinstance(upload_dataset, str) and upload_dataset == 'val'
- self.data_dict = check_dataset(data_file) # parse and check
- data = dict(self.data_dict)
- nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names'])
- names = {k: v for k, v in enumerate(names)} # to index dictionary
-
- # log train set
- if not log_val_only:
- self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(data['train'], rect=True, batch_size=1),
- names,
- name='train') if data.get('train') else None
- if data.get('train'):
- data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train')
-
- self.val_artifact = self.create_dataset_table(
- LoadImagesAndLabels(data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None
- if data.get('val'):
- data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val')
-
- path = Path(data_file)
-        # create a _wandb.yaml file with artifact links if both train and val sets are logged
- if not log_val_only:
- path = (path.stem if overwrite_config else path.stem + '_wandb') + '.yaml' # updated data.yaml path
- path = ROOT / 'data' / path
- data.pop('download', None)
- data.pop('path', None)
- with open(path, 'w') as f:
- yaml.safe_dump(data, f)
- LOGGER.info(f"Created dataset config file {path}")
-
- if self.job_type == 'Training': # builds correct artifact pipeline graph
- if not log_val_only:
- self.wandb_run.log_artifact(
- self.train_artifact) # calling use_artifact downloads the dataset. NOT NEEDED!
- self.wandb_run.use_artifact(self.val_artifact)
- self.val_artifact.wait()
- self.val_table = self.val_artifact.get('val')
- self.map_val_table_path()
- else:
- self.wandb_run.log_artifact(self.train_artifact)
- self.wandb_run.log_artifact(self.val_artifact)
- return path
-
- def map_val_table_path(self):
- """
-        Map the validation dataset Table: file name -> its id in the W&B Table.
-        Useful for referencing artifacts for evaluation.
- """
- self.val_table_path_map = {}
- LOGGER.info("Mapping dataset")
- for i, data in enumerate(tqdm(self.val_table.data)):
- self.val_table_path_map[data[3]] = data[0]
-
- def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int, str], name: str = 'dataset'):
- """
- Create and return W&B artifact containing W&B Table of the dataset.
-
- arguments:
- dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table
- class_to_id -- hash map that maps class ids to labels
- name -- name of the artifact
-
- returns:
- dataset artifact to be logged or used
- """
-        # TODO: Explore multiprocessing to split this loop in parallel | This is essential for speeding up the logging
- artifact = wandb.Artifact(name=name, type="dataset")
- img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None
- img_files = tqdm(dataset.im_files) if not img_files else img_files
- for img_file in img_files:
- if Path(img_file).is_dir():
- artifact.add_dir(img_file, name='data/images')
- labels_path = 'labels'.join(dataset.path.rsplit('images', 1))
- artifact.add_dir(labels_path, name='data/labels')
- else:
- artifact.add_file(img_file, name='data/images/' + Path(img_file).name)
- label_file = Path(img2label_paths([img_file])[0])
- artifact.add_file(str(label_file), name='data/labels/' +
- label_file.name) if label_file.exists() else None
- table = wandb.Table(columns=["id", "train_image", "Classes", "name"])
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()])
- for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)):
- box_data, img_classes = [], {}
- for cls, *xywh in labels[:, 1:].tolist():
- cls = int(cls)
- box_data.append({
- "position": {
- "middle": [xywh[0], xywh[1]],
- "width": xywh[2],
- "height": xywh[3]},
- "class_id": cls,
- "box_caption": "%s" % (class_to_id[cls])})
- img_classes[cls] = class_to_id[cls]
- boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space
- table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()),
- Path(paths).name)
- artifact.add(table, name)
- return artifact
-
- def log_training_progress(self, predn, path, names):
- """
- Build evaluation Table. Uses reference from validation dataset table.
-
- arguments:
- predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class]
- path (str): local path of the current evaluation image
- names (dict(int, str)): hash map that maps class ids to labels
- """
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()])
- box_data = []
- avg_conf_per_class = [0] * len(self.data_dict['names'])
- pred_class_count = {}
- for *xyxy, conf, cls in predn.tolist():
- if conf >= 0.25:
- cls = int(cls)
- box_data.append({
- "position": {
- "minX": xyxy[0],
- "minY": xyxy[1],
- "maxX": xyxy[2],
- "maxY": xyxy[3]},
- "class_id": cls,
- "box_caption": f"{names[cls]} {conf:.3f}",
- "scores": {
- "class_score": conf},
- "domain": "pixel"})
- avg_conf_per_class[cls] += conf
-
- if cls in pred_class_count:
- pred_class_count[cls] += 1
- else:
- pred_class_count[cls] = 1
-
- for pred_class in pred_class_count.keys():
- avg_conf_per_class[pred_class] = avg_conf_per_class[pred_class] / pred_class_count[pred_class]
-
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- id = self.val_table_path_map[Path(path).name]
- self.result_table.add_data(self.current_epoch, id, self.val_table.data[id][1],
- wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set),
- *avg_conf_per_class)
-
- def val_one_image(self, pred, predn, path, names, im):
- """
-        Log validation data for one image. Updates the result Table if the validation dataset is uploaded and logs the bbox media panel.
-
- arguments:
- pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class]
- predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class]
- path (str): local path of the current evaluation image
- """
- if self.val_table and self.result_table: # Log Table if Val dataset is uploaded as artifact
- self.log_training_progress(predn, path, names)
-
- if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0:
- if self.current_epoch % self.bbox_interval == 0:
- box_data = [{
- "position": {
- "minX": xyxy[0],
- "minY": xyxy[1],
- "maxX": xyxy[2],
- "maxY": xyxy[3]},
- "class_id": int(cls),
- "box_caption": f"{names[int(cls)]} {conf:.3f}",
- "scores": {
- "class_score": conf},
- "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()]
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name))
-
- def log(self, log_dict):
- """
-        Save the metrics to the logging dictionary.
-
- arguments:
- log_dict (Dict) -- metrics/media to be logged in current step
- """
- if self.wandb_run:
- for key, value in log_dict.items():
- self.log_dict[key] = value
-
- def end_epoch(self, best_result=False):
- """
-        Commit the log_dict, model artifacts and Tables to W&B and flush the log_dict.
-
- arguments:
- best_result (boolean): Boolean representing if the result of this evaluation is best or not
- """
- if self.wandb_run:
- with all_logging_disabled():
- if self.bbox_media_panel_images:
- self.log_dict["BoundingBoxDebugger"] = self.bbox_media_panel_images
- try:
- wandb.log(self.log_dict)
- except BaseException as e:
- LOGGER.info(
- f"An error occurred in wandb logger. The training will proceed without interruption. More info\n{e}"
- )
- self.wandb_run.finish()
- self.wandb_run = None
-
- self.log_dict = {}
- self.bbox_media_panel_images = []
- if self.result_artifact:
- self.result_artifact.add(self.result_table, 'result')
- wandb.log_artifact(self.result_artifact,
- aliases=[
- 'latest', 'last', 'epoch ' + str(self.current_epoch),
- ('best' if best_result else '')])
-
- wandb.log({"evaluation": self.result_table})
- columns = ["epoch", "id", "ground truth", "prediction"]
- columns.extend(self.data_dict['names'])
- self.result_table = wandb.Table(columns)
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
-
- def finish_run(self):
- """
- Log metrics if any and finish the current W&B run
- """
- if self.wandb_run:
- if self.log_dict:
- with all_logging_disabled():
- wandb.log(self.log_dict)
- wandb.run.finish()
-
-
-@contextmanager
-def all_logging_disabled(highest_level=logging.CRITICAL):
- """ source - https://gist.github.com/simon-weber/7853144
- A context manager that will prevent any logging messages triggered during the body from being processed.
- :param highest_level: the maximum logging level in use.
- This would only need to be changed if a custom level greater than CRITICAL is defined.
- """
- previous_level = logging.root.manager.disable
- logging.disable(highest_level)
- try:
- yield
- finally:
- logging.disable(previous_level)
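# Hypothetical usage sketch (assumes the all_logging_disabled() context manager defined
# above is in scope): records emitted inside the block are dropped, and the previous
# logging.disable level is restored on exit.
import logging

with all_logging_disabled():
    logging.getLogger(__name__).warning("suppressed while the context manager is active")
logging.getLogger(__name__).info("emitted normally once the block exits")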
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/util.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/util.py
deleted file mode 100644
index dd01849d997e5ae9dc9809295e29ceb871b14216..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/util.py
+++ /dev/null
@@ -1,1932 +0,0 @@
-#
-# Copyright (C) 2012-2021 The Python Software Foundation.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-import codecs
-from collections import deque
-import contextlib
-import csv
-from glob import iglob as std_iglob
-import io
-import json
-import logging
-import os
-import py_compile
-import re
-import socket
-try:
- import ssl
-except ImportError: # pragma: no cover
- ssl = None
-import subprocess
-import sys
-import tarfile
-import tempfile
-import textwrap
-
-try:
- import threading
-except ImportError: # pragma: no cover
- import dummy_threading as threading
-import time
-
-from . import DistlibException
-from .compat import (string_types, text_type, shutil, raw_input, StringIO,
- cache_from_source, urlopen, urljoin, httplib, xmlrpclib,
- splittype, HTTPHandler, BaseConfigurator, valid_ident,
- Container, configparser, URLError, ZipFile, fsdecode,
- unquote, urlparse)
-
-logger = logging.getLogger(__name__)
-
-#
-# Requirement parsing code as per PEP 508
-#
-
-IDENTIFIER = re.compile(r'^([\w\.-]+)\s*')
-VERSION_IDENTIFIER = re.compile(r'^([\w\.*+-]+)\s*')
-COMPARE_OP = re.compile(r'^(<=?|>=?|={2,3}|[~!]=)\s*')
-MARKER_OP = re.compile(r'^((<=?)|(>=?)|={2,3}|[~!]=|in|not\s+in)\s*')
-OR = re.compile(r'^or\b\s*')
-AND = re.compile(r'^and\b\s*')
-NON_SPACE = re.compile(r'(\S+)\s*')
-STRING_CHUNK = re.compile(r'([\s\w\.{}()*+#:;,/?!~`@$%^&=|<>\[\]-]+)')
-
-
-def parse_marker(marker_string):
- """
- Parse a marker string and return a dictionary containing a marker expression.
-
- The dictionary will contain keys "op", "lhs" and "rhs" for non-terminals in
- the expression grammar, or strings. A string contained in quotes is to be
- interpreted as a literal string, and a string not contained in quotes is a
- variable (such as os_name).
- """
- def marker_var(remaining):
- # either identifier, or literal string
- m = IDENTIFIER.match(remaining)
- if m:
- result = m.groups()[0]
- remaining = remaining[m.end():]
- elif not remaining:
- raise SyntaxError('unexpected end of input')
- else:
- q = remaining[0]
- if q not in '\'"':
- raise SyntaxError('invalid expression: %s' % remaining)
- oq = '\'"'.replace(q, '')
- remaining = remaining[1:]
- parts = [q]
- while remaining:
- # either a string chunk, or oq, or q to terminate
- if remaining[0] == q:
- break
- elif remaining[0] == oq:
- parts.append(oq)
- remaining = remaining[1:]
- else:
- m = STRING_CHUNK.match(remaining)
- if not m:
- raise SyntaxError('error in string literal: %s' % remaining)
- parts.append(m.groups()[0])
- remaining = remaining[m.end():]
- else:
- s = ''.join(parts)
- raise SyntaxError('unterminated string: %s' % s)
- parts.append(q)
- result = ''.join(parts)
- remaining = remaining[1:].lstrip() # skip past closing quote
- return result, remaining
-
- def marker_expr(remaining):
- if remaining and remaining[0] == '(':
- result, remaining = marker(remaining[1:].lstrip())
- if remaining[0] != ')':
- raise SyntaxError('unterminated parenthesis: %s' % remaining)
- remaining = remaining[1:].lstrip()
- else:
- lhs, remaining = marker_var(remaining)
- while remaining:
- m = MARKER_OP.match(remaining)
- if not m:
- break
- op = m.groups()[0]
- remaining = remaining[m.end():]
- rhs, remaining = marker_var(remaining)
- lhs = {'op': op, 'lhs': lhs, 'rhs': rhs}
- result = lhs
- return result, remaining
-
- def marker_and(remaining):
- lhs, remaining = marker_expr(remaining)
- while remaining:
- m = AND.match(remaining)
- if not m:
- break
- remaining = remaining[m.end():]
- rhs, remaining = marker_expr(remaining)
- lhs = {'op': 'and', 'lhs': lhs, 'rhs': rhs}
- return lhs, remaining
-
- def marker(remaining):
- lhs, remaining = marker_and(remaining)
- while remaining:
- m = OR.match(remaining)
- if not m:
- break
- remaining = remaining[m.end():]
- rhs, remaining = marker_and(remaining)
- lhs = {'op': 'or', 'lhs': lhs, 'rhs': rhs}
- return lhs, remaining
-
- return marker(marker_string)
-
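# Hypothetical usage sketch (assumes the distlib package is installed): parse_marker()
# returns a nested dict of {'op', 'lhs', 'rhs'} nodes plus any unparsed remainder.
from distlib.util import parse_marker

tree, rest = parse_marker('python_version >= "3.6" and os_name == "posix"')
# tree == {'op': 'and',
#          'lhs': {'op': '>=', 'lhs': 'python_version', 'rhs': '"3.6"'},
#          'rhs': {'op': '==', 'lhs': 'os_name', 'rhs': '"posix"'}}
# rest == ''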
-
-def parse_requirement(req):
- """
- Parse a requirement passed in as a string. Return a Container
- whose attributes contain the various parts of the requirement.
- """
- remaining = req.strip()
- if not remaining or remaining.startswith('#'):
- return None
- m = IDENTIFIER.match(remaining)
- if not m:
- raise SyntaxError('name expected: %s' % remaining)
- distname = m.groups()[0]
- remaining = remaining[m.end():]
- extras = mark_expr = versions = uri = None
- if remaining and remaining[0] == '[':
- i = remaining.find(']', 1)
- if i < 0:
- raise SyntaxError('unterminated extra: %s' % remaining)
- s = remaining[1:i]
- remaining = remaining[i + 1:].lstrip()
- extras = []
- while s:
- m = IDENTIFIER.match(s)
- if not m:
- raise SyntaxError('malformed extra: %s' % s)
- extras.append(m.groups()[0])
- s = s[m.end():]
- if not s:
- break
- if s[0] != ',':
- raise SyntaxError('comma expected in extras: %s' % s)
- s = s[1:].lstrip()
- if not extras:
- extras = None
- if remaining:
- if remaining[0] == '@':
- # it's a URI
- remaining = remaining[1:].lstrip()
- m = NON_SPACE.match(remaining)
- if not m:
- raise SyntaxError('invalid URI: %s' % remaining)
- uri = m.groups()[0]
- t = urlparse(uri)
- # there are issues with Python and URL parsing, so this test
- # is a bit crude. See bpo-20271, bpo-23505. Python doesn't
- # always parse invalid URLs correctly - it should raise
- # exceptions for malformed URLs
- if not (t.scheme and t.netloc):
- raise SyntaxError('Invalid URL: %s' % uri)
- remaining = remaining[m.end():].lstrip()
- else:
-
- def get_versions(ver_remaining):
- """
- Return a list of operator, version tuples if any are
- specified, else None.
- """
- m = COMPARE_OP.match(ver_remaining)
- versions = None
- if m:
- versions = []
- while True:
- op = m.groups()[0]
- ver_remaining = ver_remaining[m.end():]
- m = VERSION_IDENTIFIER.match(ver_remaining)
- if not m:
- raise SyntaxError('invalid version: %s' % ver_remaining)
- v = m.groups()[0]
- versions.append((op, v))
- ver_remaining = ver_remaining[m.end():]
- if not ver_remaining or ver_remaining[0] != ',':
- break
- ver_remaining = ver_remaining[1:].lstrip()
- # Some packages have a trailing comma which would break things
- # See issue #148
- if not ver_remaining:
- break
- m = COMPARE_OP.match(ver_remaining)
- if not m:
- raise SyntaxError('invalid constraint: %s' % ver_remaining)
- if not versions:
- versions = None
- return versions, ver_remaining
-
- if remaining[0] != '(':
- versions, remaining = get_versions(remaining)
- else:
- i = remaining.find(')', 1)
- if i < 0:
- raise SyntaxError('unterminated parenthesis: %s' % remaining)
- s = remaining[1:i]
- remaining = remaining[i + 1:].lstrip()
- # As a special diversion from PEP 508, allow a version number
- # a.b.c in parentheses as a synonym for ~= a.b.c (because this
- # is allowed in earlier PEPs)
- if COMPARE_OP.match(s):
- versions, _ = get_versions(s)
- else:
- m = VERSION_IDENTIFIER.match(s)
- if not m:
- raise SyntaxError('invalid constraint: %s' % s)
- v = m.groups()[0]
- s = s[m.end():].lstrip()
- if s:
- raise SyntaxError('invalid constraint: %s' % s)
- versions = [('~=', v)]
-
- if remaining:
- if remaining[0] != ';':
- raise SyntaxError('invalid requirement: %s' % remaining)
- remaining = remaining[1:].lstrip()
-
- mark_expr, remaining = parse_marker(remaining)
-
- if remaining and remaining[0] != '#':
- raise SyntaxError('unexpected trailing data: %s' % remaining)
-
- if not versions:
- rs = distname
- else:
- rs = '%s %s' % (distname, ', '.join(['%s %s' % con for con in versions]))
- return Container(name=distname, extras=extras, constraints=versions,
- marker=mark_expr, url=uri, requirement=rs)
-
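# Hypothetical usage sketch (assumes the distlib package is installed): parse_requirement()
# returns a Container whose attributes expose the parts of a PEP 508 requirement.
from distlib.util import parse_requirement

r = parse_requirement('requests[security] (>= 2.8.1, < 3.0); python_version >= "3.6"')
# r.name == 'requests'
# r.extras == ['security']
# r.constraints == [('>=', '2.8.1'), ('<', '3.0')]
# r.marker == {'op': '>=', 'lhs': 'python_version', 'rhs': '"3.6"'}
# r.requirement == 'requests >= 2.8.1, < 3.0'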
-
-def get_resources_dests(resources_root, rules):
- """Find destinations for resources files"""
-
- def get_rel_path(root, path):
-        # normalize and return the path relative to root, '/'-separated, with any leading '/' stripped
- root = root.replace(os.path.sep, '/')
- path = path.replace(os.path.sep, '/')
- assert path.startswith(root)
- return path[len(root):].lstrip('/')
-
- destinations = {}
- for base, suffix, dest in rules:
- prefix = os.path.join(resources_root, base)
- for abs_base in iglob(prefix):
- abs_glob = os.path.join(abs_base, suffix)
- for abs_path in iglob(abs_glob):
- resource_file = get_rel_path(resources_root, abs_path)
- if dest is None: # remove the entry if it was here
- destinations.pop(resource_file, None)
- else:
- rel_path = get_rel_path(abs_base, abs_path)
- rel_dest = dest.replace(os.path.sep, '/').rstrip('/')
- destinations[resource_file] = rel_dest + '/' + rel_path
- return destinations
-
-
-def in_venv():
- if hasattr(sys, 'real_prefix'):
- # virtualenv venvs
- result = True
- else:
- # PEP 405 venvs
- result = sys.prefix != getattr(sys, 'base_prefix', sys.prefix)
- return result
-
-
-def get_executable():
-# The __PYVENV_LAUNCHER__ dance is apparently no longer needed, as
-# changes to the stub launcher mean that sys.executable always points
-# to the stub on OS X
-# if sys.platform == 'darwin' and ('__PYVENV_LAUNCHER__'
-# in os.environ):
-# result = os.environ['__PYVENV_LAUNCHER__']
-# else:
-# result = sys.executable
-# return result
- # Avoid normcasing: see issue #143
- # result = os.path.normcase(sys.executable)
- result = sys.executable
- if not isinstance(result, text_type):
- result = fsdecode(result)
- return result
-
-
-def proceed(prompt, allowed_chars, error_prompt=None, default=None):
- p = prompt
- while True:
- s = raw_input(p)
- p = prompt
- if not s and default:
- s = default
- if s:
- c = s[0].lower()
- if c in allowed_chars:
- break
- if error_prompt:
- p = '%c: %s\n%s' % (c, error_prompt, prompt)
- return c
-
-
-def extract_by_key(d, keys):
- if isinstance(keys, string_types):
- keys = keys.split()
- result = {}
- for key in keys:
- if key in d:
- result[key] = d[key]
- return result
-
-def read_exports(stream):
- if sys.version_info[0] >= 3:
- # needs to be a text stream
- stream = codecs.getreader('utf-8')(stream)
- # Try to load as JSON, falling back on legacy format
- data = stream.read()
- stream = StringIO(data)
- try:
- jdata = json.load(stream)
- result = jdata['extensions']['python.exports']['exports']
- for group, entries in result.items():
- for k, v in entries.items():
- s = '%s = %s' % (k, v)
- entry = get_export_entry(s)
- assert entry is not None
- entries[k] = entry
- return result
- except Exception:
- stream.seek(0, 0)
-
- def read_stream(cp, stream):
- if hasattr(cp, 'read_file'):
- cp.read_file(stream)
- else:
- cp.readfp(stream)
-
- cp = configparser.ConfigParser()
- try:
- read_stream(cp, stream)
- except configparser.MissingSectionHeaderError:
- stream.close()
- data = textwrap.dedent(data)
- stream = StringIO(data)
- read_stream(cp, stream)
-
- result = {}
- for key in cp.sections():
- result[key] = entries = {}
- for name, value in cp.items(key):
- s = '%s = %s' % (name, value)
- entry = get_export_entry(s)
- assert entry is not None
- #entry.dist = self
- entries[name] = entry
- return result
-
-
-def write_exports(exports, stream):
- if sys.version_info[0] >= 3:
- # needs to be a text stream
- stream = codecs.getwriter('utf-8')(stream)
- cp = configparser.ConfigParser()
- for k, v in exports.items():
- # TODO check k, v for valid values
- cp.add_section(k)
- for entry in v.values():
- if entry.suffix is None:
- s = entry.prefix
- else:
- s = '%s:%s' % (entry.prefix, entry.suffix)
- if entry.flags:
- s = '%s [%s]' % (s, ', '.join(entry.flags))
- cp.set(k, entry.name, s)
- cp.write(stream)
-
-
-@contextlib.contextmanager
-def tempdir():
- td = tempfile.mkdtemp()
- try:
- yield td
- finally:
- shutil.rmtree(td)
-
-@contextlib.contextmanager
-def chdir(d):
- cwd = os.getcwd()
- try:
- os.chdir(d)
- yield
- finally:
- os.chdir(cwd)
-
-
-@contextlib.contextmanager
-def socket_timeout(seconds=15):
- cto = socket.getdefaulttimeout()
- try:
- socket.setdefaulttimeout(seconds)
- yield
- finally:
- socket.setdefaulttimeout(cto)
-
-
-class cached_property(object):
- def __init__(self, func):
- self.func = func
- #for attr in ('__name__', '__module__', '__doc__'):
- # setattr(self, attr, getattr(func, attr, None))
-
- def __get__(self, obj, cls=None):
- if obj is None:
- return self
- value = self.func(obj)
- object.__setattr__(obj, self.func.__name__, value)
- #obj.__dict__[self.func.__name__] = value = self.func(obj)
- return value
-
-def convert_path(pathname):
- """Return 'pathname' as a name that will work on the native filesystem.
-
- The path is split on '/' and put back together again using the current
- directory separator. Needed because filenames in the setup script are
- always supplied in Unix style, and have to be converted to the local
- convention before we can actually use them in the filesystem. Raises
- ValueError on non-Unix-ish systems if 'pathname' either starts or
- ends with a slash.
- """
- if os.sep == '/':
- return pathname
- if not pathname:
- return pathname
- if pathname[0] == '/':
- raise ValueError("path '%s' cannot be absolute" % pathname)
- if pathname[-1] == '/':
- raise ValueError("path '%s' cannot end with '/'" % pathname)
-
- paths = pathname.split('/')
- while os.curdir in paths:
- paths.remove(os.curdir)
- if not paths:
- return os.curdir
- return os.path.join(*paths)
-
-
-class FileOperator(object):
- def __init__(self, dry_run=False):
- self.dry_run = dry_run
- self.ensured = set()
- self._init_record()
-
- def _init_record(self):
- self.record = False
- self.files_written = set()
- self.dirs_created = set()
-
- def record_as_written(self, path):
- if self.record:
- self.files_written.add(path)
-
- def newer(self, source, target):
- """Tell if the target is newer than the source.
-
- Returns true if 'source' exists and is more recently modified than
- 'target', or if 'source' exists and 'target' doesn't.
-
- Returns false if both exist and 'target' is the same age or younger
-        than 'source'. Raise DistlibException if 'source' does not exist.
-
- Note that this test is not very accurate: files created in the same
- second will have the same "age".
- """
- if not os.path.exists(source):
- raise DistlibException("file '%r' does not exist" %
- os.path.abspath(source))
- if not os.path.exists(target):
- return True
-
- return os.stat(source).st_mtime > os.stat(target).st_mtime
-
- def copy_file(self, infile, outfile, check=True):
- """Copy a file respecting dry-run and force flags.
- """
- self.ensure_dir(os.path.dirname(outfile))
- logger.info('Copying %s to %s', infile, outfile)
- if not self.dry_run:
- msg = None
- if check:
- if os.path.islink(outfile):
- msg = '%s is a symlink' % outfile
- elif os.path.exists(outfile) and not os.path.isfile(outfile):
- msg = '%s is a non-regular file' % outfile
- if msg:
- raise ValueError(msg + ' which would be overwritten')
- shutil.copyfile(infile, outfile)
- self.record_as_written(outfile)
-
- def copy_stream(self, instream, outfile, encoding=None):
- assert not os.path.isdir(outfile)
- self.ensure_dir(os.path.dirname(outfile))
- logger.info('Copying stream %s to %s', instream, outfile)
- if not self.dry_run:
- if encoding is None:
- outstream = open(outfile, 'wb')
- else:
- outstream = codecs.open(outfile, 'w', encoding=encoding)
- try:
- shutil.copyfileobj(instream, outstream)
- finally:
- outstream.close()
- self.record_as_written(outfile)
-
- def write_binary_file(self, path, data):
- self.ensure_dir(os.path.dirname(path))
- if not self.dry_run:
- if os.path.exists(path):
- os.remove(path)
- with open(path, 'wb') as f:
- f.write(data)
- self.record_as_written(path)
-
- def write_text_file(self, path, data, encoding):
- self.write_binary_file(path, data.encode(encoding))
-
- def set_mode(self, bits, mask, files):
- if os.name == 'posix' or (os.name == 'java' and os._name == 'posix'):
- # Set the executable bits (owner, group, and world) on
- # all the files specified.
- for f in files:
- if self.dry_run:
- logger.info("changing mode of %s", f)
- else:
- mode = (os.stat(f).st_mode | bits) & mask
- logger.info("changing mode of %s to %o", f, mode)
- os.chmod(f, mode)
-
- set_executable_mode = lambda s, f: s.set_mode(0o555, 0o7777, f)
-
- def ensure_dir(self, path):
- path = os.path.abspath(path)
- if path not in self.ensured and not os.path.exists(path):
- self.ensured.add(path)
- d, f = os.path.split(path)
- self.ensure_dir(d)
- logger.info('Creating %s' % path)
- if not self.dry_run:
- os.mkdir(path)
- if self.record:
- self.dirs_created.add(path)
-
- def byte_compile(self, path, optimize=False, force=False, prefix=None, hashed_invalidation=False):
- dpath = cache_from_source(path, not optimize)
- logger.info('Byte-compiling %s to %s', path, dpath)
- if not self.dry_run:
- if force or self.newer(path, dpath):
- if not prefix:
- diagpath = None
- else:
- assert path.startswith(prefix)
- diagpath = path[len(prefix):]
- compile_kwargs = {}
- if hashed_invalidation and hasattr(py_compile, 'PycInvalidationMode'):
- compile_kwargs['invalidation_mode'] = py_compile.PycInvalidationMode.CHECKED_HASH
- py_compile.compile(path, dpath, diagpath, True, **compile_kwargs) # raise error
- self.record_as_written(dpath)
- return dpath
-
- def ensure_removed(self, path):
- if os.path.exists(path):
- if os.path.isdir(path) and not os.path.islink(path):
- logger.debug('Removing directory tree at %s', path)
- if not self.dry_run:
- shutil.rmtree(path)
- if self.record:
- if path in self.dirs_created:
- self.dirs_created.remove(path)
- else:
- if os.path.islink(path):
- s = 'link'
- else:
- s = 'file'
- logger.debug('Removing %s %s', s, path)
- if not self.dry_run:
- os.remove(path)
- if self.record:
- if path in self.files_written:
- self.files_written.remove(path)
-
- def is_writable(self, path):
- result = False
- while not result:
- if os.path.exists(path):
- result = os.access(path, os.W_OK)
- break
- parent = os.path.dirname(path)
- if parent == path:
- break
- path = parent
- return result
-
- def commit(self):
- """
- Commit recorded changes, turn off recording, return
- changes.
- """
- assert self.record
- result = self.files_written, self.dirs_created
- self._init_record()
- return result
-
- def rollback(self):
- if not self.dry_run:
- for f in list(self.files_written):
- if os.path.exists(f):
- os.remove(f)
- # dirs should all be empty now, except perhaps for
- # __pycache__ subdirs
- # reverse so that subdirs appear before their parents
- dirs = sorted(self.dirs_created, reverse=True)
- for d in dirs:
- flist = os.listdir(d)
- if flist:
- assert flist == ['__pycache__']
- sd = os.path.join(d, flist[0])
- os.rmdir(sd)
- os.rmdir(d) # should fail if non-empty
- self._init_record()
-
-def resolve(module_name, dotted_path):
- if module_name in sys.modules:
- mod = sys.modules[module_name]
- else:
- mod = __import__(module_name)
- if dotted_path is None:
- result = mod
- else:
- parts = dotted_path.split('.')
- result = getattr(mod, parts.pop(0))
- for p in parts:
- result = getattr(result, p)
- return result
-
-
-class ExportEntry(object):
- def __init__(self, name, prefix, suffix, flags):
- self.name = name
- self.prefix = prefix
- self.suffix = suffix
- self.flags = flags
-
- @cached_property
- def value(self):
- return resolve(self.prefix, self.suffix)
-
- def __repr__(self): # pragma: no cover
-        return '<ExportEntry %s = %s:%s %s>' % (self.name, self.prefix,
-                                                self.suffix, self.flags)
-
- def __eq__(self, other):
- if not isinstance(other, ExportEntry):
- result = False
- else:
- result = (self.name == other.name and
- self.prefix == other.prefix and
- self.suffix == other.suffix and
- self.flags == other.flags)
- return result
-
- __hash__ = object.__hash__
-
-
-ENTRY_RE = re.compile(r'''(?P<name>(\w|[-.+])+)
-                      \s*=\s*(?P<callable>(\w+)([:\.]\w+)*)
-                      \s*(\[\s*(?P<flags>[\w-]+(=\w+)?(,\s*\w+(=\w+)?)*)\s*\])?
-                      ''', re.VERBOSE)
-
-def get_export_entry(specification):
- m = ENTRY_RE.search(specification)
- if not m:
- result = None
- if '[' in specification or ']' in specification:
- raise DistlibException("Invalid specification "
- "'%s'" % specification)
- else:
- d = m.groupdict()
- name = d['name']
- path = d['callable']
- colons = path.count(':')
- if colons == 0:
- prefix, suffix = path, None
- else:
- if colons != 1:
- raise DistlibException("Invalid specification "
- "'%s'" % specification)
- prefix, suffix = path.split(':')
- flags = d['flags']
- if flags is None:
- if '[' in specification or ']' in specification:
- raise DistlibException("Invalid specification "
- "'%s'" % specification)
- flags = []
- else:
- flags = [f.strip() for f in flags.split(',')]
- result = ExportEntry(name, prefix, suffix, flags)
- return result
-
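# Hypothetical usage sketch (assumes the distlib package is installed): get_export_entry()
# parses a "name = module:callable [flags]" specification into an ExportEntry.
from distlib.util import get_export_entry

entry = get_export_entry('foo = distlib.scripts:main [gui]')
# entry.name == 'foo', entry.prefix == 'distlib.scripts',
# entry.suffix == 'main', entry.flags == ['gui']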
-
-def get_cache_base(suffix=None):
- """
- Return the default base location for distlib caches. If the directory does
- not exist, it is created. Use the suffix provided for the base directory,
- and default to '.distlib' if it isn't provided.
-
- On Windows, if LOCALAPPDATA is defined in the environment, then it is
- assumed to be a directory, and will be the parent directory of the result.
- On POSIX, and on Windows if LOCALAPPDATA is not defined, the user's home
- directory - using os.expanduser('~') - will be the parent directory of
- the result.
-
- The result is just the directory '.distlib' in the parent directory as
- determined above, or with the name specified with ``suffix``.
- """
- if suffix is None:
- suffix = '.distlib'
- if os.name == 'nt' and 'LOCALAPPDATA' in os.environ:
- result = os.path.expandvars('$localappdata')
- else:
- # Assume posix, or old Windows
- result = os.path.expanduser('~')
- # we use 'isdir' instead of 'exists', because we want to
- # fail if there's a file with that name
- if os.path.isdir(result):
- usable = os.access(result, os.W_OK)
- if not usable:
- logger.warning('Directory exists but is not writable: %s', result)
- else:
- try:
- os.makedirs(result)
- usable = True
- except OSError:
- logger.warning('Unable to create %s', result, exc_info=True)
- usable = False
- if not usable:
- result = tempfile.mkdtemp()
- logger.warning('Default location unusable, using %s', result)
- return os.path.join(result, suffix)
-
-
-def path_to_cache_dir(path):
- """
- Convert an absolute path to a directory name for use in a cache.
-
- The algorithm used is:
-
- #. On Windows, any ``':'`` in the drive is replaced with ``'---'``.
- #. Any occurrence of ``os.sep`` is replaced with ``'--'``.
- #. ``'.cache'`` is appended.
- """
- d, p = os.path.splitdrive(os.path.abspath(path))
- if d:
- d = d.replace(':', '---')
- p = p.replace(os.sep, '--')
- return d + p + '.cache'
-
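# Hypothetical usage sketch (assumes the distlib package is installed, POSIX paths): each
# os.sep becomes '--', a drive colon would become '---', and '.cache' is appended.
from distlib.util import path_to_cache_dir

print(path_to_cache_dir('/home/user/.distlib'))  # '--home--user--.distlib.cache'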
-
-def ensure_slash(s):
- if not s.endswith('/'):
- return s + '/'
- return s
-
-
-def parse_credentials(netloc):
- username = password = None
- if '@' in netloc:
- prefix, netloc = netloc.rsplit('@', 1)
- if ':' not in prefix:
- username = prefix
- else:
- username, password = prefix.split(':', 1)
- if username:
- username = unquote(username)
- if password:
- password = unquote(password)
- return username, password, netloc
-
-
-def get_process_umask():
- result = os.umask(0o22)
- os.umask(result)
- return result
-
-def is_string_sequence(seq):
- result = True
- i = None
- for i, s in enumerate(seq):
- if not isinstance(s, string_types):
- result = False
- break
- assert i is not None
- return result
-
-PROJECT_NAME_AND_VERSION = re.compile('([a-z0-9_]+([.-][a-z_][a-z0-9_]*)*)-'
- '([a-z0-9_.+-]+)', re.I)
-PYTHON_VERSION = re.compile(r'-py(\d\.?\d?)')
-
-
-def split_filename(filename, project_name=None):
- """
- Extract name, version, python version from a filename (no extension)
-
- Return name, version, pyver or None
- """
- result = None
- pyver = None
- filename = unquote(filename).replace(' ', '-')
- m = PYTHON_VERSION.search(filename)
- if m:
- pyver = m.group(1)
- filename = filename[:m.start()]
- if project_name and len(filename) > len(project_name) + 1:
- m = re.match(re.escape(project_name) + r'\b', filename)
- if m:
- n = m.end()
- result = filename[:n], filename[n + 1:], pyver
- if result is None:
- m = PROJECT_NAME_AND_VERSION.match(filename)
- if m:
- result = m.group(1), m.group(3), pyver
- return result
-
-# Allow spaces in name because of legacy dists like "Twisted Core"
-NAME_VERSION_RE = re.compile(r'(?P<name>[\w .-]+)\s*'
-                             r'\(\s*(?P<ver>[^\s)]+)\)$')
-
-def parse_name_and_version(p):
- """
- A utility method used to get name and version from a string.
-
- From e.g. a Provides-Dist value.
-
-    :param p: A value in the form 'foo (1.0)'
- :return: The name and version as a tuple.
- """
- m = NAME_VERSION_RE.match(p)
- if not m:
- raise DistlibException('Ill-formed name/version string: \'%s\'' % p)
- d = m.groupdict()
- return d['name'].strip().lower(), d['ver']
-
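# Hypothetical usage sketch (assumes the distlib package is installed) for the two
# filename/name-version helpers above.
from distlib.util import parse_name_and_version, split_filename

print(split_filename('choc-1.0.2-py3'))         # ('choc', '1.0.2', '3')
print(parse_name_and_version('foo-bar (1.0)'))  # ('foo-bar', '1.0')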
-def get_extras(requested, available):
- result = set()
- requested = set(requested or [])
- available = set(available or [])
- if '*' in requested:
- requested.remove('*')
- result |= available
- for r in requested:
- if r == '-':
- result.add(r)
- elif r.startswith('-'):
- unwanted = r[1:]
- if unwanted not in available:
- logger.warning('undeclared extra: %s' % unwanted)
- if unwanted in result:
- result.remove(unwanted)
- else:
- if r not in available:
- logger.warning('undeclared extra: %s' % r)
- result.add(r)
- return result
-#
-# Extended metadata functionality
-#
-
-def _get_external_data(url):
- result = {}
- try:
- # urlopen might fail if it runs into redirections,
- # because of Python issue #13696. Fixed in locators
- # using a custom redirect handler.
- resp = urlopen(url)
- headers = resp.info()
- ct = headers.get('Content-Type')
- if not ct.startswith('application/json'):
- logger.debug('Unexpected response for JSON request: %s', ct)
- else:
- reader = codecs.getreader('utf-8')(resp)
- #data = reader.read().decode('utf-8')
- #result = json.loads(data)
- result = json.load(reader)
- except Exception as e:
- logger.exception('Failed to get external data for %s: %s', url, e)
- return result
-
-_external_data_base_url = 'https://www.red-dove.com/pypi/projects/'
-
-def get_project_data(name):
- url = '%s/%s/project.json' % (name[0].upper(), name)
- url = urljoin(_external_data_base_url, url)
- result = _get_external_data(url)
- return result
-
-def get_package_data(name, version):
- url = '%s/%s/package-%s.json' % (name[0].upper(), name, version)
- url = urljoin(_external_data_base_url, url)
- return _get_external_data(url)
-
-
-class Cache(object):
- """
- A class implementing a cache for resources that need to live in the file system
- e.g. shared libraries. This class was moved from resources to here because it
- could be used by other modules, e.g. the wheel module.
- """
-
- def __init__(self, base):
- """
- Initialise an instance.
-
- :param base: The base directory where the cache should be located.
- """
- # we use 'isdir' instead of 'exists', because we want to
- # fail if there's a file with that name
- if not os.path.isdir(base): # pragma: no cover
- os.makedirs(base)
- if (os.stat(base).st_mode & 0o77) != 0:
- logger.warning('Directory \'%s\' is not private', base)
- self.base = os.path.abspath(os.path.normpath(base))
-
- def prefix_to_dir(self, prefix):
- """
- Converts a resource prefix to a directory name in the cache.
- """
- return path_to_cache_dir(prefix)
-
- def clear(self):
- """
- Clear the cache.
- """
- not_removed = []
- for fn in os.listdir(self.base):
- fn = os.path.join(self.base, fn)
- try:
- if os.path.islink(fn) or os.path.isfile(fn):
- os.remove(fn)
- elif os.path.isdir(fn):
- shutil.rmtree(fn)
- except Exception:
- not_removed.append(fn)
- return not_removed
-
-
-class EventMixin(object):
- """
- A very simple publish/subscribe system.
- """
- def __init__(self):
- self._subscribers = {}
-
- def add(self, event, subscriber, append=True):
- """
- Add a subscriber for an event.
-
- :param event: The name of an event.
- :param subscriber: The subscriber to be added (and called when the
- event is published).
- :param append: Whether to append or prepend the subscriber to an
- existing subscriber list for the event.
- """
- subs = self._subscribers
- if event not in subs:
- subs[event] = deque([subscriber])
- else:
- sq = subs[event]
- if append:
- sq.append(subscriber)
- else:
- sq.appendleft(subscriber)
-
- def remove(self, event, subscriber):
- """
- Remove a subscriber for an event.
-
- :param event: The name of an event.
- :param subscriber: The subscriber to be removed.
- """
- subs = self._subscribers
- if event not in subs:
- raise ValueError('No subscribers: %r' % event)
- subs[event].remove(subscriber)
-
- def get_subscribers(self, event):
- """
- Return an iterator for the subscribers for an event.
- :param event: The event to return subscribers for.
- """
- return iter(self._subscribers.get(event, ()))
-
- def publish(self, event, *args, **kwargs):
- """
-        Publish an event and return a list of values returned by its
- subscribers.
-
- :param event: The event to publish.
- :param args: The positional arguments to pass to the event's
- subscribers.
- :param kwargs: The keyword arguments to pass to the event's
- subscribers.
- """
- result = []
- for subscriber in self.get_subscribers(event):
- try:
- value = subscriber(event, *args, **kwargs)
- except Exception:
- logger.exception('Exception during event publication')
- value = None
- result.append(value)
- logger.debug('publish %s: args = %s, kwargs = %s, result = %s',
- event, args, kwargs, result)
- return result
-
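# Hypothetical usage sketch (assumes the distlib package is installed): EventMixin is a
# minimal publish/subscribe helper; publish() collects the subscribers' return values.
from distlib.util import EventMixin

bus = EventMixin()
bus.add('built', lambda event, name: 'built %s' % name)
print(bus.publish('built', 'wheel'))  # ['built wheel']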
-#
-# Simple sequencing
-#
-class Sequencer(object):
- def __init__(self):
- self._preds = {}
- self._succs = {}
- self._nodes = set() # nodes with no preds/succs
-
- def add_node(self, node):
- self._nodes.add(node)
-
- def remove_node(self, node, edges=False):
- if node in self._nodes:
- self._nodes.remove(node)
- if edges:
- for p in set(self._preds.get(node, ())):
- self.remove(p, node)
- for s in set(self._succs.get(node, ())):
- self.remove(node, s)
- # Remove empties
- for k, v in list(self._preds.items()):
- if not v:
- del self._preds[k]
- for k, v in list(self._succs.items()):
- if not v:
- del self._succs[k]
-
- def add(self, pred, succ):
- assert pred != succ
- self._preds.setdefault(succ, set()).add(pred)
- self._succs.setdefault(pred, set()).add(succ)
-
- def remove(self, pred, succ):
- assert pred != succ
- try:
- preds = self._preds[succ]
- succs = self._succs[pred]
- except KeyError: # pragma: no cover
- raise ValueError('%r not a successor of anything' % succ)
- try:
- preds.remove(pred)
- succs.remove(succ)
- except KeyError: # pragma: no cover
- raise ValueError('%r not a successor of %r' % (succ, pred))
-
- def is_step(self, step):
- return (step in self._preds or step in self._succs or
- step in self._nodes)
-
- def get_steps(self, final):
- if not self.is_step(final):
- raise ValueError('Unknown: %r' % final)
- result = []
- todo = []
- seen = set()
- todo.append(final)
- while todo:
- step = todo.pop(0)
- if step in seen:
- # if a step was already seen,
- # move it to the end (so it will appear earlier
- # when reversed on return) ... but not for the
- # final step, as that would be confusing for
- # users
- if step != final:
- result.remove(step)
- result.append(step)
- else:
- seen.add(step)
- result.append(step)
- preds = self._preds.get(step, ())
- todo.extend(preds)
- return reversed(result)
-
- @property
- def strong_connections(self):
- #http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
- index_counter = [0]
- stack = []
- lowlinks = {}
- index = {}
- result = []
-
- graph = self._succs
-
- def strongconnect(node):
- # set the depth index for this node to the smallest unused index
- index[node] = index_counter[0]
- lowlinks[node] = index_counter[0]
- index_counter[0] += 1
- stack.append(node)
-
- # Consider successors
- try:
- successors = graph[node]
- except Exception:
- successors = []
- for successor in successors:
- if successor not in lowlinks:
- # Successor has not yet been visited
- strongconnect(successor)
- lowlinks[node] = min(lowlinks[node],lowlinks[successor])
- elif successor in stack:
- # the successor is in the stack and hence in the current
- # strongly connected component (SCC)
- lowlinks[node] = min(lowlinks[node],index[successor])
-
- # If `node` is a root node, pop the stack and generate an SCC
- if lowlinks[node] == index[node]:
- connected_component = []
-
- while True:
- successor = stack.pop()
- connected_component.append(successor)
- if successor == node: break
- component = tuple(connected_component)
- # storing the result
- result.append(component)
-
- for node in graph:
- if node not in lowlinks:
- strongconnect(node)
-
- return result
-
- @property
- def dot(self):
- result = ['digraph G {']
- for succ in self._preds:
- preds = self._preds[succ]
- for pred in preds:
- result.append(' %s -> %s;' % (pred, succ))
- for node in self._nodes:
- result.append(' %s;' % node)
- result.append('}')
- return '\n'.join(result)
-
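# Hypothetical usage sketch (assumes the distlib package is installed): Sequencer stores
# predecessor/successor edges and get_steps() yields them in dependency order.
from distlib.util import Sequencer

seq = Sequencer()
seq.add('compile', 'link')    # 'compile' must run before 'link'
seq.add('link', 'package')
print(list(seq.get_steps('package')))  # ['compile', 'link', 'package']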
-#
-# Unarchiving functionality for zip, tar, tgz, tbz, whl
-#
-
-ARCHIVE_EXTENSIONS = ('.tar.gz', '.tar.bz2', '.tar', '.zip',
- '.tgz', '.tbz', '.whl')
-
-def unarchive(archive_filename, dest_dir, format=None, check=True):
-
- def check_path(path):
- if not isinstance(path, text_type):
- path = path.decode('utf-8')
- p = os.path.abspath(os.path.join(dest_dir, path))
- if not p.startswith(dest_dir) or p[plen] != os.sep:
- raise ValueError('path outside destination: %r' % p)
-
- dest_dir = os.path.abspath(dest_dir)
- plen = len(dest_dir)
- archive = None
- if format is None:
- if archive_filename.endswith(('.zip', '.whl')):
- format = 'zip'
- elif archive_filename.endswith(('.tar.gz', '.tgz')):
- format = 'tgz'
- mode = 'r:gz'
- elif archive_filename.endswith(('.tar.bz2', '.tbz')):
- format = 'tbz'
- mode = 'r:bz2'
- elif archive_filename.endswith('.tar'):
- format = 'tar'
- mode = 'r'
- else: # pragma: no cover
- raise ValueError('Unknown format for %r' % archive_filename)
- try:
- if format == 'zip':
- archive = ZipFile(archive_filename, 'r')
- if check:
- names = archive.namelist()
- for name in names:
- check_path(name)
- else:
- archive = tarfile.open(archive_filename, mode)
- if check:
- names = archive.getnames()
- for name in names:
- check_path(name)
- if format != 'zip' and sys.version_info[0] < 3:
- # See Python issue 17153. If the dest path contains Unicode,
- # tarfile extraction fails on Python 2.x if a member path name
- # contains non-ASCII characters - it leads to an implicit
- # bytes -> unicode conversion using ASCII to decode.
- for tarinfo in archive.getmembers():
- if not isinstance(tarinfo.name, text_type):
- tarinfo.name = tarinfo.name.decode('utf-8')
- archive.extractall(dest_dir)
-
- finally:
- if archive:
- archive.close()
-
-
-def zip_dir(directory):
- """zip a directory tree into a BytesIO object"""
- result = io.BytesIO()
- dlen = len(directory)
- with ZipFile(result, "w") as zf:
- for root, dirs, files in os.walk(directory):
- for name in files:
- full = os.path.join(root, name)
- rel = root[dlen:]
- dest = os.path.join(rel, name)
- zf.write(full, dest)
- return result
-
-#
-# Simple progress bar
-#
-
-UNITS = ('', 'K', 'M', 'G', 'T', 'P')
-
-
-class Progress(object):
- unknown = 'UNKNOWN'
-
- def __init__(self, minval=0, maxval=100):
- assert maxval is None or maxval >= minval
- self.min = self.cur = minval
- self.max = maxval
- self.started = None
- self.elapsed = 0
- self.done = False
-
- def update(self, curval):
- assert self.min <= curval
- assert self.max is None or curval <= self.max
- self.cur = curval
- now = time.time()
- if self.started is None:
- self.started = now
- else:
- self.elapsed = now - self.started
-
- def increment(self, incr):
- assert incr >= 0
- self.update(self.cur + incr)
-
- def start(self):
- self.update(self.min)
- return self
-
- def stop(self):
- if self.max is not None:
- self.update(self.max)
- self.done = True
-
- @property
- def maximum(self):
- return self.unknown if self.max is None else self.max
-
- @property
- def percentage(self):
- if self.done:
- result = '100 %'
- elif self.max is None:
- result = ' ?? %'
- else:
- v = 100.0 * (self.cur - self.min) / (self.max - self.min)
- result = '%3d %%' % v
- return result
-
- def format_duration(self, duration):
- if (duration <= 0) and self.max is None or self.cur == self.min:
- result = '??:??:??'
- #elif duration < 1:
- # result = '--:--:--'
- else:
- result = time.strftime('%H:%M:%S', time.gmtime(duration))
- return result
-
- @property
- def ETA(self):
- if self.done:
- prefix = 'Done'
- t = self.elapsed
- #import pdb; pdb.set_trace()
- else:
- prefix = 'ETA '
- if self.max is None:
- t = -1
- elif self.elapsed == 0 or (self.cur == self.min):
- t = 0
- else:
- #import pdb; pdb.set_trace()
- t = float(self.max - self.min)
- t /= self.cur - self.min
- t = (t - 1) * self.elapsed
- return '%s: %s' % (prefix, self.format_duration(t))
-
- @property
- def speed(self):
- if self.elapsed == 0:
- result = 0.0
- else:
- result = (self.cur - self.min) / self.elapsed
- for unit in UNITS:
- if result < 1000:
- break
- result /= 1000.0
- return '%d %sB/s' % (result, unit)
-
-#
-# Glob functionality
-#
-
-RICH_GLOB = re.compile(r'\{([^}]*)\}')
-_CHECK_RECURSIVE_GLOB = re.compile(r'[^/\\,{]\*\*|\*\*[^/\\,}]')
-_CHECK_MISMATCH_SET = re.compile(r'^[^{]*\}|\{[^}]*$')
-
-
-def iglob(path_glob):
- """Extended globbing function that supports ** and {opt1,opt2,opt3}."""
- if _CHECK_RECURSIVE_GLOB.search(path_glob):
- msg = """invalid glob %r: recursive glob "**" must be used alone"""
- raise ValueError(msg % path_glob)
- if _CHECK_MISMATCH_SET.search(path_glob):
- msg = """invalid glob %r: mismatching set marker '{' or '}'"""
- raise ValueError(msg % path_glob)
- return _iglob(path_glob)
-
-
-def _iglob(path_glob):
- rich_path_glob = RICH_GLOB.split(path_glob, 1)
- if len(rich_path_glob) > 1:
- assert len(rich_path_glob) == 3, rich_path_glob
- prefix, set, suffix = rich_path_glob
- for item in set.split(','):
- for path in _iglob(''.join((prefix, item, suffix))):
- yield path
- else:
- if '**' not in path_glob:
- for item in std_iglob(path_glob):
- yield item
- else:
- prefix, radical = path_glob.split('**', 1)
- if prefix == '':
- prefix = '.'
- if radical == '':
- radical = '*'
- else:
- # we support both
- radical = radical.lstrip('/')
- radical = radical.lstrip('\\')
- for path, dir, files in os.walk(prefix):
- path = os.path.normpath(path)
- for fn in _iglob(os.path.join(path, radical)):
- yield fn
-
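# Hypothetical usage sketch (assumes the distlib package is installed): iglob() extends
# the stdlib glob with '**' recursion and '{a,b}' alternatives.
from distlib.util import iglob

for path in iglob('src/**/{*.py,*.cfg}'):
    print(path)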
-if ssl:
- from .compat import (HTTPSHandler as BaseHTTPSHandler, match_hostname,
- CertificateError)
-
-
-#
-# HTTPSConnection which verifies certificates/matches domains
-#
-
- class HTTPSConnection(httplib.HTTPSConnection):
- ca_certs = None # set this to the path to the certs file (.pem)
- check_domain = True # only used if ca_certs is not None
-
- # noinspection PyPropertyAccess
- def connect(self):
- sock = socket.create_connection((self.host, self.port), self.timeout)
- if getattr(self, '_tunnel_host', False):
- self.sock = sock
- self._tunnel()
-
- context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
- if hasattr(ssl, 'OP_NO_SSLv2'):
- context.options |= ssl.OP_NO_SSLv2
- if self.cert_file:
- context.load_cert_chain(self.cert_file, self.key_file)
- kwargs = {}
- if self.ca_certs:
- context.verify_mode = ssl.CERT_REQUIRED
- context.load_verify_locations(cafile=self.ca_certs)
- if getattr(ssl, 'HAS_SNI', False):
- kwargs['server_hostname'] = self.host
-
- self.sock = context.wrap_socket(sock, **kwargs)
- if self.ca_certs and self.check_domain:
- try:
- match_hostname(self.sock.getpeercert(), self.host)
- logger.debug('Host verified: %s', self.host)
- except CertificateError: # pragma: no cover
- self.sock.shutdown(socket.SHUT_RDWR)
- self.sock.close()
- raise
-
- class HTTPSHandler(BaseHTTPSHandler):
- def __init__(self, ca_certs, check_domain=True):
- BaseHTTPSHandler.__init__(self)
- self.ca_certs = ca_certs
- self.check_domain = check_domain
-
- def _conn_maker(self, *args, **kwargs):
- """
- This is called to create a connection instance. Normally you'd
- pass a connection class to do_open, but it doesn't actually check for
- a class, and just expects a callable. As long as we behave just as a
- constructor would have, we should be OK. If it ever changes so that
- we *must* pass a class, we'll create an UnsafeHTTPSConnection class
- which just sets check_domain to False in the class definition, and
- choose which one to pass to do_open.
- """
- result = HTTPSConnection(*args, **kwargs)
- if self.ca_certs:
- result.ca_certs = self.ca_certs
- result.check_domain = self.check_domain
- return result
-
- def https_open(self, req):
- try:
- return self.do_open(self._conn_maker, req)
- except URLError as e:
- if 'certificate verify failed' in str(e.reason):
- raise CertificateError('Unable to verify server certificate '
- 'for %s' % req.host)
- else:
- raise
-
- #
-    # To prevent mixing HTTP traffic with HTTPS (examples: a Man-In-The-Middle
-    # proxy using HTTP listens on port 443, or an index mistakenly serves HTML
-    # containing an http://xyz link when it should be https://xyz),
- # you can use the following handler class, which does not allow HTTP traffic.
- #
- # It works by inheriting from HTTPHandler - so build_opener won't add a
- # handler for HTTP itself.
- #
- class HTTPSOnlyHandler(HTTPSHandler, HTTPHandler):
- def http_open(self, req):
- raise URLError('Unexpected HTTP request on what should be a secure '
- 'connection: %s' % req)
-
-#
-# XML-RPC with timeouts
-#
-class Transport(xmlrpclib.Transport):
- def __init__(self, timeout, use_datetime=0):
- self.timeout = timeout
- xmlrpclib.Transport.__init__(self, use_datetime)
-
- def make_connection(self, host):
- h, eh, x509 = self.get_host_info(host)
- if not self._connection or host != self._connection[0]:
- self._extra_headers = eh
- self._connection = host, httplib.HTTPConnection(h)
- return self._connection[1]
-
-if ssl:
- class SafeTransport(xmlrpclib.SafeTransport):
- def __init__(self, timeout, use_datetime=0):
- self.timeout = timeout
- xmlrpclib.SafeTransport.__init__(self, use_datetime)
-
- def make_connection(self, host):
- h, eh, kwargs = self.get_host_info(host)
- if not kwargs:
- kwargs = {}
- kwargs['timeout'] = self.timeout
- if not self._connection or host != self._connection[0]:
- self._extra_headers = eh
- self._connection = host, httplib.HTTPSConnection(h, None,
- **kwargs)
- return self._connection[1]
-
-
-class ServerProxy(xmlrpclib.ServerProxy):
- def __init__(self, uri, **kwargs):
- self.timeout = timeout = kwargs.pop('timeout', None)
- # The above classes only come into play if a timeout
- # is specified
- if timeout is not None:
- # scheme = splittype(uri) # deprecated as of Python 3.8
- scheme = urlparse(uri)[0]
- use_datetime = kwargs.get('use_datetime', 0)
- if scheme == 'https':
- tcls = SafeTransport
- else:
- tcls = Transport
- kwargs['transport'] = t = tcls(timeout, use_datetime=use_datetime)
- self.transport = t
- xmlrpclib.ServerProxy.__init__(self, uri, **kwargs)
-
-#
-# CSV functionality. This is provided because on 2.x, the csv module can't
-# handle Unicode. However, we need to deal with Unicode in e.g. RECORD files.
-#
-
-def _csv_open(fn, mode, **kwargs):
- if sys.version_info[0] < 3:
- mode += 'b'
- else:
- kwargs['newline'] = ''
- # Python 3 determines encoding from locale. Force 'utf-8'
- # file encoding to match other forced utf-8 encoding
- kwargs['encoding'] = 'utf-8'
- return open(fn, mode, **kwargs)
-
-
-class CSVBase(object):
- defaults = {
- 'delimiter': str(','), # The strs are used because we need native
- 'quotechar': str('"'), # str in the csv API (2.x won't take
- 'lineterminator': str('\n') # Unicode)
- }
-
- def __enter__(self):
- return self
-
- def __exit__(self, *exc_info):
- self.stream.close()
-
-
-class CSVReader(CSVBase):
- def __init__(self, **kwargs):
- if 'stream' in kwargs:
- stream = kwargs['stream']
- if sys.version_info[0] >= 3:
- # needs to be a text stream
- stream = codecs.getreader('utf-8')(stream)
- self.stream = stream
- else:
- self.stream = _csv_open(kwargs['path'], 'r')
- self.reader = csv.reader(self.stream, **self.defaults)
-
- def __iter__(self):
- return self
-
- def next(self):
- result = next(self.reader)
- if sys.version_info[0] < 3:
- for i, item in enumerate(result):
- if not isinstance(item, text_type):
- result[i] = item.decode('utf-8')
- return result
-
- __next__ = next
-
-class CSVWriter(CSVBase):
- def __init__(self, fn, **kwargs):
- self.stream = _csv_open(fn, 'w')
- self.writer = csv.writer(self.stream, **self.defaults)
-
- def writerow(self, row):
- if sys.version_info[0] < 3:
- r = []
- for item in row:
- if isinstance(item, text_type):
- item = item.encode('utf-8')
- r.append(item)
- row = r
- self.writer.writerow(row)
-
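An illustrative round trip with the CSV wrappers above (the file name is arbitrary): on Python 2 the rows are transparently encoded/decoded as UTF-8, on Python 3 the file is opened in text mode with `newline=''` and forced UTF-8 encoding.

```python
rows = [['distlib/util.py', 'sha256=0123abcd', '1024']]

with CSVWriter('RECORD') as writer:
    for row in rows:
        writer.writerow(row)

with CSVReader(path='RECORD') as reader:
    for row in reader:
        print(row)   # ['distlib/util.py', 'sha256=0123abcd', '1024']
```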
-#
-# Configurator functionality
-#
-
-class Configurator(BaseConfigurator):
-
- value_converters = dict(BaseConfigurator.value_converters)
- value_converters['inc'] = 'inc_convert'
-
- def __init__(self, config, base=None):
- super(Configurator, self).__init__(config)
- self.base = base or os.getcwd()
-
- def configure_custom(self, config):
- def convert(o):
- if isinstance(o, (list, tuple)):
- result = type(o)([convert(i) for i in o])
- elif isinstance(o, dict):
- if '()' in o:
- result = self.configure_custom(o)
- else:
- result = {}
- for k in o:
- result[k] = convert(o[k])
- else:
- result = self.convert(o)
- return result
-
- c = config.pop('()')
- if not callable(c):
- c = self.resolve(c)
- props = config.pop('.', None)
- # Check for valid identifiers
- args = config.pop('[]', ())
- if args:
- args = tuple([convert(o) for o in args])
- items = [(k, convert(config[k])) for k in config if valid_ident(k)]
- kwargs = dict(items)
- result = c(*args, **kwargs)
- if props:
- for n, v in props.items():
- setattr(result, n, convert(v))
- return result
-
- def __getitem__(self, key):
- result = self.config[key]
- if isinstance(result, dict) and '()' in result:
- self.config[key] = result = self.configure_custom(result)
- return result
-
- def inc_convert(self, value):
- """Default converter for the inc:// protocol."""
- if not os.path.isabs(value):
- value = os.path.join(self.base, value)
- with codecs.open(value, 'r', encoding='utf-8') as f:
- result = json.load(f)
- return result
-
-
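A hedged illustration of the `inc://` converter registered above; the file names and base directory are made up. A string value of the form `'inc://defaults.json'` that passes through the value converters is replaced by the parsed contents of that JSON file, resolved relative to `base`.

```python
cfg = Configurator({'name': 'example'}, base='/etc/myapp')

# What the 'inc' protocol does under the hood: resolve the path against
# `base` and load the referenced JSON document in place of the string.
defaults = cfg.inc_convert('defaults.json')   # reads /etc/myapp/defaults.json
```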
-class SubprocessMixin(object):
- """
- Mixin for running subprocesses and capturing their output
- """
- def __init__(self, verbose=False, progress=None):
- self.verbose = verbose
- self.progress = progress
-
- def reader(self, stream, context):
- """
- Read lines from a subprocess' output stream and either pass to a progress
- callable (if specified) or write progress information to sys.stderr.
- """
- progress = self.progress
- verbose = self.verbose
- while True:
- s = stream.readline()
- if not s:
- break
- if progress is not None:
- progress(s, context)
- else:
- if not verbose:
- sys.stderr.write('.')
- else:
- sys.stderr.write(s.decode('utf-8'))
- sys.stderr.flush()
- stream.close()
-
- def run_command(self, cmd, **kwargs):
- p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, **kwargs)
- t1 = threading.Thread(target=self.reader, args=(p.stdout, 'stdout'))
- t1.start()
- t2 = threading.Thread(target=self.reader, args=(p.stderr, 'stderr'))
- t2.start()
- p.wait()
- t1.join()
- t2.join()
- if self.progress is not None:
- self.progress('done.', 'main')
- elif self.verbose:
- sys.stderr.write('done.\n')
- return p
-
-
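A small usage sketch of SubprocessMixin (the command is arbitrary): subclass it, call `run_command`, and either watch progress dots on stderr or supply a progress callable.

```python
class Builder(SubprocessMixin):
    """Anything that needs to shell out can mix this in."""

def report(line, context):
    # run_command streams bytes from the child; the final 'done.' notification is a str
    text = line.decode('utf-8') if isinstance(line, bytes) else line
    print(context, text.rstrip())

b = Builder(progress=report)
p = b.run_command(['python', '--version'])
print('exit code:', p.returncode)
```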
-def normalize_name(name):
- """Normalize a python package name a la PEP 503"""
- # https://www.python.org/dev/peps/pep-0503/#normalized-names
- return re.sub('[-_.]+', '-', name).lower()
-
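A few worked examples of the PEP 503 rule implemented above: runs of `-`, `_` and `.` collapse to a single dash and the result is lower-cased.

```python
print(normalize_name('Django'))          # django
print(normalize_name('zope.interface'))  # zope-interface
print(normalize_name('ruamel_yaml'))     # ruamel-yaml
print(normalize_name('A__B--C..D'))      # a-b-c-d
```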
-# def _get_pypirc_command():
- # """
- # Get the distutils command for interacting with PyPI configurations.
- # :return: the command.
- # """
- # from distutils.core import Distribution
- # from distutils.config import PyPIRCCommand
- # d = Distribution()
- # return PyPIRCCommand(d)
-
-class PyPIRCFile(object):
-
- DEFAULT_REPOSITORY = 'https://upload.pypi.org/legacy/'
- DEFAULT_REALM = 'pypi'
-
- def __init__(self, fn=None, url=None):
- if fn is None:
- fn = os.path.join(os.path.expanduser('~'), '.pypirc')
- self.filename = fn
- self.url = url
-
- def read(self):
- result = {}
-
- if os.path.exists(self.filename):
- repository = self.url or self.DEFAULT_REPOSITORY
-
- config = configparser.RawConfigParser()
- config.read(self.filename)
- sections = config.sections()
- if 'distutils' in sections:
- # let's get the list of servers
- index_servers = config.get('distutils', 'index-servers')
- _servers = [server.strip() for server in
- index_servers.split('\n')
- if server.strip() != '']
- if _servers == []:
- # nothing set, let's try to get the default pypi
- if 'pypi' in sections:
- _servers = ['pypi']
- else:
- for server in _servers:
- result = {'server': server}
- result['username'] = config.get(server, 'username')
-
- # optional params
- for key, default in (('repository', self.DEFAULT_REPOSITORY),
- ('realm', self.DEFAULT_REALM),
- ('password', None)):
- if config.has_option(server, key):
- result[key] = config.get(server, key)
- else:
- result[key] = default
-
- # work around people having "repository" for the "pypi"
- # section of their config set to the HTTP (rather than
- # HTTPS) URL
- if (server == 'pypi' and
- repository in (self.DEFAULT_REPOSITORY, 'pypi')):
- result['repository'] = self.DEFAULT_REPOSITORY
- elif (result['server'] != repository and
- result['repository'] != repository):
- result = {}
- elif 'server-login' in sections:
- # old format
- server = 'server-login'
- if config.has_option(server, 'repository'):
- repository = config.get(server, 'repository')
- else:
- repository = self.DEFAULT_REPOSITORY
- result = {
- 'username': config.get(server, 'username'),
- 'password': config.get(server, 'password'),
- 'repository': repository,
- 'server': server,
- 'realm': self.DEFAULT_REALM
- }
- return result
-
- def update(self, username, password):
- # import pdb; pdb.set_trace()
- config = configparser.RawConfigParser()
- fn = self.filename
- config.read(fn)
- if not config.has_section('pypi'):
- config.add_section('pypi')
- config.set('pypi', 'username', username)
- config.set('pypi', 'password', password)
- with open(fn, 'w') as f:
- config.write(f)
-
-def _load_pypirc(index):
- """
- Read the PyPI access configuration as supported by distutils.
- """
- return PyPIRCFile(url=index.url).read()
-
-def _store_pypirc(index):
- PyPIRCFile().update(index.username, index.password)
-
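A hedged sketch of how PyPIRCFile is used by the two helpers above; the credentials are placeholders, and `index` in the helpers is assumed to be any object exposing `url`, `username` and `password` attributes.

```python
pf = PyPIRCFile()          # defaults to ~/.pypirc
cfg = pf.read()            # e.g. {'server': 'pypi', 'username': ..., 'repository': ...}

# Persist credentials back; this (re)writes a [pypi] section in ~/.pypirc.
pf.update('example-user', 'example-password')
```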
-#
-# get_platform()/get_host_platform() copied from Python 3.10.a0 source, with some minor
-# tweaks
-#
-
-def get_host_platform():
- """Return a string that identifies the current platform. This is used mainly to
- distinguish platform-specific build directories and platform-specific built
- distributions. Typically includes the OS name and version and the
- architecture (as supplied by 'os.uname()'), although the exact information
- included depends on the OS; eg. on Linux, the kernel version isn't
- particularly important.
-
- Examples of returned values:
- linux-i586
- linux-alpha (?)
- solaris-2.6-sun4u
-
- Windows will return one of:
- win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc)
- win32 (all others - specifically, sys.platform is returned)
-
- For other non-POSIX platforms, currently just returns 'sys.platform'.
-
- """
- if os.name == 'nt':
- if 'amd64' in sys.version.lower():
- return 'win-amd64'
- if '(arm)' in sys.version.lower():
- return 'win-arm32'
- if '(arm64)' in sys.version.lower():
- return 'win-arm64'
- return sys.platform
-
- # Set for cross builds explicitly
- if "_PYTHON_HOST_PLATFORM" in os.environ:
- return os.environ["_PYTHON_HOST_PLATFORM"]
-
- if os.name != 'posix' or not hasattr(os, 'uname'):
- # XXX what about the architecture? NT is Intel or Alpha,
- # Mac OS is M68k or PPC, etc.
- return sys.platform
-
- # Try to distinguish various flavours of Unix
-
- (osname, host, release, version, machine) = os.uname()
-
- # Convert the OS name to lowercase, remove '/' characters, and translate
- # spaces (for "Power Macintosh")
- osname = osname.lower().replace('/', '')
- machine = machine.replace(' ', '_').replace('/', '-')
-
- if osname[:5] == 'linux':
- # At least on Linux/Intel, 'machine' is the processor --
- # i386, etc.
- # XXX what about Alpha, SPARC, etc?
- return "%s-%s" % (osname, machine)
-
- elif osname[:5] == 'sunos':
- if release[0] >= '5': # SunOS 5 == Solaris 2
- osname = 'solaris'
- release = '%d.%s' % (int(release[0]) - 3, release[2:])
-        # We can't use 'platform.architecture()[0]' because of a
-        # bootstrap problem. We use a dict to get an error
-        # if something suspicious happens.
- bitness = {2147483647:'32bit', 9223372036854775807:'64bit'}
- machine += '.%s' % bitness[sys.maxsize]
- # fall through to standard osname-release-machine representation
- elif osname[:3] == 'aix':
- from _aix_support import aix_platform
- return aix_platform()
- elif osname[:6] == 'cygwin':
- osname = 'cygwin'
- rel_re = re.compile (r'[\d.]+', re.ASCII)
- m = rel_re.match(release)
- if m:
- release = m.group()
- elif osname[:6] == 'darwin':
- import _osx_support, distutils.sysconfig
- osname, release, machine = _osx_support.get_platform_osx(
- distutils.sysconfig.get_config_vars(),
- osname, release, machine)
-
- return '%s-%s-%s' % (osname, release, machine)
-
-
-_TARGET_TO_PLAT = {
- 'x86' : 'win32',
- 'x64' : 'win-amd64',
- 'arm' : 'win-arm32',
-}
-
-
-def get_platform():
- if os.name != 'nt':
- return get_host_platform()
- cross_compilation_target = os.environ.get('VSCMD_ARG_TGT_ARCH')
- if cross_compilation_target not in _TARGET_TO_PLAT:
- return get_host_platform()
- return _TARGET_TO_PLAT[cross_compilation_target]
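Illustrative outputs of the two helpers above; the exact values depend on the interpreter and machine, so treat these as examples only.

```python
print(get_host_platform())   # e.g. 'linux-x86_64', 'win-amd64' or 'macosx-10.9-x86_64'
print(get_platform())        # same, except during a Windows cross build, where
                             # VSCMD_ARG_TGT_ARCH=x86 yields 'win32', x64 'win-amd64',
                             # and arm 'win-arm32'
```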
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/live.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/live.py
deleted file mode 100644
index 3ebbbc4ccbe47043eb62f8dd770f079745d3b743..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/live.py
+++ /dev/null
@@ -1,375 +0,0 @@
-import sys
-from threading import Event, RLock, Thread
-from types import TracebackType
-from typing import IO, Any, Callable, List, Optional, TextIO, Type, cast
-
-from . import get_console
-from .console import Console, ConsoleRenderable, RenderableType, RenderHook
-from .control import Control
-from .file_proxy import FileProxy
-from .jupyter import JupyterMixin
-from .live_render import LiveRender, VerticalOverflowMethod
-from .screen import Screen
-from .text import Text
-
-
-class _RefreshThread(Thread):
- """A thread that calls refresh() at regular intervals."""
-
- def __init__(self, live: "Live", refresh_per_second: float) -> None:
- self.live = live
- self.refresh_per_second = refresh_per_second
- self.done = Event()
- super().__init__(daemon=True)
-
- def stop(self) -> None:
- self.done.set()
-
- def run(self) -> None:
- while not self.done.wait(1 / self.refresh_per_second):
- with self.live._lock:
- if not self.done.is_set():
- self.live.refresh()
-
-
-class Live(JupyterMixin, RenderHook):
- """Renders an auto-updating live display of any given renderable.
-
- Args:
- renderable (RenderableType, optional): The renderable to live display. Defaults to displaying nothing.
-        console (Console, optional): Optional Console instance. Defaults to an internal Console instance writing to stdout.
-        screen (bool, optional): Enable alternate screen mode. Defaults to False.
-        auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()` or `update()` with the refresh flag. Defaults to True.
- refresh_per_second (float, optional): Number of times per second to refresh the live display. Defaults to 4.
- transient (bool, optional): Clear the renderable on exit (has no effect when screen=True). Defaults to False.
- redirect_stdout (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True.
- redirect_stderr (bool, optional): Enable redirection of stderr. Defaults to True.
- vertical_overflow (VerticalOverflowMethod, optional): How to handle renderable when it is too tall for the console. Defaults to "ellipsis".
- get_renderable (Callable[[], RenderableType], optional): Optional callable to get renderable. Defaults to None.
- """
-
- def __init__(
- self,
- renderable: Optional[RenderableType] = None,
- *,
- console: Optional[Console] = None,
- screen: bool = False,
- auto_refresh: bool = True,
- refresh_per_second: float = 4,
- transient: bool = False,
- redirect_stdout: bool = True,
- redirect_stderr: bool = True,
- vertical_overflow: VerticalOverflowMethod = "ellipsis",
- get_renderable: Optional[Callable[[], RenderableType]] = None,
- ) -> None:
- assert refresh_per_second > 0, "refresh_per_second must be > 0"
- self._renderable = renderable
- self.console = console if console is not None else get_console()
- self._screen = screen
- self._alt_screen = False
-
- self._redirect_stdout = redirect_stdout
- self._redirect_stderr = redirect_stderr
- self._restore_stdout: Optional[IO[str]] = None
- self._restore_stderr: Optional[IO[str]] = None
-
- self._lock = RLock()
- self.ipy_widget: Optional[Any] = None
- self.auto_refresh = auto_refresh
- self._started: bool = False
- self.transient = True if screen else transient
-
- self._refresh_thread: Optional[_RefreshThread] = None
- self.refresh_per_second = refresh_per_second
-
- self.vertical_overflow = vertical_overflow
- self._get_renderable = get_renderable
- self._live_render = LiveRender(
- self.get_renderable(), vertical_overflow=vertical_overflow
- )
-
- @property
- def is_started(self) -> bool:
- """Check if live display has been started."""
- return self._started
-
- def get_renderable(self) -> RenderableType:
- renderable = (
- self._get_renderable()
- if self._get_renderable is not None
- else self._renderable
- )
- return renderable or ""
-
- def start(self, refresh: bool = False) -> None:
- """Start live rendering display.
-
- Args:
- refresh (bool, optional): Also refresh. Defaults to False.
- """
- with self._lock:
- if self._started:
- return
- self.console.set_live(self)
- self._started = True
- if self._screen:
- self._alt_screen = self.console.set_alt_screen(True)
- self.console.show_cursor(False)
- self._enable_redirect_io()
- self.console.push_render_hook(self)
- if refresh:
- try:
- self.refresh()
- except Exception:
- # If refresh fails, we want to stop the redirection of sys.stderr,
- # so the error stacktrace is properly displayed in the terminal.
- # (or, if the code that calls Rich captures the exception and wants to display something,
- # let this be displayed in the terminal).
- self.stop()
- raise
- if self.auto_refresh:
- self._refresh_thread = _RefreshThread(self, self.refresh_per_second)
- self._refresh_thread.start()
-
- def stop(self) -> None:
- """Stop live rendering display."""
- with self._lock:
- if not self._started:
- return
- self.console.clear_live()
- self._started = False
-
- if self.auto_refresh and self._refresh_thread is not None:
- self._refresh_thread.stop()
- self._refresh_thread = None
-            # allow it to fully render on the last refresh even if it overflows
- self.vertical_overflow = "visible"
- with self.console:
- try:
- if not self._alt_screen and not self.console.is_jupyter:
- self.refresh()
- finally:
- self._disable_redirect_io()
- self.console.pop_render_hook()
- if not self._alt_screen and self.console.is_terminal:
- self.console.line()
- self.console.show_cursor(True)
- if self._alt_screen:
- self.console.set_alt_screen(False)
-
- if self.transient and not self._alt_screen:
- self.console.control(self._live_render.restore_cursor())
- if self.ipy_widget is not None and self.transient:
- self.ipy_widget.close() # pragma: no cover
-
- def __enter__(self) -> "Live":
- self.start(refresh=self._renderable is not None)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self.stop()
-
- def _enable_redirect_io(self) -> None:
- """Enable redirecting of stdout / stderr."""
- if self.console.is_terminal or self.console.is_jupyter:
- if self._redirect_stdout and not isinstance(sys.stdout, FileProxy):
- self._restore_stdout = sys.stdout
- sys.stdout = cast("TextIO", FileProxy(self.console, sys.stdout))
- if self._redirect_stderr and not isinstance(sys.stderr, FileProxy):
- self._restore_stderr = sys.stderr
- sys.stderr = cast("TextIO", FileProxy(self.console, sys.stderr))
-
- def _disable_redirect_io(self) -> None:
- """Disable redirecting of stdout / stderr."""
- if self._restore_stdout:
- sys.stdout = cast("TextIO", self._restore_stdout)
- self._restore_stdout = None
- if self._restore_stderr:
- sys.stderr = cast("TextIO", self._restore_stderr)
- self._restore_stderr = None
-
- @property
- def renderable(self) -> RenderableType:
- """Get the renderable that is being displayed
-
- Returns:
- RenderableType: Displayed renderable.
- """
- renderable = self.get_renderable()
- return Screen(renderable) if self._alt_screen else renderable
-
- def update(self, renderable: RenderableType, *, refresh: bool = False) -> None:
- """Update the renderable that is being displayed
-
- Args:
- renderable (RenderableType): New renderable to use.
- refresh (bool, optional): Refresh the display. Defaults to False.
- """
- if isinstance(renderable, str):
- renderable = self.console.render_str(renderable)
- with self._lock:
- self._renderable = renderable
- if refresh:
- self.refresh()
-
- def refresh(self) -> None:
- """Update the display of the Live Render."""
- with self._lock:
- self._live_render.set_renderable(self.renderable)
- if self.console.is_jupyter: # pragma: no cover
- try:
- from IPython.display import display
- from ipywidgets import Output
- except ImportError:
- import warnings
-
- warnings.warn('install "ipywidgets" for Jupyter support')
- else:
- if self.ipy_widget is None:
- self.ipy_widget = Output()
- display(self.ipy_widget)
-
- with self.ipy_widget:
- self.ipy_widget.clear_output(wait=True)
- self.console.print(self._live_render.renderable)
- elif self.console.is_terminal and not self.console.is_dumb_terminal:
- with self.console:
- self.console.print(Control())
- elif (
- not self._started and not self.transient
- ): # if it is finished allow files or dumb-terminals to see final result
- with self.console:
- self.console.print(Control())
-
- def process_renderables(
- self, renderables: List[ConsoleRenderable]
- ) -> List[ConsoleRenderable]:
- """Process renderables to restore cursor and display progress."""
- self._live_render.vertical_overflow = self.vertical_overflow
- if self.console.is_interactive:
- # lock needs acquiring as user can modify live_render renderable at any time unlike in Progress.
- with self._lock:
- reset = (
- Control.home()
- if self._alt_screen
- else self._live_render.position_cursor()
- )
- renderables = [reset, *renderables, self._live_render]
- elif (
- not self._started and not self.transient
- ): # if it is finished render the final output for files or dumb_terminals
- renderables = [*renderables, self._live_render]
-
- return renderables
-
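A minimal, hedged sketch of driving Live as a context manager (the table contents and sleep interval are arbitrary): the auto-refresh thread repaints at `refresh_per_second`, and `update()` swaps the displayed renderable.

```python
import time

from rich.live import Live
from rich.table import Table

def make_table(step: int) -> Table:
    table = Table("step", "status")
    for i in range(step + 1):
        table.add_row(str(i), "done")
    return table

with Live(make_table(0), refresh_per_second=4) as live:
    for step in range(1, 5):
        time.sleep(0.5)
        live.update(make_table(step))   # picked up by the next auto refresh
```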
-
-if __name__ == "__main__": # pragma: no cover
- import random
- import time
- from itertools import cycle
- from typing import Dict, List, Tuple
-
- from .align import Align
- from .console import Console
- from .live import Live as Live
- from .panel import Panel
- from .rule import Rule
- from .syntax import Syntax
- from .table import Table
-
- console = Console()
-
- syntax = Syntax(
- '''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
- """Iterate and generate a tuple with a flag for last value."""
- iter_values = iter(values)
- try:
- previous_value = next(iter_values)
- except StopIteration:
- return
- for value in iter_values:
- yield False, previous_value
- previous_value = value
- yield True, previous_value''',
- "python",
- line_numbers=True,
- )
-
- table = Table("foo", "bar", "baz")
- table.add_row("1", "2", "3")
-
- progress_renderables = [
- "You can make the terminal shorter and taller to see the live table hide"
-        "You can make the terminal shorter and taller to see the live table hide.",
- Panel("In fact, [i]any[/i] renderable will work"),
- "Such as [magenta]tables[/]...",
- table,
- "Pretty printed structures...",
- {"type": "example", "text": "Pretty printed"},
- "Syntax...",
- syntax,
- Rule("Give it a try!"),
- ]
-
- examples = cycle(progress_renderables)
-
- exchanges = [
- "SGD",
- "MYR",
- "EUR",
- "USD",
- "AUD",
- "JPY",
- "CNH",
- "HKD",
- "CAD",
- "INR",
- "DKK",
- "GBP",
- "RUB",
- "NZD",
- "MXN",
- "IDR",
- "TWD",
- "THB",
- "VND",
- ]
- with Live(console=console) as live_table:
- exchange_rate_dict: Dict[Tuple[str, str], float] = {}
-
- for index in range(100):
- select_exchange = exchanges[index % len(exchanges)]
-
- for exchange in exchanges:
- if exchange == select_exchange:
- continue
- time.sleep(0.4)
- if random.randint(0, 10) < 1:
- console.log(next(examples))
- exchange_rate_dict[(select_exchange, exchange)] = 200 / (
- (random.random() * 320) + 1
- )
- if len(exchange_rate_dict) > len(exchanges) - 1:
- exchange_rate_dict.pop(list(exchange_rate_dict.keys())[0])
- table = Table(title="Exchange Rates")
-
- table.add_column("Source Currency")
- table.add_column("Destination Currency")
- table.add_column("Exchange Rate")
-
- for ((source, dest), exchange_rate) in exchange_rate_dict.items():
- table.add_row(
- source,
- dest,
- Text(
- f"{exchange_rate:.4f}",
- style="red" if exchange_rate < 1.0 else "green",
- ),
- )
-
- live_table.update(Align.center(table))
diff --git a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.py b/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.py
deleted file mode 100644
index 7bc5a1e331c2bbb1893ac748cfd0f144ff0651b4..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import os
-
-import torch
-from torch.autograd import Function
-from torch.nn import functional as F
-from torch.utils.cpp_extension import load
-
-module_path = os.path.dirname(__file__)
-upfirdn2d_op = load(
- 'upfirdn2d',
- sources=[
- os.path.join(module_path, 'upfirdn2d.cpp'),
- os.path.join(module_path, 'upfirdn2d_kernel.cu'),
- ],
-)
-
-
-class UpFirDn2dBackward(Function):
- @staticmethod
- def forward(
- ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size
- ):
- up_x, up_y = up
- down_x, down_y = down
- g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
-
- grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
-
- grad_input = upfirdn2d_op.upfirdn2d(
- grad_output,
- grad_kernel,
- down_x,
- down_y,
- up_x,
- up_y,
- g_pad_x0,
- g_pad_x1,
- g_pad_y0,
- g_pad_y1,
- )
- grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
-
- ctx.save_for_backward(kernel)
-
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- ctx.up_x = up_x
- ctx.up_y = up_y
- ctx.down_x = down_x
- ctx.down_y = down_y
- ctx.pad_x0 = pad_x0
- ctx.pad_x1 = pad_x1
- ctx.pad_y0 = pad_y0
- ctx.pad_y1 = pad_y1
- ctx.in_size = in_size
- ctx.out_size = out_size
-
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_input):
- kernel, = ctx.saved_tensors
-
- gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
-
- gradgrad_out = upfirdn2d_op.upfirdn2d(
- gradgrad_input,
- kernel,
- ctx.up_x,
- ctx.up_y,
- ctx.down_x,
- ctx.down_y,
- ctx.pad_x0,
- ctx.pad_x1,
- ctx.pad_y0,
- ctx.pad_y1,
- )
- # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3])
- gradgrad_out = gradgrad_out.view(
- ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]
- )
-
- return gradgrad_out, None, None, None, None, None, None, None, None
-
-
-class UpFirDn2d(Function):
- @staticmethod
- def forward(ctx, input, kernel, up, down, pad):
- up_x, up_y = up
- down_x, down_y = down
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- kernel_h, kernel_w = kernel.shape
- batch, channel, in_h, in_w = input.shape
- ctx.in_size = input.shape
-
- input = input.reshape(-1, in_h, in_w, 1)
-
- ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
- ctx.out_size = (out_h, out_w)
-
- ctx.up = (up_x, up_y)
- ctx.down = (down_x, down_y)
- ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
-
- g_pad_x0 = kernel_w - pad_x0 - 1
- g_pad_y0 = kernel_h - pad_y0 - 1
- g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
- g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
-
- ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
-
- out = upfirdn2d_op.upfirdn2d(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
- )
- # out = out.view(major, out_h, out_w, minor)
- out = out.view(-1, channel, out_h, out_w)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, grad_kernel = ctx.saved_tensors
-
- grad_input = UpFirDn2dBackward.apply(
- grad_output,
- kernel,
- grad_kernel,
- ctx.up,
- ctx.down,
- ctx.pad,
- ctx.g_pad,
- ctx.in_size,
- ctx.out_size,
- )
-
- return grad_input, None, None, None, None
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- out = UpFirDn2d.apply(
- input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])
- )
-
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
-
- return out[:, ::down_y, ::down_x, :]
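A hedged usage sketch of the CUDA-backed `upfirdn2d` wrapper above (it needs the compiled extension and a CUDA device); the separable binomial kernel and the padding that yields an exact 2x upsample are illustrative choices.

```python
import torch

k = torch.tensor([1.0, 3.0, 3.0, 1.0])
kernel = k[None, :] * k[:, None]
kernel = kernel / kernel.sum()          # 4x4 low-pass FIR filter

x = torch.randn(1, 3, 64, 64, device='cuda')
# Insert zeros (up=2), pad, filter, no extra downsampling:
# out_h = (64*2 + 2 + 1 - 4) // 1 + 1 = 128
y = upfirdn2d(x, kernel.to('cuda'), up=2, down=1, pad=(2, 1))
print(y.shape)                          # torch.Size([1, 3, 128, 128])
```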
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/c10.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/c10.py
deleted file mode 100644
index 25ee23009547913733dc528fb8a39ca995fd9e31..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/c10.py
+++ /dev/null
@@ -1,534 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import math
-import torch
-import torch.nn.functional as F
-
-from detectron2.layers import cat
-from detectron2.layers.roi_align_rotated import ROIAlignRotated
-from detectron2.modeling import poolers
-from detectron2.modeling.proposal_generator import rpn
-from detectron2.modeling.roi_heads.mask_head import mask_rcnn_inference
-from detectron2.structures import Boxes, ImageList, Instances, Keypoints
-
-from .shared import alias, to_device
-
-
-"""
-This file contains caffe2-compatible implementation of several detectron2 components.
-"""
-
-
-class Caffe2Boxes(Boxes):
- """
- Representing a list of detectron2.structures.Boxes from minibatch, each box
- is represented by a 5d vector (batch index + 4 coordinates), or a 6d vector
- (batch index + 5 coordinates) for RotatedBoxes.
- """
-
- def __init__(self, tensor):
- assert isinstance(tensor, torch.Tensor)
- assert tensor.dim() == 2 and tensor.size(-1) in [4, 5, 6], tensor.size()
- # TODO: make tensor immutable when dim is Nx5 for Boxes,
- # and Nx6 for RotatedBoxes?
- self.tensor = tensor
-
-
-# TODO clean up this class, maybe just extend Instances
-class InstancesList(object):
- """
- Tensor representation of a list of Instances object for a batch of images.
-
- When dealing with a batch of images with Caffe2 ops, a list of bboxes
- (instances) are usually represented by single Tensor with size
- (sigma(Ni), 5) or (sigma(Ni), 4) plus a batch split Tensor. This class is
- for providing common functions to convert between these two representations.
- """
-
- def __init__(self, im_info, indices, extra_fields=None):
- # [N, 3] -> (H, W, Scale)
- self.im_info = im_info
- # [N,] -> indice of batch to which the instance belongs
-        # [N,] -> index of the image in the batch to which the instance belongs
- # [N, ...]
- self.batch_extra_fields = extra_fields or {}
-
- self.image_size = self.im_info
-
- def get_fields(self):
- """like `get_fields` in the Instances object,
- but return each field in tensor representations"""
- ret = {}
- for k, v in self.batch_extra_fields.items():
- # if isinstance(v, torch.Tensor):
- # tensor_rep = v
- # elif isinstance(v, (Boxes, Keypoints)):
- # tensor_rep = v.tensor
- # else:
- # raise ValueError("Can't find tensor representation for: {}".format())
- ret[k] = v
- return ret
-
- def has(self, name):
- return name in self.batch_extra_fields
-
- def set(self, name, value):
- data_len = len(value)
- if len(self.batch_extra_fields):
- assert (
- len(self) == data_len
- ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self))
- self.batch_extra_fields[name] = value
-
- def __setattr__(self, name, val):
- if name in ["im_info", "indices", "batch_extra_fields", "image_size"]:
- super().__setattr__(name, val)
- else:
- self.set(name, val)
-
- def __getattr__(self, name):
- if name not in self.batch_extra_fields:
- raise AttributeError("Cannot find field '{}' in the given Instances!".format(name))
- return self.batch_extra_fields[name]
-
- def __len__(self):
- return len(self.indices)
-
- def flatten(self):
- ret = []
- for _, v in self.batch_extra_fields.items():
- if isinstance(v, (Boxes, Keypoints)):
- ret.append(v.tensor)
- else:
- ret.append(v)
- return ret
-
- @staticmethod
- def to_d2_instances_list(instances_list):
- """
- Convert InstancesList to List[Instances]. The input `instances_list` can
- also be a List[Instances], in this case this method is a non-op.
- """
- if not isinstance(instances_list, InstancesList):
- assert all(isinstance(x, Instances) for x in instances_list)
- return instances_list
-
- ret = []
- for i, info in enumerate(instances_list.im_info):
- instances = Instances(torch.Size([int(info[0].item()), int(info[1].item())]))
-
- ids = instances_list.indices == i
- for k, v in instances_list.batch_extra_fields.items():
- if isinstance(v, torch.Tensor):
- instances.set(k, v[ids])
- continue
- elif isinstance(v, Boxes):
- instances.set(k, v[ids, -4:])
- continue
-
- target_type, tensor_source = v
- assert isinstance(tensor_source, torch.Tensor)
- assert tensor_source.shape[0] == instances_list.indices.shape[0]
- tensor_source = tensor_source[ids]
-
- if issubclass(target_type, Boxes):
- instances.set(k, Boxes(tensor_source[:, -4:]))
- elif issubclass(target_type, Keypoints):
- instances.set(k, Keypoints(tensor_source))
- elif issubclass(target_type, torch.Tensor):
- instances.set(k, tensor_source)
- else:
-                    raise ValueError("Can't handle target type: {}".format(target_type))
-
- ret.append(instances)
- return ret
-
-
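A hedged illustration of the flattened layout InstancesList wraps (all numbers are made up): boxes for the whole minibatch live in a single `(sum(Ni), 1+4)` tensor whose first column is the image index, together with one `im_info` row per image; `to_d2_instances_list` splits that back into ordinary per-image Instances.

```python
import torch

im_info = torch.tensor([[480.0, 640.0, 1.0],    # (H, W, scale) for image 0
                        [480.0, 640.0, 1.0]])   # and image 1
indices = torch.tensor([0.0, 0.0, 1.0])          # which image each row belongs to
boxes = torch.rand(3, 4) * 100                   # x1, y1, x2, y2 for three boxes

flat = torch.cat([indices[:, None], boxes], dim=1)   # (3, 5): batch idx + coords
lst = InstancesList(im_info, indices, {"proposal_boxes": Caffe2Boxes(flat)})

per_image = InstancesList.to_d2_instances_list(lst)
print(len(lst), [len(x) for x in per_image])          # 3 [2, 1]
```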
-class Caffe2Compatible(object):
- """
- A model can inherit this class to indicate that it can be traced and deployed with caffe2.
- """
-
- def _get_tensor_mode(self):
- return self._tensor_mode
-
- def _set_tensor_mode(self, v):
- self._tensor_mode = v
-
- tensor_mode = property(_get_tensor_mode, _set_tensor_mode)
- """
- If true, the model expects C2-style tensor only inputs/outputs format.
- """
-
-
-class Caffe2RPN(Caffe2Compatible, rpn.RPN):
- def _generate_proposals(
- self, images, objectness_logits_pred, anchor_deltas_pred, gt_instances=None
- ):
- assert isinstance(images, ImageList)
- if self.tensor_mode:
- im_info = images.image_sizes
- else:
- im_info = torch.tensor([[im_sz[0], im_sz[1], 1.0] for im_sz in images.image_sizes]).to(
- images.tensor.device
- )
- assert isinstance(im_info, torch.Tensor)
-
- rpn_rois_list = []
- rpn_roi_probs_list = []
- for scores, bbox_deltas, cell_anchors_tensor, feat_stride in zip(
- objectness_logits_pred,
- anchor_deltas_pred,
- iter(self.anchor_generator.cell_anchors),
- self.anchor_generator.strides,
- ):
- scores = scores.detach()
- bbox_deltas = bbox_deltas.detach()
-
- rpn_rois, rpn_roi_probs = torch.ops._caffe2.GenerateProposals(
- scores,
- bbox_deltas,
- im_info,
- cell_anchors_tensor,
- spatial_scale=1.0 / feat_stride,
- pre_nms_topN=self.pre_nms_topk[self.training],
- post_nms_topN=self.post_nms_topk[self.training],
- nms_thresh=self.nms_thresh,
- min_size=self.min_box_size,
- # correct_transform_coords=True, # deprecated argument
- angle_bound_on=True, # Default
- angle_bound_lo=-180,
- angle_bound_hi=180,
- clip_angle_thresh=1.0, # Default
- legacy_plus_one=False,
- )
- rpn_rois_list.append(rpn_rois)
- rpn_roi_probs_list.append(rpn_roi_probs)
-
- # For FPN in D2, in RPN all proposals from different levels are concated
- # together, ranked and picked by top post_nms_topk. Then in ROIPooler
- # it calculates level_assignments and calls the RoIAlign from
- # the corresponding level.
-
- if len(objectness_logits_pred) == 1:
- rpn_rois = rpn_rois_list[0]
- rpn_roi_probs = rpn_roi_probs_list[0]
- else:
- assert len(rpn_rois_list) == len(rpn_roi_probs_list)
- rpn_post_nms_topN = self.post_nms_topk[self.training]
-
- device = rpn_rois_list[0].device
- input_list = [to_device(x, "cpu") for x in (rpn_rois_list + rpn_roi_probs_list)]
-
- # TODO remove this after confirming rpn_max_level/rpn_min_level
- # is not needed in CollectRpnProposals.
- feature_strides = list(self.anchor_generator.strides)
- rpn_min_level = int(math.log2(feature_strides[0]))
- rpn_max_level = int(math.log2(feature_strides[-1]))
- assert (rpn_max_level - rpn_min_level + 1) == len(
- rpn_rois_list
- ), "CollectRpnProposals requires continuous levels"
-
- rpn_rois = torch.ops._caffe2.CollectRpnProposals(
- input_list,
- # NOTE: in current implementation, rpn_max_level and rpn_min_level
- # are not needed, only the subtraction of two matters and it
- # can be infer from the number of inputs. Keep them now for
- # consistency.
- rpn_max_level=2 + len(rpn_rois_list) - 1,
- rpn_min_level=2,
- rpn_post_nms_topN=rpn_post_nms_topN,
- )
- rpn_rois = to_device(rpn_rois, device)
- rpn_roi_probs = []
-
- proposals = self.c2_postprocess(im_info, rpn_rois, rpn_roi_probs, self.tensor_mode)
- return proposals, {}
-
- def forward(self, images, features, gt_instances=None):
- assert not self.training
- features = [features[f] for f in self.in_features]
- objectness_logits_pred, anchor_deltas_pred = self.rpn_head(features)
- return self._generate_proposals(
- images,
- objectness_logits_pred,
- anchor_deltas_pred,
- gt_instances,
- )
-
- @staticmethod
- def c2_postprocess(im_info, rpn_rois, rpn_roi_probs, tensor_mode):
- proposals = InstancesList(
- im_info=im_info,
- indices=rpn_rois[:, 0],
- extra_fields={
- "proposal_boxes": Caffe2Boxes(rpn_rois),
- "objectness_logits": (torch.Tensor, rpn_roi_probs),
- },
- )
- if not tensor_mode:
- proposals = InstancesList.to_d2_instances_list(proposals)
- else:
- proposals = [proposals]
- return proposals
-
-
-class Caffe2ROIPooler(Caffe2Compatible, poolers.ROIPooler):
- @staticmethod
- def c2_preprocess(box_lists):
- assert all(isinstance(x, Boxes) for x in box_lists)
- if all(isinstance(x, Caffe2Boxes) for x in box_lists):
- # input is pure-tensor based
- assert len(box_lists) == 1
- pooler_fmt_boxes = box_lists[0].tensor
- else:
- pooler_fmt_boxes = poolers.convert_boxes_to_pooler_format(box_lists)
- return pooler_fmt_boxes
-
- def forward(self, x, box_lists):
- assert not self.training
-
- pooler_fmt_boxes = self.c2_preprocess(box_lists)
- num_level_assignments = len(self.level_poolers)
-
- if num_level_assignments == 1:
- if isinstance(self.level_poolers[0], ROIAlignRotated):
- c2_roi_align = torch.ops._caffe2.RoIAlignRotated
- aligned = True
- else:
- c2_roi_align = torch.ops._caffe2.RoIAlign
- aligned = self.level_poolers[0].aligned
-
- x0 = x[0]
- if x0.is_quantized:
- x0 = x0.dequantize()
-
- out = c2_roi_align(
- x0,
- pooler_fmt_boxes,
- order="NCHW",
- spatial_scale=float(self.level_poolers[0].spatial_scale),
- pooled_h=int(self.output_size[0]),
- pooled_w=int(self.output_size[1]),
- sampling_ratio=int(self.level_poolers[0].sampling_ratio),
- aligned=aligned,
- )
- return out
-
- device = pooler_fmt_boxes.device
- assert (
- self.max_level - self.min_level + 1 == 4
- ), "Currently DistributeFpnProposals only support 4 levels"
- fpn_outputs = torch.ops._caffe2.DistributeFpnProposals(
- to_device(pooler_fmt_boxes, "cpu"),
- roi_canonical_scale=self.canonical_box_size,
- roi_canonical_level=self.canonical_level,
- roi_max_level=self.max_level,
- roi_min_level=self.min_level,
- legacy_plus_one=False,
- )
- fpn_outputs = [to_device(x, device) for x in fpn_outputs]
-
- rois_fpn_list = fpn_outputs[:-1]
- rois_idx_restore_int32 = fpn_outputs[-1]
-
- roi_feat_fpn_list = []
- for roi_fpn, x_level, pooler in zip(rois_fpn_list, x, self.level_poolers):
- if isinstance(pooler, ROIAlignRotated):
- c2_roi_align = torch.ops._caffe2.RoIAlignRotated
- aligned = True
- else:
- c2_roi_align = torch.ops._caffe2.RoIAlign
- aligned = bool(pooler.aligned)
-
- if x_level.is_quantized:
- x_level = x_level.dequantize()
-
- roi_feat_fpn = c2_roi_align(
- x_level,
- roi_fpn,
- order="NCHW",
- spatial_scale=float(pooler.spatial_scale),
- pooled_h=int(self.output_size[0]),
- pooled_w=int(self.output_size[1]),
- sampling_ratio=int(pooler.sampling_ratio),
- aligned=aligned,
- )
- roi_feat_fpn_list.append(roi_feat_fpn)
-
- roi_feat_shuffled = cat(roi_feat_fpn_list, dim=0)
- assert roi_feat_shuffled.numel() > 0 and rois_idx_restore_int32.numel() > 0, (
- "Caffe2 export requires tracing with a model checkpoint + input that can produce valid"
- " detections. But no detections were obtained with the given checkpoint and input!"
- )
- roi_feat = torch.ops._caffe2.BatchPermutation(roi_feat_shuffled, rois_idx_restore_int32)
- return roi_feat
-
-
-class Caffe2FastRCNNOutputsInference:
- def __init__(self, tensor_mode):
- self.tensor_mode = tensor_mode # whether the output is caffe2 tensor mode
-
- def __call__(self, box_predictor, predictions, proposals):
- """equivalent to FastRCNNOutputLayers.inference"""
- num_classes = box_predictor.num_classes
- score_thresh = box_predictor.test_score_thresh
- nms_thresh = box_predictor.test_nms_thresh
- topk_per_image = box_predictor.test_topk_per_image
- is_rotated = len(box_predictor.box2box_transform.weights) == 5
-
- if is_rotated:
- box_dim = 5
- assert box_predictor.box2box_transform.weights[4] == 1, (
- "The weights for Rotated BBoxTransform in C2 have only 4 dimensions,"
- + " thus enforcing the angle weight to be 1 for now"
- )
- box2box_transform_weights = box_predictor.box2box_transform.weights[:4]
- else:
- box_dim = 4
- box2box_transform_weights = box_predictor.box2box_transform.weights
-
- class_logits, box_regression = predictions
- if num_classes + 1 == class_logits.shape[1]:
- class_prob = F.softmax(class_logits, -1)
- else:
- assert num_classes == class_logits.shape[1]
- class_prob = F.sigmoid(class_logits)
- # BoxWithNMSLimit will infer num_classes from the shape of the class_prob
- # So append a zero column as placeholder for the background class
- class_prob = torch.cat((class_prob, torch.zeros(class_prob.shape[0], 1)), dim=1)
-
- assert box_regression.shape[1] % box_dim == 0
- cls_agnostic_bbox_reg = box_regression.shape[1] // box_dim == 1
-
- input_tensor_mode = proposals[0].proposal_boxes.tensor.shape[1] == box_dim + 1
-
- rois = type(proposals[0].proposal_boxes).cat([p.proposal_boxes for p in proposals])
- device, dtype = rois.tensor.device, rois.tensor.dtype
- if input_tensor_mode:
- im_info = proposals[0].image_size
- rois = rois.tensor
- else:
- im_info = torch.tensor(
- [[sz[0], sz[1], 1.0] for sz in [x.image_size for x in proposals]]
- )
- batch_ids = cat(
- [
- torch.full((b, 1), i, dtype=dtype, device=device)
- for i, b in enumerate(len(p) for p in proposals)
- ],
- dim=0,
- )
- rois = torch.cat([batch_ids, rois.tensor], dim=1)
-
- roi_pred_bbox, roi_batch_splits = torch.ops._caffe2.BBoxTransform(
- to_device(rois, "cpu"),
- to_device(box_regression, "cpu"),
- to_device(im_info, "cpu"),
- weights=box2box_transform_weights,
- apply_scale=True,
- rotated=is_rotated,
- angle_bound_on=True,
- angle_bound_lo=-180,
- angle_bound_hi=180,
- clip_angle_thresh=1.0,
- legacy_plus_one=False,
- )
- roi_pred_bbox = to_device(roi_pred_bbox, device)
- roi_batch_splits = to_device(roi_batch_splits, device)
-
- nms_outputs = torch.ops._caffe2.BoxWithNMSLimit(
- to_device(class_prob, "cpu"),
- to_device(roi_pred_bbox, "cpu"),
- to_device(roi_batch_splits, "cpu"),
- score_thresh=float(score_thresh),
- nms=float(nms_thresh),
- detections_per_im=int(topk_per_image),
- soft_nms_enabled=False,
- soft_nms_method="linear",
- soft_nms_sigma=0.5,
- soft_nms_min_score_thres=0.001,
- rotated=is_rotated,
- cls_agnostic_bbox_reg=cls_agnostic_bbox_reg,
- input_boxes_include_bg_cls=False,
- output_classes_include_bg_cls=False,
- legacy_plus_one=False,
- )
- roi_score_nms = to_device(nms_outputs[0], device)
- roi_bbox_nms = to_device(nms_outputs[1], device)
- roi_class_nms = to_device(nms_outputs[2], device)
- roi_batch_splits_nms = to_device(nms_outputs[3], device)
- roi_keeps_nms = to_device(nms_outputs[4], device)
- roi_keeps_size_nms = to_device(nms_outputs[5], device)
- if not self.tensor_mode:
- roi_class_nms = roi_class_nms.to(torch.int64)
-
- roi_batch_ids = cat(
- [
- torch.full((b, 1), i, dtype=dtype, device=device)
- for i, b in enumerate(int(x.item()) for x in roi_batch_splits_nms)
- ],
- dim=0,
- )
-
- roi_class_nms = alias(roi_class_nms, "class_nms")
- roi_score_nms = alias(roi_score_nms, "score_nms")
- roi_bbox_nms = alias(roi_bbox_nms, "bbox_nms")
- roi_batch_splits_nms = alias(roi_batch_splits_nms, "batch_splits_nms")
- roi_keeps_nms = alias(roi_keeps_nms, "keeps_nms")
- roi_keeps_size_nms = alias(roi_keeps_size_nms, "keeps_size_nms")
-
- results = InstancesList(
- im_info=im_info,
- indices=roi_batch_ids[:, 0],
- extra_fields={
- "pred_boxes": Caffe2Boxes(roi_bbox_nms),
- "scores": roi_score_nms,
- "pred_classes": roi_class_nms,
- },
- )
-
- if not self.tensor_mode:
- results = InstancesList.to_d2_instances_list(results)
- batch_splits = roi_batch_splits_nms.int().tolist()
- kept_indices = list(roi_keeps_nms.to(torch.int64).split(batch_splits))
- else:
- results = [results]
- kept_indices = [roi_keeps_nms]
-
- return results, kept_indices
-
-
-class Caffe2MaskRCNNInference:
- def __call__(self, pred_mask_logits, pred_instances):
- """equivalent to mask_head.mask_rcnn_inference"""
- if all(isinstance(x, InstancesList) for x in pred_instances):
- assert len(pred_instances) == 1
- mask_probs_pred = pred_mask_logits.sigmoid()
- mask_probs_pred = alias(mask_probs_pred, "mask_fcn_probs")
- pred_instances[0].pred_masks = mask_probs_pred
- else:
- mask_rcnn_inference(pred_mask_logits, pred_instances)
-
-
-class Caffe2KeypointRCNNInference:
- def __init__(self, use_heatmap_max_keypoint):
- self.use_heatmap_max_keypoint = use_heatmap_max_keypoint
-
- def __call__(self, pred_keypoint_logits, pred_instances):
- # just return the keypoint heatmap for now,
- # there will be option to call HeatmapMaxKeypointOp
- output = alias(pred_keypoint_logits, "kps_score")
- if all(isinstance(x, InstancesList) for x in pred_instances):
- assert len(pred_instances) == 1
- if self.use_heatmap_max_keypoint:
- device = output.device
- output = torch.ops._caffe2.HeatmapMaxKeypoint(
- to_device(output, "cpu"),
- pred_instances[0].pred_boxes.tensor,
-                    should_output_softmax=True,  # worth making it configurable?
- )
- output = to_device(output, device)
- output = alias(output, "keypoints_out")
- pred_instances[0].pred_keypoints = output
- return pred_keypoint_logits
diff --git a/spaces/BAAI/AltDiffusion/README.md b/spaces/BAAI/AltDiffusion/README.md
deleted file mode 100644
index 9d335cabb273fee5c9d0cf59e538fd93bedc15a6..0000000000000000000000000000000000000000
--- a/spaces/BAAI/AltDiffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AltDiffusion
-emoji: ❤️
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/20 Minutos Hasta El Amanecer Descarga Gratuita.md b/spaces/Benson/text-generation/Examples/20 Minutos Hasta El Amanecer Descarga Gratuita.md
deleted file mode 100644
index 2f1caf7c693cbcf3d620eb7ecce7d63cceb58e58..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/20 Minutos Hasta El Amanecer Descarga Gratuita.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
-20 Minutes Till Dawn: A Roguelite Survival Game Review
-
-If you are looking for a fast-paced, action-packed, and challenging game that will test your skills and reflexes, you might want to check out 20 Minutes Till Dawn. This is a roguelite survival game in which you must fight off endless hordes of Lovecraftian monsters and survive the night. In this article, we review the game's features, gameplay, graphics, sound, pros, cons, and more.
-
-Introduction
-
-20 Minutes Till Dawn is a roguelike shoot 'em up video game developed and published by flanne. The game was released in early access on Steam on June 8, 2022, and was ported to Android and iOS by Erabit Studios on September 9, 2022. It left early access on Steam with version 1.0 on June 8, 2023.
-The game belongs to the roguelite survival genre, which means it features permadeath, randomization, and progression across runs. The goal is to survive for 20 minutes until dawn while facing an onslaught of monsters that grow stronger and more numerous as time passes. The game is inspired by Vampire Survivors, but with more active combat and customization options.
-
-The game is available on Steam for $4.99, as well as on Google Play, the App Store, and TapTap for free. It has received very positive reviews from players and critics alike, with more than 20,000 reviews on Steam and more than 6 million downloads on mobile platforms. The game has also been featured by IGN, TheGamer, Level Winner, and other outlets.
-
-Gameplay
-
-The gameplay of 20 Minutes Till Dawn is simple but challenging. You control a character who can move with the WASD keys or a virtual joystick, aim with the mouse or touchscreen, and shoot with left click or a tap. You can also use right click or a double tap to trigger your special ability, which varies depending on your character.
-
-
-As you kill monsters, you earn experience points that let you level up. Each time you level up, you can choose one of four randomly generated upgrades that improve your stats or abilities. These upgrades range from increasing your damage or health, to adding effects such as fire, poison, or stun to your attacks, to unlocking new abilities such as a dash, a shield, or a summon. Upgrades are permanent for the current run but are lost when you die or restart.
-
-To survive the night, you have to keep moving and shooting while avoiding enemy attacks and environmental hazards. Enemies come in different shapes and sizes, each with its own behavior and attack pattern. Some are fast and agile, some are slow and tanky, some are ranged and explosive, and some are stealthy and deadly. You will also encounter bosses every few minutes, which are much stronger and tougher than normal enemies. Bosses have unique abilities and weaknesses that you have to exploit to defeat them.
-
-The game has four game modes: Normal, Hardcore, Endless, and Custom. Normal mode is the default, where you must survive for 20 minutes with three lives. Hardcore mode is similar, but you have only one life and the enemies are more aggressive. In Endless mode you can play for as long as you like, but the enemies become tougher and more frequent as time goes on. Custom mode lets you set your own rules for the game, such as changing the time limit, enemy spawn rate, difficulty level, and more.
-
-Graphics and sound
-
-The sound of 20 Minutes Till Dawn is immersive and captivating, with a soundtrack that matches the mood and intensity of the game. The game has catchy, energetic synthwave-style music, with different tracks for each environment and boss. It also has realistic and satisfying sound effects, such as gunfire, explosions, screams, footsteps, and more. There is no voice acting or dialogue, but text messages appear on screen to give you hints or warnings.
-
-
-The game runs well on most devices and platforms, with smooth gameplay and minimal lag or technical issues. It has low system requirements for PC users, as well as options to adjust graphics quality and resolution for mobile users. The game also supports cloud saves, controller support, leaderboards, achievements, and co-op multiplayer.
-
-Pros and cons
-
-20 Minutes Till Dawn is a fun and addictive game that will keep you entertained for hours. However, like any other game, it has its pros and cons. Here are some of them:
-
-Pros and cons at a glance:
-Pro: Fast-paced, challenging gameplay that requires skill and strategy. Con: Permadeath can be frustrating and discouraging for some players.
-Pro: A variety of characters, weapons, upgrades, enemies, bosses, environments, and game modes that add replay value. Con: The randomization can feel unfair or unbalanced at times.
-Pro: Colorful, atmospheric retro-style graphics. Con: The pixelated look may not appeal to everyone.
-Pro: Catchy, energetic synthwave-style music. Con: The music can become repetitive or annoying after a while.
-Pro: Low system requirements and cross-platform availability. Con: Occasional bugs or glitches may occur.
-
-Conclusion
-
-
-If you are interested in playing 20 Minutes Till Dawn, you can find more information or download the game from the following links:
-
-Steam: [20 Minutes Till Dawn on Steam]
-
-Google Play: [20 Minutes Till Dawn - Apps on Google Play]
-
-App Store: [20 Minutes Till Dawn on the App Store]
-
-TapTap: [20 Minutes Till Dawn - TapTap]
-
-You can also watch gameplay videos or read reviews from the following sources:
-
-IGN: [20 Minutes Till Dawn Review - IGN]
-
-TheGamer: [20 Minutes Till Dawn Review: A Roguelite That Keeps You on Your Toes]
-
-Level Winner: [20 Minutes Till Dawn Beginner's Guide: Tips, Tricks and Strategies to Survive the Night]
-
-FAQ
-
-Here are some of the most frequently asked questions about 20 Minutes Till Dawn:
-
-How do I unlock more characters and weapons?
-
-You can unlock more characters and weapons by spending gems, which you earn by killing monsters or completing achievements. You can also find some weapons as loot drops from enemies or chests.
-
-How do I save my progress?
-
-You can save your progress with the cloud save feature, which is available on all platforms. You can also use the local save feature, available on PC and mobile. Note, however, that progress is only saved between runs, not during them. If you die or restart, you lose your current upgrades and items.
-
-How can I play with my friends?
-
-You can play with your friends using the co-op multiplayer feature, which is available on all platforms. You can join or host a game with up to four players online or locally. You can also chat with your friends using the voice or text chat feature.
-
-How do I change the game settings?
-
-How do I contact the developers or report a bug?
-
-You can contact the developers or report a bug using the feedback feature, which is available on all platforms. You can also visit the game's official website, Discord server, Twitter page, or Facebook page.
-
-I hope you enjoyed this article and found it helpful. If you have any questions or comments, feel free to leave them below. Thanks for reading, and have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/Billyosoro/ESRGAN/realesrgan/__init__.py b/spaces/Billyosoro/ESRGAN/realesrgan/__init__.py
deleted file mode 100644
index bfea78f284116dee22510d4aa91f9e44afb7d472..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/realesrgan/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# flake8: noqa
-from .archs import *
-from .data import *
-from .models import *
-from .utils import *
-#from .version import *
diff --git a/spaces/Bradjan310/ehartford-Wizard-Vicuna-30B-Uncensored/app.py b/spaces/Bradjan310/ehartford-Wizard-Vicuna-30B-Uncensored/app.py
deleted file mode 100644
index 4cdd13923578027e405184827b4f353131ce7341..0000000000000000000000000000000000000000
--- a/spaces/Bradjan310/ehartford-Wizard-Vicuna-30B-Uncensored/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ehartford/Wizard-Vicuna-30B-Uncensored").launch()
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE/questions-help-support.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE/questions-help-support.md
deleted file mode 100644
index 4166219b7de584d26b3795e07162df0eff2733e3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE/questions-help-support.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-name: "❓How to do something?"
-about: How to do X with detectron2? How detectron2 does X?
-
----
-
-## ❓ How to use Detectron2
-
-Questions like:
-
-1. How to do X with detectron2?
-2. How detectron2 does X?
-
-NOTE:
-
-1. If you run into any unexpected issue when using detectron2 and wish to know why,
- please use the "Unexpected Problems / Bugs" issue template.
-
-2. We do not answer general machine learning / computer vision questions that are not specific to
- detectron2, such as how a model works, how to improve your training/make it converge, or what algorithm/methods can be
- used to achieve X.
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/temporary_buffer.h b/spaces/CVPR/LIVE/thrust/thrust/detail/temporary_buffer.h
deleted file mode 100644
index 4dca3be3b9b0525aa01bcaa339a13782ac38272f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/temporary_buffer.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-#include <thrust/detail/execute_with_allocator.h>
-#include <thrust/pair.h>
-#include <thrust/detail/pointer.h>
-#include <thrust/detail/raw_pointer_cast.h>
-#include <thrust/system/detail/generic/temporary_buffer.h>
-#include <thrust/system/detail/adl/temporary_buffer.h>
-
-namespace thrust
-{
-namespace detail
-{
-
-
-template<typename T, typename DerivedPolicy, typename Pair>
-__host__ __device__
-  thrust::pair<thrust::pointer<T,DerivedPolicy>, typename thrust::pointer<T,DerivedPolicy>::difference_type>
-    down_cast_pair(Pair p)
-{
-  // XXX should use a hypothetical thrust::static_pointer_cast here
-  thrust::pointer<T,DerivedPolicy> ptr = thrust::pointer<T,DerivedPolicy>(static_cast<T*>(thrust::raw_pointer_cast(p.first)));
-
-  typedef thrust::pair<thrust::pointer<T,DerivedPolicy>, typename thrust::pointer<T,DerivedPolicy>::difference_type> result_type;
-  return result_type(ptr, p.second);
-} // end down_cast_pair()
-
-
-} // end detail
-
-
-__thrust_exec_check_disable__
-template<typename T, typename DerivedPolicy>
-__host__ __device__
-  thrust::pair<thrust::pointer<T,DerivedPolicy>, typename thrust::pointer<T,DerivedPolicy>::difference_type>
-    get_temporary_buffer(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, typename thrust::pointer<T,DerivedPolicy>::difference_type n)
-{
-  using thrust::detail::get_temporary_buffer; // execute_with_allocator
-  using thrust::system::detail::generic::get_temporary_buffer;
-
-  return thrust::detail::down_cast_pair<T,DerivedPolicy>(get_temporary_buffer<T>(thrust::detail::derived_cast(thrust::detail::strip_const(exec)), n));
-} // end get_temporary_buffer()
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename Pointer>
-__host__ __device__
-  void return_temporary_buffer(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, Pointer p, std::ptrdiff_t n)
-{
- using thrust::detail::return_temporary_buffer; // execute_with_allocator
- using thrust::system::detail::generic::return_temporary_buffer;
-
- return return_temporary_buffer(thrust::detail::derived_cast(thrust::detail::strip_const(exec)), p, n);
-} // end return_temporary_buffer()
-
-
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/generate.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/generate.h
deleted file mode 100644
index edc2cc5eb3582a11ab7afa0cd78030b2b26688f2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/generate.h
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-template<typename DerivedPolicy, typename ForwardIterator, typename Generator>
-__host__ __device__
-  void generate(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- Generator gen);
-
-template<typename DerivedPolicy, typename OutputIterator, typename Size, typename Generator>
-__host__ __device__
-  OutputIterator generate_n(thrust::execution_policy<DerivedPolicy> &exec,
- OutputIterator first,
- Size n,
- Generator gen);
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/generate.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scatter.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scatter.h
deleted file mode 100644
index 4a65a4cc01ea23211330192f69999532f6d60575..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scatter.h
+++ /dev/null
@@ -1,81 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename RandomAccessIterator>
-__host__ __device__
-  void scatter(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- RandomAccessIterator output);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename RandomAccessIterator>
-__host__ __device__
-  void scatter_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename RandomAccessIterator, typename Predicate>
-__host__ __device__
-  void scatter_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output,
- Predicate pred);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/scatter.inl>
-
diff --git a/spaces/CVPR/WALT/walt/datasets/pipelines/transforms.py b/spaces/CVPR/WALT/walt/datasets/pipelines/transforms.py
deleted file mode 100644
index 02fd63f2bfaac64fbf9495f2fe6ffe83dc9371e1..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/walt/datasets/pipelines/transforms.py
+++ /dev/null
@@ -1,1861 +0,0 @@
-import copy
-import inspect
-
-import mmcv
-import numpy as np
-from numpy import random
-
-from mmdet.core import PolygonMasks
-from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps
-from ..builder import PIPELINES
-
-try:
- from imagecorruptions import corrupt
-except ImportError:
- corrupt = None
-
-try:
- import albumentations
- from albumentations import Compose
-except ImportError:
- albumentations = None
- Compose = None
-
-
-@PIPELINES.register_module()
-class Resize(object):
- """Resize images & bbox & mask.
-
- This transform resizes the input image to some scale. Bboxes and masks are
- then resized with the same scale factor. If the input dict contains the key
- "scale", then the scale in the input dict is used, otherwise the specified
- scale in the init method is used. If the input dict contains the key
- "scale_factor" (if MultiScaleFlipAug does not give img_scale but
- scale_factor), the actual scale will be computed by image shape and
- scale_factor.
-
- `img_scale` can either be a tuple (single-scale) or a list of tuple
- (multi-scale). There are 3 multiscale modes:
-
- - ``ratio_range is not None``: randomly sample a ratio from the ratio \
- range and multiply it with the image scale.
- - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \
- sample a scale from the multiscale range.
- - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \
- sample a scale from multiple scales.
-
- Args:
-        img_scale (tuple or list[tuple]): Image scales for resizing.
-        multiscale_mode (str): Either "range" or "value".
-        ratio_range (tuple[float]): (min_ratio, max_ratio)
-        keep_ratio (bool): Whether to keep the aspect ratio when resizing the
-            image.
-        bbox_clip_border (bool, optional): Whether to clip objects outside
-            the border of the image. Defaults to True.
-        backend (str): Image resize backend, choices are 'cv2' and 'pillow'.
-            These two backends generate slightly different results. Defaults
-            to 'cv2'.
-        override (bool, optional): Whether to override `scale` and
-            `scale_factor` so as to call resize twice. If True, after the
-            first resizing, the existing `scale` and `scale_factor` will be
-            ignored so that a second resize can be performed. This option is
-            a workaround for applying resize multiple times in DETR.
-            Defaults to False.
- """
-
- def __init__(self,
- img_scale=None,
- multiscale_mode='range',
- ratio_range=None,
- keep_ratio=True,
- bbox_clip_border=True,
- backend='cv2',
- override=False):
- if img_scale is None:
- self.img_scale = None
- else:
- if isinstance(img_scale, list):
- self.img_scale = img_scale
- else:
- self.img_scale = [img_scale]
- assert mmcv.is_list_of(self.img_scale, tuple)
-
- if ratio_range is not None:
- # mode 1: given a scale and a range of image ratio
- assert len(self.img_scale) == 1
- else:
- # mode 2: given multiple scales or a range of scales
- assert multiscale_mode in ['value', 'range']
-
- self.backend = backend
- self.multiscale_mode = multiscale_mode
- self.ratio_range = ratio_range
- self.keep_ratio = keep_ratio
- # TODO: refactor the override option in Resize
- self.override = override
- self.bbox_clip_border = bbox_clip_border
-
- @staticmethod
- def random_select(img_scales):
- """Randomly select an img_scale from given candidates.
-
- Args:
-            img_scales (list[tuple]): Image scales for selection.
-
- Returns:
-            (tuple, int): Returns a tuple ``(img_scale, scale_idx)``, \
- where ``img_scale`` is the selected image scale and \
- ``scale_idx`` is the selected index in the given candidates.
- """
-
- assert mmcv.is_list_of(img_scales, tuple)
- scale_idx = np.random.randint(len(img_scales))
- img_scale = img_scales[scale_idx]
- return img_scale, scale_idx
-
- @staticmethod
- def random_sample(img_scales):
- """Randomly sample an img_scale when ``multiscale_mode=='range'``.
-
- Args:
-            img_scales (list[tuple]): Image scale range for sampling.
- There must be two tuples in img_scales, which specify the lower
- and upper bound of image scales.
-
- Returns:
- (tuple, None): Returns a tuple ``(img_scale, None)``, where \
- ``img_scale`` is sampled scale and None is just a placeholder \
- to be consistent with :func:`random_select`.
- """
-
- assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2
- img_scale_long = [max(s) for s in img_scales]
- img_scale_short = [min(s) for s in img_scales]
- long_edge = np.random.randint(
- min(img_scale_long),
- max(img_scale_long) + 1)
- short_edge = np.random.randint(
- min(img_scale_short),
- max(img_scale_short) + 1)
- img_scale = (long_edge, short_edge)
- return img_scale, None
-
- @staticmethod
- def random_sample_ratio(img_scale, ratio_range):
- """Randomly sample an img_scale when ``ratio_range`` is specified.
-
- A ratio will be randomly sampled from the range specified by
- ``ratio_range``. Then it would be multiplied with ``img_scale`` to
- generate sampled scale.
-
- Args:
-            img_scale (tuple): Image scale base to multiply with the ratio.
- ratio_range (tuple[float]): The minimum and maximum ratio to scale
- the ``img_scale``.
-
- Returns:
- (tuple, None): Returns a tuple ``(scale, None)``, where \
- ``scale`` is sampled ratio multiplied with ``img_scale`` and \
- None is just a placeholder to be consistent with \
- :func:`random_select`.
- """
-
- assert isinstance(img_scale, tuple) and len(img_scale) == 2
- min_ratio, max_ratio = ratio_range
- assert min_ratio <= max_ratio
- ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio
- scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio)
- return scale, None
-
- def _random_scale(self, results):
- """Randomly sample an img_scale according to ``ratio_range`` and
- ``multiscale_mode``.
-
- If ``ratio_range`` is specified, a ratio will be sampled and be
- multiplied with ``img_scale``.
- If multiple scales are specified by ``img_scale``, a scale will be
- sampled according to ``multiscale_mode``.
- Otherwise, single scale will be used.
-
- Args:
- results (dict): Result dict from :obj:`dataset`.
-
- Returns:
-            dict: Two new keys ``scale`` and ``scale_idx`` are added into \
- ``results``, which would be used by subsequent pipelines.
- """
-
- if self.ratio_range is not None:
- scale, scale_idx = self.random_sample_ratio(
- self.img_scale[0], self.ratio_range)
- elif len(self.img_scale) == 1:
- scale, scale_idx = self.img_scale[0], 0
- elif self.multiscale_mode == 'range':
- scale, scale_idx = self.random_sample(self.img_scale)
- elif self.multiscale_mode == 'value':
- scale, scale_idx = self.random_select(self.img_scale)
- else:
- raise NotImplementedError
-
- results['scale'] = scale
- results['scale_idx'] = scale_idx
-
- def _resize_img(self, results):
- """Resize images with ``results['scale']``."""
- for key in results.get('img_fields', ['img']):
- if self.keep_ratio:
- img, scale_factor = mmcv.imrescale(
- results[key],
- results['scale'],
- return_scale=True,
- backend=self.backend)
-                # the w_scale and h_scale have a minor difference
-                # a real fix should be done in mmcv.imrescale in the future
- new_h, new_w = img.shape[:2]
- h, w = results[key].shape[:2]
- w_scale = new_w / w
- h_scale = new_h / h
- else:
- img, w_scale, h_scale = mmcv.imresize(
- results[key],
- results['scale'],
- return_scale=True,
- backend=self.backend)
- results[key] = img
-
- scale_factor = np.array([w_scale, h_scale, w_scale, h_scale],
- dtype=np.float32)
- results['img_shape'] = img.shape
- # in case that there is no padding
- results['pad_shape'] = img.shape
- results['scale_factor'] = scale_factor
- results['keep_ratio'] = self.keep_ratio
-
- def _resize_bboxes(self, results):
- """Resize bounding boxes with ``results['scale_factor']``."""
- for key in results.get('bbox_fields', []):
- bboxes = results[key] * results['scale_factor']
- if self.bbox_clip_border:
- img_shape = results['img_shape']
- bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1])
- bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0])
- results[key] = bboxes
-
- def _resize_bboxes3d(self, results):
-        """Resize projected 3D bounding boxes with ``results['scale_factor']``."""
- key = 'gt_bboxes_3d_proj'
- bboxes3d_proj = results[key][:,:,:2]
- img_shape = results['img_shape']
- for i in range(results[key].shape[1]):
- bboxes3d_proj[:,i,:] = bboxes3d_proj[:,i,:] * results['scale_factor'][:2]
- if self.bbox_clip_border:
- bboxes3d_proj[:, i, 0] = np.clip(bboxes3d_proj[:, i, 0], 0, img_shape[1])
-                bboxes3d_proj[:, i, 1] = np.clip(bboxes3d_proj[:, i, 1], 0, img_shape[0])  # clip y by image height
- results[key] = bboxes3d_proj
-
- def _resize_masks(self, results):
- """Resize masks with ``results['scale']``"""
- for key in results.get('mask_fields', []):
- if results[key] is None:
- continue
- if self.keep_ratio:
- results[key] = results[key].rescale(results['scale'])
- else:
- results[key] = results[key].resize(results['img_shape'][:2])
-
- def _resize_seg(self, results):
- """Resize semantic segmentation map with ``results['scale']``."""
- for key in results.get('seg_fields', []):
- if self.keep_ratio:
- gt_seg = mmcv.imrescale(
- results[key],
- results['scale'],
- interpolation='nearest',
- backend=self.backend)
- else:
- gt_seg = mmcv.imresize(
- results[key],
- results['scale'],
- interpolation='nearest',
- backend=self.backend)
-            results[key] = gt_seg
-
- def __call__(self, results):
- """Call function to resize images, bounding boxes, masks, semantic
- segmentation map.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \
- 'keep_ratio' keys are added into result dict.
- """
-
- if 'scale' not in results:
- if 'scale_factor' in results:
- img_shape = results['img'].shape[:2]
- scale_factor = results['scale_factor']
- assert isinstance(scale_factor, float)
- results['scale'] = tuple(
- [int(x * scale_factor) for x in img_shape][::-1])
- else:
- self._random_scale(results)
- else:
- if not self.override:
- assert 'scale_factor' not in results, (
- 'scale and scale_factor cannot be both set.')
- else:
- results.pop('scale')
- if 'scale_factor' in results:
- results.pop('scale_factor')
- self._random_scale(results)
-
- self._resize_img(results)
- self._resize_bboxes(results)
- self._resize_bboxes3d(results)
- self._resize_masks(results)
- self._resize_seg(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(img_scale={self.img_scale}, '
- repr_str += f'multiscale_mode={self.multiscale_mode}, '
- repr_str += f'ratio_range={self.ratio_range}, '
- repr_str += f'keep_ratio={self.keep_ratio}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
-
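The three multiscale modes described in the ``Resize`` docstring above reduce to the sampling logic in ``random_select``, ``random_sample`` and ``random_sample_ratio``. A minimal numpy-only sketch of that logic, with hypothetical scale values that are not taken from any particular config:

```python
import numpy as np

img_scales = [(1333, 640), (1333, 800)]  # hypothetical (long edge, short edge) candidates

# ratio_range mode: one base scale multiplied by a randomly sampled ratio
min_ratio, max_ratio = 0.8, 1.2
ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio
scale_from_ratio = tuple(int(s * ratio) for s in img_scales[0])

# multiscale_mode='range': sample long and short edges between the two candidates
long_edges = [max(s) for s in img_scales]
short_edges = [min(s) for s in img_scales]
scale_from_range = (np.random.randint(min(long_edges), max(long_edges) + 1),
                    np.random.randint(min(short_edges), max(short_edges) + 1))

# multiscale_mode='value': pick one of the listed scales verbatim
scale_from_value = img_scales[np.random.randint(len(img_scales))]

print(scale_from_ratio, scale_from_range, scale_from_value)
```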
-@PIPELINES.register_module()
-class RandomFlip(object):
- """Flip the image & bbox & mask.
-
- If the input dict contains the key "flip", then the flag will be used,
- otherwise it will be randomly decided by a ratio specified in the init
- method.
-
- When random flip is enabled, ``flip_ratio``/``direction`` can either be a
- float/string or tuple of float/string. There are 3 flip modes:
-
- - ``flip_ratio`` is float, ``direction`` is string: the image will be
- ``direction``ly flipped with probability of ``flip_ratio`` .
- E.g., ``flip_ratio=0.5``, ``direction='horizontal'``,
- then image will be horizontally flipped with probability of 0.5.
-    - ``flip_ratio`` is float, ``direction`` is list of string: the image will
- be ``direction[i]``ly flipped with probability of
- ``flip_ratio/len(direction)``.
- E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``,
- then image will be horizontally flipped with probability of 0.25,
- vertically with probability of 0.25.
- - ``flip_ratio`` is list of float, ``direction`` is list of string:
-      given ``len(flip_ratio) == len(direction)``, the image will
- be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``.
- E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal',
- 'vertical']``, then image will be horizontally flipped with probability
- of 0.3, vertically with probability of 0.5
-
- Args:
- flip_ratio (float | list[float], optional): The flipping probability.
- Default: None.
- direction(str | list[str], optional): The flipping direction. Options
- are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'.
- If input is a list, the length must equal ``flip_ratio``. Each
- element in ``flip_ratio`` indicates the flip probability of
- corresponding direction.
- """
-
- def __init__(self, flip_ratio=None, direction='horizontal'):
- if isinstance(flip_ratio, list):
- assert mmcv.is_list_of(flip_ratio, float)
- assert 0 <= sum(flip_ratio) <= 1
- elif isinstance(flip_ratio, float):
- assert 0 <= flip_ratio <= 1
- elif flip_ratio is None:
- pass
- else:
- raise ValueError('flip_ratios must be None, float, '
- 'or list of float')
- self.flip_ratio = flip_ratio
-
- valid_directions = ['horizontal', 'vertical', 'diagonal']
- if isinstance(direction, str):
- assert direction in valid_directions
- elif isinstance(direction, list):
- assert mmcv.is_list_of(direction, str)
- assert set(direction).issubset(set(valid_directions))
- else:
- raise ValueError('direction must be either str or list of str')
- self.direction = direction
-
- if isinstance(flip_ratio, list):
- assert len(self.flip_ratio) == len(self.direction)
-
- def bbox_flip(self, bboxes, img_shape, direction):
-        """Flip bboxes in the given direction.
-
-        Args:
-            bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k)
-            img_shape (tuple[int]): Image shape (height, width)
-            direction (str): Flip direction. Options are 'horizontal',
-                'vertical', 'diagonal'.
-
- Returns:
- numpy.ndarray: Flipped bounding boxes.
- """
-
- assert bboxes.shape[-1] % 4 == 0
- flipped = bboxes.copy()
- if direction == 'horizontal':
- w = img_shape[1]
- flipped[..., 0::4] = w - bboxes[..., 2::4]
- flipped[..., 2::4] = w - bboxes[..., 0::4]
- elif direction == 'vertical':
- h = img_shape[0]
- flipped[..., 1::4] = h - bboxes[..., 3::4]
- flipped[..., 3::4] = h - bboxes[..., 1::4]
- elif direction == 'diagonal':
- w = img_shape[1]
- h = img_shape[0]
- flipped[..., 0::4] = w - bboxes[..., 2::4]
- flipped[..., 1::4] = h - bboxes[..., 3::4]
- flipped[..., 2::4] = w - bboxes[..., 0::4]
- flipped[..., 3::4] = h - bboxes[..., 1::4]
- else:
- raise ValueError(f"Invalid flipping direction '{direction}'")
- return flipped
-
- def bbox3d_proj_flip(self, bboxes, img_shape, direction):
-        """Flip projected 3D bounding boxes in the given direction.
-
-        Args:
-            bboxes (numpy.ndarray): Projected 3D bounding boxes,
-                shape (num_boxes, num_points, 2).
-            img_shape (tuple[int]): Image shape (height, width)
-            direction (str): Flip direction. Options are 'horizontal',
-                'vertical', 'diagonal'.
-
- Returns:
- numpy.ndarray: Flipped bounding boxes.
- """
-
- flipped = bboxes.copy()
- if direction == 'horizontal':
- w = img_shape[1]
-
- flipped[:,:,0] = w - bboxes[:,:, 0]
- elif direction == 'vertical':
- h = img_shape[0]
- flipped[:,:,1] = h - bboxes[:,:, 1]
- elif direction == 'diagonal':
- w = img_shape[1]
- h = img_shape[0]
- flipped[:,:,0] = w - bboxes[:,:, 0]
- flipped[:,:,1] = h - bboxes[:,:, 1]
- else:
- raise ValueError(f"Invalid flipping direction '{direction}'")
- flipped[bboxes == -100] = -100
- return flipped
-
-
- def __call__(self, results):
- """Call function to flip bounding boxes, masks, semantic segmentation
- maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Flipped results, 'flip', 'flip_direction' keys are added \
- into result dict.
- """
-
- if 'flip' not in results:
- if isinstance(self.direction, list):
- # None means non-flip
- direction_list = self.direction + [None]
- else:
- # None means non-flip
- direction_list = [self.direction, None]
-
- if isinstance(self.flip_ratio, list):
- non_flip_ratio = 1 - sum(self.flip_ratio)
- flip_ratio_list = self.flip_ratio + [non_flip_ratio]
- else:
- non_flip_ratio = 1 - self.flip_ratio
- # exclude non-flip
- single_ratio = self.flip_ratio / (len(direction_list) - 1)
- flip_ratio_list = [single_ratio] * (len(direction_list) -
- 1) + [non_flip_ratio]
-
- cur_dir = np.random.choice(direction_list, p=flip_ratio_list)
-
- results['flip'] = cur_dir is not None
- if 'flip_direction' not in results:
- results['flip_direction'] = cur_dir
- if results['flip']:
- # flip image
- for key in results.get('img_fields', ['img']):
- results[key] = mmcv.imflip(
- results[key], direction=results['flip_direction'])
- # flip bboxes
- for key in results.get('bbox_fields', []):
- results[key] = self.bbox_flip(results[key],
- results['img_shape'],
- results['flip_direction'])
- for key in results.get('bbox3d_fields', []):
- if '_proj' in key:
- results[key] = self.bbox3d_proj_flip(results[key],
- results['img_shape'],
- results['flip_direction'])
- # flip masks
- for key in results.get('mask_fields', []):
- results[key] = results[key].flip(results['flip_direction'])
-
- # flip segs
- for key in results.get('seg_fields', []):
- results[key] = mmcv.imflip(
- results[key], direction=results['flip_direction'])
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})'
-
-
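The flip-probability bookkeeping described in the ``RandomFlip`` docstring can be seen in isolation with a small numpy-only sketch that mirrors the sampling in ``__call__`` above (``flip_choice`` is a hypothetical helper, not part of the pipeline):

```python
import numpy as np

def flip_choice(flip_ratio, direction):
    """Return a flip direction (or None for no flip), mirroring RandomFlip.__call__."""
    direction_list = (direction if isinstance(direction, list) else [direction]) + [None]
    if isinstance(flip_ratio, list):
        ratios = flip_ratio + [1 - sum(flip_ratio)]
    else:
        single = flip_ratio / (len(direction_list) - 1)
        ratios = [single] * (len(direction_list) - 1) + [1 - flip_ratio]
    return np.random.choice(direction_list, p=ratios)

# flip_ratio=0.5, direction=['horizontal', 'vertical'] -> 0.25 / 0.25 / 0.5 (no flip)
print(flip_choice(0.5, ['horizontal', 'vertical']))
# flip_ratio=[0.3, 0.5] -> horizontal 0.3, vertical 0.5, no flip 0.2
print(flip_choice([0.3, 0.5], ['horizontal', 'vertical']))
```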
-@PIPELINES.register_module()
-class Pad(object):
- """Pad the image & mask.
-
- There are two padding modes: (1) pad to a fixed size and (2) pad to the
- minimum size that is divisible by some number.
- Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor",
-
- Args:
- size (tuple, optional): Fixed padding size.
- size_divisor (int, optional): The divisor of padded size.
- pad_val (float, optional): Padding value, 0 by default.
- """
-
- def __init__(self, size=None, size_divisor=None, pad_val=0):
- self.size = size
- self.size_divisor = size_divisor
- self.pad_val = pad_val
- # only one of size and size_divisor should be valid
- assert size is not None or size_divisor is not None
- assert size is None or size_divisor is None
-
- def _pad_img(self, results):
- """Pad images according to ``self.size``."""
- for key in results.get('img_fields', ['img']):
- if self.size is not None:
- padded_img = mmcv.impad(
- results[key], shape=self.size, pad_val=self.pad_val)
- elif self.size_divisor is not None:
- padded_img = mmcv.impad_to_multiple(
- results[key], self.size_divisor, pad_val=self.pad_val)
- results[key] = padded_img
- results['pad_shape'] = padded_img.shape
- results['pad_fixed_size'] = self.size
- results['pad_size_divisor'] = self.size_divisor
-
- def _pad_masks(self, results):
- """Pad masks according to ``results['pad_shape']``."""
- pad_shape = results['pad_shape'][:2]
- for key in results.get('mask_fields', []):
- results[key] = results[key].pad(pad_shape, pad_val=self.pad_val)
-
- def _pad_seg(self, results):
- """Pad semantic segmentation map according to
- ``results['pad_shape']``."""
- for key in results.get('seg_fields', []):
- results[key] = mmcv.impad(
- results[key], shape=results['pad_shape'][:2])
-
- def __call__(self, results):
- """Call function to pad images, masks, semantic segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Updated result dict.
- """
- self._pad_img(results)
- self._pad_masks(results)
- self._pad_seg(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(size={self.size}, '
- repr_str += f'size_divisor={self.size_divisor}, '
- repr_str += f'pad_val={self.pad_val})'
- return repr_str
-
-
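The two ``Pad`` modes above are plain shape arithmetic; a short sketch with made-up sizes (the divisor and fixed size are illustrative, not defaults):

```python
import numpy as np

h, w = 427, 640

# size_divisor mode: round each side up to the nearest multiple of the divisor
divisor = 32
pad_h = int(np.ceil(h / divisor)) * divisor  # 448
pad_w = int(np.ceil(w / divisor)) * divisor  # 640

# fixed-size mode: pad straight to the requested shape (must not be smaller than the image)
fixed_size = (480, 800)

print((pad_h, pad_w), fixed_size)
```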
-@PIPELINES.register_module()
-class Normalize(object):
- """Normalize the image.
-
- Added key is "img_norm_cfg".
-
- Args:
- mean (sequence): Mean values of 3 channels.
- std (sequence): Std values of 3 channels.
- to_rgb (bool): Whether to convert the image from BGR to RGB,
- default is true.
- """
-
- def __init__(self, mean, std, to_rgb=True):
- self.mean = np.array(mean, dtype=np.float32)
- self.std = np.array(std, dtype=np.float32)
- self.to_rgb = to_rgb
-
- def __call__(self, results):
- """Call function to normalize images.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Normalized results, 'img_norm_cfg' key is added into
- result dict.
- """
- for key in results.get('img_fields', ['img']):
- results[key] = mmcv.imnormalize(results[key], self.mean, self.std,
- self.to_rgb)
- results['img_norm_cfg'] = dict(
- mean=self.mean, std=self.std, to_rgb=self.to_rgb)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})'
- return repr_str
-
-
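``mmcv.imnormalize`` used above amounts to an optional BGR-to-RGB channel swap followed by per-channel ``(img - mean) / std``; a numpy sketch with commonly used ImageNet statistics (illustrative values, not defaults of this class):

```python
import numpy as np

mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)  # RGB order
std = np.array([58.395, 57.12, 57.375], dtype=np.float32)

img_bgr = np.random.randint(0, 256, (4, 4, 3)).astype(np.float32)
img_rgb = img_bgr[..., ::-1]           # to_rgb=True flips the channel order
normalized = (img_rgb - mean) / std    # broadcast over H x W
print(normalized.shape, normalized.dtype)
```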
-@PIPELINES.register_module()
-class RandomCrop(object):
- """Random crop the image & bboxes & masks.
-
- The absolute `crop_size` is sampled based on `crop_type` and `image_size`,
- then the cropped results are generated.
-
- Args:
- crop_size (tuple): The relative ratio or absolute pixels of
- height and width.
- crop_type (str, optional): one of "relative_range", "relative",
- "absolute", "absolute_range". "relative" randomly crops
- (h * crop_size[0], w * crop_size[1]) part from an input of size
- (h, w). "relative_range" uniformly samples relative crop size from
- range [crop_size[0], 1] and [crop_size[1], 1] for height and width
- respectively. "absolute" crops from an input with absolute size
- (crop_size[0], crop_size[1]). "absolute_range" uniformly samples
- crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w
- in range [crop_size[0], min(w, crop_size[1])]. Default "absolute".
- allow_negative_crop (bool, optional): Whether to allow a crop that does
- not contain any bbox area. Default False.
-        bbox_clip_border (bool, optional): Whether to clip objects outside
- the border of the image. Defaults to True.
-
- Note:
- - If the image is smaller than the absolute crop size, return the
- original image.
- - The keys for bboxes, labels and masks must be aligned. That is,
- `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and
- `gt_bboxes_ignore` corresponds to `gt_labels_ignore` and
- `gt_masks_ignore`.
- - If the crop does not contain any gt-bbox region and
- `allow_negative_crop` is set to False, skip this image.
- """
-
- def __init__(self,
- crop_size,
- crop_type='absolute',
- allow_negative_crop=False,
- bbox_clip_border=True):
- if crop_type not in [
- 'relative_range', 'relative', 'absolute', 'absolute_range'
- ]:
- raise ValueError(f'Invalid crop_type {crop_type}.')
- if crop_type in ['absolute', 'absolute_range']:
- assert crop_size[0] > 0 and crop_size[1] > 0
- assert isinstance(crop_size[0], int) and isinstance(
- crop_size[1], int)
- else:
- assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1
- self.crop_size = crop_size
- self.crop_type = crop_type
- self.allow_negative_crop = allow_negative_crop
- self.bbox_clip_border = bbox_clip_border
- # The key correspondence from bboxes to labels and masks.
- self.bbox2label = {
- 'gt_bboxes': 'gt_labels',
- 'gt_bboxes_ignore': 'gt_labels_ignore'
- }
- self.bbox2mask = {
- 'gt_bboxes': 'gt_masks',
- 'gt_bboxes_ignore': 'gt_masks_ignore'
- }
-
- def _crop_data(self, results, crop_size, allow_negative_crop):
- """Function to randomly crop images, bounding boxes, masks, semantic
- segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
- crop_size (tuple): Expected absolute size after cropping, (h, w).
- allow_negative_crop (bool): Whether to allow a crop that does not
- contain any bbox area. Default to False.
-
- Returns:
- dict: Randomly cropped results, 'img_shape' key in result dict is
- updated according to crop size.
- """
- assert crop_size[0] > 0 and crop_size[1] > 0
- for key in results.get('img_fields', ['img']):
- img = results[key]
- margin_h = max(img.shape[0] - crop_size[0], 0)
- margin_w = max(img.shape[1] - crop_size[1], 0)
- offset_h = np.random.randint(0, margin_h + 1)
- offset_w = np.random.randint(0, margin_w + 1)
- crop_y1, crop_y2 = offset_h, offset_h + crop_size[0]
- crop_x1, crop_x2 = offset_w, offset_w + crop_size[1]
-
- # crop the image
- img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...]
- img_shape = img.shape
- results[key] = img
- results['img_shape'] = img_shape
-
- # crop bboxes accordingly and clip to the image boundary
- for key in results.get('bbox_fields', []):
- # e.g. gt_bboxes and gt_bboxes_ignore
- bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h],
- dtype=np.float32)
- bboxes = results[key] - bbox_offset
- if self.bbox_clip_border:
- bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1])
- bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0])
- valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & (
- bboxes[:, 3] > bboxes[:, 1])
- # If the crop does not contain any gt-bbox area and
- # allow_negative_crop is False, skip this image.
- if (key == 'gt_bboxes' and not valid_inds.any()
- and not allow_negative_crop):
- return None
- results[key] = bboxes[valid_inds, :]
- # label fields. e.g. gt_labels and gt_labels_ignore
- label_key = self.bbox2label.get(key)
- if label_key in results:
- results[label_key] = results[label_key][valid_inds]
-
- # mask fields, e.g. gt_masks and gt_masks_ignore
- mask_key = self.bbox2mask.get(key)
- if mask_key in results:
- results[mask_key] = results[mask_key][
- valid_inds.nonzero()[0]].crop(
- np.asarray([crop_x1, crop_y1, crop_x2, crop_y2]))
-
- # crop semantic seg
- for key in results.get('seg_fields', []):
- results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2]
-
- return results
-
- def _get_crop_size(self, image_size):
- """Randomly generates the absolute crop size based on `crop_type` and
- `image_size`.
-
- Args:
- image_size (tuple): (h, w).
-
- Returns:
- crop_size (tuple): (crop_h, crop_w) in absolute pixels.
- """
- h, w = image_size
- if self.crop_type == 'absolute':
- return (min(self.crop_size[0], h), min(self.crop_size[1], w))
- elif self.crop_type == 'absolute_range':
- assert self.crop_size[0] <= self.crop_size[1]
- crop_h = np.random.randint(
- min(h, self.crop_size[0]),
- min(h, self.crop_size[1]) + 1)
- crop_w = np.random.randint(
- min(w, self.crop_size[0]),
- min(w, self.crop_size[1]) + 1)
- return crop_h, crop_w
- elif self.crop_type == 'relative':
- crop_h, crop_w = self.crop_size
- return int(h * crop_h + 0.5), int(w * crop_w + 0.5)
- elif self.crop_type == 'relative_range':
- crop_size = np.asarray(self.crop_size, dtype=np.float32)
- crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size)
- return int(h * crop_h + 0.5), int(w * crop_w + 0.5)
-
- def __call__(self, results):
- """Call function to randomly crop images, bounding boxes, masks,
- semantic segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Randomly cropped results, 'img_shape' key in result dict is
- updated according to crop size.
- """
- image_size = results['img'].shape[:2]
- crop_size = self._get_crop_size(image_size)
- results = self._crop_data(results, crop_size, self.allow_negative_crop)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(crop_size={self.crop_size}, '
- repr_str += f'crop_type={self.crop_type}, '
- repr_str += f'allow_negative_crop={self.allow_negative_crop}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
-
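The four ``crop_type`` options only differ in how the absolute crop size is derived from the image size; a numpy sketch mirroring ``_get_crop_size`` with hypothetical inputs:

```python
import numpy as np

h, w = 480, 640

# 'absolute': clamp the requested pixel size to the image size
absolute = (min(300, h), min(700, w))                   # -> (300, 640)

# 'absolute_range': sample each side in [crop_size[0], min(side, crop_size[1])]
crop_h = np.random.randint(256, min(h, 512) + 1)
crop_w = np.random.randint(256, min(w, 512) + 1)

# 'relative': a fixed fraction of the image
relative = (int(h * 0.75 + 0.5), int(w * 0.75 + 0.5))   # -> (360, 480)

# 'relative_range': fractions sampled uniformly in [crop_size, 1]
frac = np.asarray([0.5, 0.5]) + np.random.rand(2) * 0.5
relative_range = (int(h * frac[0] + 0.5), int(w * frac[1] + 0.5))

print(absolute, (crop_h, crop_w), relative, relative_range)
```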
-@PIPELINES.register_module()
-class SegRescale(object):
- """Rescale semantic segmentation maps.
-
- Args:
- scale_factor (float): The scale factor of the final output.
- backend (str): Image rescale backend, choices are 'cv2' and 'pillow'.
- These two backends generates slightly different results. Defaults
- to 'cv2'.
- """
-
- def __init__(self, scale_factor=1, backend='cv2'):
- self.scale_factor = scale_factor
- self.backend = backend
-
- def __call__(self, results):
- """Call function to scale the semantic segmentation map.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with semantic segmentation map scaled.
- """
-
- for key in results.get('seg_fields', []):
- if self.scale_factor != 1:
- results[key] = mmcv.imrescale(
- results[key],
- self.scale_factor,
- interpolation='nearest',
- backend=self.backend)
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(scale_factor={self.scale_factor})'
-
-
-@PIPELINES.register_module()
-class PhotoMetricDistortion(object):
-    """Apply photometric distortion to the image sequentially; every
-    transformation is applied with a probability of 0.5. Random contrast is
-    applied either second or second to last.
-
- 1. random brightness
- 2. random contrast (mode 0)
- 3. convert color from BGR to HSV
- 4. random saturation
- 5. random hue
- 6. convert color from HSV to BGR
- 7. random contrast (mode 1)
- 8. randomly swap channels
-
- Args:
- brightness_delta (int): delta of brightness.
- contrast_range (tuple): range of contrast.
- saturation_range (tuple): range of saturation.
- hue_delta (int): delta of hue.
- """
-
- def __init__(self,
- brightness_delta=32,
- contrast_range=(0.5, 1.5),
- saturation_range=(0.5, 1.5),
- hue_delta=18):
- self.brightness_delta = brightness_delta
- self.contrast_lower, self.contrast_upper = contrast_range
- self.saturation_lower, self.saturation_upper = saturation_range
- self.hue_delta = hue_delta
-
- def __call__(self, results):
- """Call function to perform photometric distortion on images.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images distorted.
- """
-
- if 'img_fields' in results:
- assert results['img_fields'] == ['img'], \
- 'Only single img_fields is allowed'
- img = results['img']
- assert img.dtype == np.float32, \
- 'PhotoMetricDistortion needs the input image of dtype np.float32,'\
- ' please set "to_float32=True" in "LoadImageFromFile" pipeline'
- # random brightness
- if random.randint(2):
- delta = random.uniform(-self.brightness_delta,
- self.brightness_delta)
- img += delta
-
- # mode == 0 --> do random contrast first
- # mode == 1 --> do random contrast last
- mode = random.randint(2)
- if mode == 1:
- if random.randint(2):
- alpha = random.uniform(self.contrast_lower,
- self.contrast_upper)
- img *= alpha
-
- # convert color from BGR to HSV
- img = mmcv.bgr2hsv(img)
-
- # random saturation
- if random.randint(2):
- img[..., 1] *= random.uniform(self.saturation_lower,
- self.saturation_upper)
-
- # random hue
- if random.randint(2):
- img[..., 0] += random.uniform(-self.hue_delta, self.hue_delta)
- img[..., 0][img[..., 0] > 360] -= 360
- img[..., 0][img[..., 0] < 0] += 360
-
- # convert color from HSV to BGR
- img = mmcv.hsv2bgr(img)
-
- # random contrast
- if mode == 0:
- if random.randint(2):
- alpha = random.uniform(self.contrast_lower,
- self.contrast_upper)
- img *= alpha
-
- # randomly swap channels
- if random.randint(2):
- img = img[..., random.permutation(3)]
-
- results['img'] = img
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(\nbrightness_delta={self.brightness_delta},\n'
- repr_str += 'contrast_range='
- repr_str += f'{(self.contrast_lower, self.contrast_upper)},\n'
- repr_str += 'saturation_range='
- repr_str += f'{(self.saturation_lower, self.saturation_upper)},\n'
- repr_str += f'hue_delta={self.hue_delta})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class Expand(object):
- """Random expand the image & bboxes.
-
- Randomly place the original image on a canvas of 'ratio' x original image
- size filled with mean values. The ratio is in the range of ratio_range.
-
- Args:
- mean (tuple): mean value of dataset.
- to_rgb (bool): if need to convert the order of mean to align with RGB.
- ratio_range (tuple): range of expand ratio.
- prob (float): probability of applying this transformation
- """
-
- def __init__(self,
- mean=(0, 0, 0),
- to_rgb=True,
- ratio_range=(1, 4),
- seg_ignore_label=None,
- prob=0.5):
- self.to_rgb = to_rgb
- self.ratio_range = ratio_range
- if to_rgb:
- self.mean = mean[::-1]
- else:
- self.mean = mean
- self.min_ratio, self.max_ratio = ratio_range
- self.seg_ignore_label = seg_ignore_label
- self.prob = prob
-
- def __call__(self, results):
- """Call function to expand images, bounding boxes.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images, bounding boxes expanded
- """
-
- if random.uniform(0, 1) > self.prob:
- return results
-
- if 'img_fields' in results:
- assert results['img_fields'] == ['img'], \
- 'Only single img_fields is allowed'
- img = results['img']
-
- h, w, c = img.shape
- ratio = random.uniform(self.min_ratio, self.max_ratio)
- # speedup expand when meets large image
- if np.all(self.mean == self.mean[0]):
- expand_img = np.empty((int(h * ratio), int(w * ratio), c),
- img.dtype)
- expand_img.fill(self.mean[0])
- else:
- expand_img = np.full((int(h * ratio), int(w * ratio), c),
- self.mean,
- dtype=img.dtype)
- left = int(random.uniform(0, w * ratio - w))
- top = int(random.uniform(0, h * ratio - h))
- expand_img[top:top + h, left:left + w] = img
-
- results['img'] = expand_img
- # expand bboxes
- for key in results.get('bbox_fields', []):
- results[key] = results[key] + np.tile(
- (left, top), 2).astype(results[key].dtype)
-
- # expand masks
- for key in results.get('mask_fields', []):
- results[key] = results[key].expand(
- int(h * ratio), int(w * ratio), top, left)
-
- # expand segs
- for key in results.get('seg_fields', []):
- gt_seg = results[key]
- expand_gt_seg = np.full((int(h * ratio), int(w * ratio)),
- self.seg_ignore_label,
- dtype=gt_seg.dtype)
- expand_gt_seg[top:top + h, left:left + w] = gt_seg
- results[key] = expand_gt_seg
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, '
- repr_str += f'ratio_range={self.ratio_range}, '
- repr_str += f'seg_ignore_label={self.seg_ignore_label})'
- return repr_str
-
-
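``Expand`` pastes the image at a random offset on a larger canvas filled with the mean value and shifts the boxes by the same offset; a numpy sketch with arbitrary sizes and an arbitrary fill value:

```python
import numpy as np

h, w, c = 100, 150, 3
img = np.random.randint(0, 256, (h, w, c), dtype=np.uint8)
bboxes = np.array([[10., 20., 60., 80.]], dtype=np.float32)

ratio = 2.0  # in the real transform this is drawn from ratio_range
canvas = np.full((int(h * ratio), int(w * ratio), c), 114, dtype=img.dtype)  # mean fill

left = int(np.random.uniform(0, w * ratio - w))
top = int(np.random.uniform(0, h * ratio - h))
canvas[top:top + h, left:left + w] = img

bboxes_expanded = bboxes + np.tile((left, top), 2).astype(bboxes.dtype)
print(canvas.shape, bboxes_expanded)
```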
-@PIPELINES.register_module()
-class MinIoURandomCrop(object):
-    """Random crop the image & bboxes; the cropped patches must have a minimum
-    IoU with the original image & bboxes, and the IoU threshold is randomly
-    selected from min_ious.
-
- Args:
- min_ious (tuple): minimum IoU threshold for all intersections with
- bounding boxes
- min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w,
- where a >= min_crop_size).
-        bbox_clip_border (bool, optional): Whether to clip objects outside
- the border of the image. Defaults to True.
-
- Note:
- The keys for bboxes, labels and masks should be paired. That is, \
- `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and \
- `gt_bboxes_ignore` to `gt_labels_ignore` and `gt_masks_ignore`.
- """
-
- def __init__(self,
- min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
- min_crop_size=0.3,
- bbox_clip_border=True):
- # 1: return ori img
- self.min_ious = min_ious
- self.sample_mode = (1, *min_ious, 0)
- self.min_crop_size = min_crop_size
- self.bbox_clip_border = bbox_clip_border
- self.bbox2label = {
- 'gt_bboxes': 'gt_labels',
- 'gt_bboxes_ignore': 'gt_labels_ignore'
- }
- self.bbox2mask = {
- 'gt_bboxes': 'gt_masks',
- 'gt_bboxes_ignore': 'gt_masks_ignore'
- }
-
- def __call__(self, results):
- """Call function to crop images and bounding boxes with minimum IoU
- constraint.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images and bounding boxes cropped, \
- 'img_shape' key is updated.
- """
-
- if 'img_fields' in results:
- assert results['img_fields'] == ['img'], \
- 'Only single img_fields is allowed'
- img = results['img']
- assert 'bbox_fields' in results
- boxes = [results[key] for key in results['bbox_fields']]
- boxes = np.concatenate(boxes, 0)
- h, w, c = img.shape
- while True:
- mode = random.choice(self.sample_mode)
- self.mode = mode
- if mode == 1:
- return results
-
- min_iou = mode
- for i in range(50):
- new_w = random.uniform(self.min_crop_size * w, w)
- new_h = random.uniform(self.min_crop_size * h, h)
-
- # h / w in [0.5, 2]
- if new_h / new_w < 0.5 or new_h / new_w > 2:
- continue
-
- left = random.uniform(w - new_w)
- top = random.uniform(h - new_h)
-
- patch = np.array(
- (int(left), int(top), int(left + new_w), int(top + new_h)))
- # Line or point crop is not allowed
- if patch[2] == patch[0] or patch[3] == patch[1]:
- continue
- overlaps = bbox_overlaps(
- patch.reshape(-1, 4), boxes.reshape(-1, 4)).reshape(-1)
- if len(overlaps) > 0 and overlaps.min() < min_iou:
- continue
-
- # center of boxes should inside the crop img
- # only adjust boxes and instance masks when the gt is not empty
- if len(overlaps) > 0:
- # adjust boxes
- def is_center_of_bboxes_in_patch(boxes, patch):
- center = (boxes[:, :2] + boxes[:, 2:]) / 2
- mask = ((center[:, 0] > patch[0]) *
- (center[:, 1] > patch[1]) *
- (center[:, 0] < patch[2]) *
- (center[:, 1] < patch[3]))
- return mask
-
- mask = is_center_of_bboxes_in_patch(boxes, patch)
- if not mask.any():
- continue
- for key in results.get('bbox_fields', []):
- boxes = results[key].copy()
- mask = is_center_of_bboxes_in_patch(boxes, patch)
- boxes = boxes[mask]
- if self.bbox_clip_border:
- boxes[:, 2:] = boxes[:, 2:].clip(max=patch[2:])
- boxes[:, :2] = boxes[:, :2].clip(min=patch[:2])
- boxes -= np.tile(patch[:2], 2)
-
- results[key] = boxes
- # labels
- label_key = self.bbox2label.get(key)
- if label_key in results:
- results[label_key] = results[label_key][mask]
-
- # mask fields
- mask_key = self.bbox2mask.get(key)
- if mask_key in results:
- results[mask_key] = results[mask_key][
- mask.nonzero()[0]].crop(patch)
- # adjust the img no matter whether the gt is empty before crop
- img = img[patch[1]:patch[3], patch[0]:patch[2]]
- results['img'] = img
- results['img_shape'] = img.shape
-
- # seg fields
- for key in results.get('seg_fields', []):
- results[key] = results[key][patch[1]:patch[3],
- patch[0]:patch[2]]
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(min_ious={self.min_ious}, '
- repr_str += f'min_crop_size={self.min_crop_size}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
-
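The crop-acceptance test in ``MinIoURandomCrop`` boils down to an IoU check between the candidate patch and every ground-truth box; a small numpy sketch of that check with made-up boxes (``iou`` here is a plain reimplementation used only for illustration, not mmdet's ``bbox_overlaps``):

```python
import numpy as np

def iou(patch, boxes):
    """IoU between one patch (4,) and boxes (N, 4), both as [x1, y1, x2, y2]."""
    x1 = np.maximum(patch[0], boxes[:, 0])
    y1 = np.maximum(patch[1], boxes[:, 1])
    x2 = np.minimum(patch[2], boxes[:, 2])
    y2 = np.minimum(patch[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_p = (patch[2] - patch[0]) * (patch[3] - patch[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_p + area_b - inter)

patch = np.array([50, 50, 250, 250], dtype=np.float32)
gt_boxes = np.array([[60, 60, 120, 120], [300, 300, 400, 400]], dtype=np.float32)

min_iou = 0.3  # one of the thresholds sampled from min_ious
accept = len(gt_boxes) == 0 or iou(patch, gt_boxes).min() >= min_iou
print(iou(patch, gt_boxes), accept)  # this patch would be rejected and re-sampled
```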
-@PIPELINES.register_module()
-class Corrupt(object):
- """Corruption augmentation.
-
- Corruption transforms implemented based on
-    `imagecorruptions <https://github.com/bethgelab/imagecorruptions>`_.
-
- Args:
- corruption (str): Corruption name.
- severity (int, optional): The severity of corruption. Default: 1.
- """
-
- def __init__(self, corruption, severity=1):
- self.corruption = corruption
- self.severity = severity
-
- def __call__(self, results):
- """Call function to corrupt image.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images corrupted.
- """
-
- if corrupt is None:
- raise RuntimeError('imagecorruptions is not installed')
- if 'img_fields' in results:
- assert results['img_fields'] == ['img'], \
- 'Only single img_fields is allowed'
- results['img'] = corrupt(
- results['img'].astype(np.uint8),
- corruption_name=self.corruption,
- severity=self.severity)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(corruption={self.corruption}, '
- repr_str += f'severity={self.severity})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class Albu(object):
- """Albumentation augmentation.
-
-    Adds custom transformations from the Albumentations library.
-    Please visit `https://albumentations.readthedocs.io`
-    for more information.
-
- An example of ``transforms`` is as followed:
-
- .. code-block::
-
- [
- dict(
- type='ShiftScaleRotate',
- shift_limit=0.0625,
- scale_limit=0.0,
- rotate_limit=0,
- interpolation=1,
- p=0.5),
- dict(
- type='RandomBrightnessContrast',
- brightness_limit=[0.1, 0.3],
- contrast_limit=[0.1, 0.3],
- p=0.2),
- dict(type='ChannelShuffle', p=0.1),
- dict(
- type='OneOf',
- transforms=[
- dict(type='Blur', blur_limit=3, p=1.0),
- dict(type='MedianBlur', blur_limit=3, p=1.0)
- ],
- p=0.1),
- ]
-
- Args:
- transforms (list[dict]): A list of albu transformations
- bbox_params (dict): Bbox_params for albumentation `Compose`
- keymap (dict): Contains {'input key':'albumentation-style key'}
-        skip_img_without_anno (bool): Whether to skip the image if no
-            annotations are left after augmentation.
- """
-
- def __init__(self,
- transforms,
- bbox_params=None,
- keymap=None,
- update_pad_shape=False,
- skip_img_without_anno=False):
- if Compose is None:
- raise RuntimeError('albumentations is not installed')
-
- # Args will be modified later, copying it will be safer
- transforms = copy.deepcopy(transforms)
- if bbox_params is not None:
- bbox_params = copy.deepcopy(bbox_params)
- if keymap is not None:
- keymap = copy.deepcopy(keymap)
- self.transforms = transforms
- self.filter_lost_elements = False
- self.update_pad_shape = update_pad_shape
- self.skip_img_without_anno = skip_img_without_anno
-
- # A simple workaround to remove masks without boxes
- if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params
- and 'filter_lost_elements' in bbox_params):
- self.filter_lost_elements = True
- self.origin_label_fields = bbox_params['label_fields']
- bbox_params['label_fields'] = ['idx_mapper']
- del bbox_params['filter_lost_elements']
-
- self.bbox_params = (
- self.albu_builder(bbox_params) if bbox_params else None)
- self.aug = Compose([self.albu_builder(t) for t in self.transforms],
- bbox_params=self.bbox_params)
-
- if not keymap:
- self.keymap_to_albu = {
- 'img': 'image',
- 'gt_masks': 'masks',
- 'gt_bboxes': 'bboxes'
- }
- else:
- self.keymap_to_albu = keymap
- self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()}
-
- def albu_builder(self, cfg):
- """Import a module from albumentations.
-
- It inherits some of :func:`build_from_cfg` logic.
-
- Args:
- cfg (dict): Config dict. It should at least contain the key "type".
-
- Returns:
- obj: The constructed object.
- """
-
- assert isinstance(cfg, dict) and 'type' in cfg
- args = cfg.copy()
-
- obj_type = args.pop('type')
- if mmcv.is_str(obj_type):
- if albumentations is None:
- raise RuntimeError('albumentations is not installed')
- obj_cls = getattr(albumentations, obj_type)
- elif inspect.isclass(obj_type):
- obj_cls = obj_type
- else:
- raise TypeError(
- f'type must be a str or valid type, but got {type(obj_type)}')
-
- if 'transforms' in args:
- args['transforms'] = [
- self.albu_builder(transform)
- for transform in args['transforms']
- ]
-
- return obj_cls(**args)
-
- @staticmethod
- def mapper(d, keymap):
- """Dictionary mapper. Renames keys according to keymap provided.
-
- Args:
- d (dict): old dict
- keymap (dict): {'old_key':'new_key'}
- Returns:
- dict: new dict.
- """
-
- updated_dict = {}
- for k, v in zip(d.keys(), d.values()):
- new_k = keymap.get(k, k)
- updated_dict[new_k] = d[k]
- return updated_dict
-
- def __call__(self, results):
- # dict to albumentations format
- results = self.mapper(results, self.keymap_to_albu)
- # TODO: add bbox_fields
- if 'bboxes' in results:
- # to list of boxes
- if isinstance(results['bboxes'], np.ndarray):
- results['bboxes'] = [x for x in results['bboxes']]
- # add pseudo-field for filtration
- if self.filter_lost_elements:
- results['idx_mapper'] = np.arange(len(results['bboxes']))
-
- # TODO: Support mask structure in albu
- if 'masks' in results:
- if isinstance(results['masks'], PolygonMasks):
- raise NotImplementedError(
- 'Albu only supports BitMap masks now')
- ori_masks = results['masks']
- if albumentations.__version__ < '0.5':
- results['masks'] = results['masks'].masks
- else:
- results['masks'] = [mask for mask in results['masks'].masks]
-
- results = self.aug(**results)
-
- if 'bboxes' in results:
- if isinstance(results['bboxes'], list):
- results['bboxes'] = np.array(
- results['bboxes'], dtype=np.float32)
- results['bboxes'] = results['bboxes'].reshape(-1, 4)
-
- # filter label_fields
- if self.filter_lost_elements:
-
- for label in self.origin_label_fields:
- results[label] = np.array(
- [results[label][i] for i in results['idx_mapper']])
- if 'masks' in results:
- results['masks'] = np.array(
- [results['masks'][i] for i in results['idx_mapper']])
- results['masks'] = ori_masks.__class__(
- results['masks'], results['image'].shape[0],
- results['image'].shape[1])
-
- if (not len(results['idx_mapper'])
- and self.skip_img_without_anno):
- return None
-
- if 'gt_labels' in results:
- if isinstance(results['gt_labels'], list):
- results['gt_labels'] = np.array(results['gt_labels'])
- results['gt_labels'] = results['gt_labels'].astype(np.int64)
-
- # back to the original format
- results = self.mapper(results, self.keymap_back)
-
- # update final shape
- if self.update_pad_shape:
- results['pad_shape'] = results['img'].shape
-
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__ + f'(transforms={self.transforms})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class RandomCenterCropPad(object):
- """Random center crop and random around padding for CornerNet.
-
-    This operation generates a randomly cropped image from the original image
-    and pads it simultaneously. Different from :class:`RandomCrop`, the output
- shape may not equal to ``crop_size`` strictly. We choose a random value
- from ``ratios`` and the output shape could be larger or smaller than
- ``crop_size``. The padding operation is also different from :class:`Pad`,
- here we use around padding instead of right-bottom padding.
-
- The relation between output image (padding image) and original image:
-
- .. code:: text
-
- output image
-
- +----------------------------+
- | padded area |
- +------|----------------------------|----------+
- | | cropped area | |
- | | +---------------+ | |
- | | | . center | | | original image
- | | | range | | |
- | | +---------------+ | |
- +------|----------------------------|----------+
- | padded area |
- +----------------------------+
-
- There are 5 main areas in the figure:
-
- - output image: output image of this operation, also called padding
- image in following instruction.
- - original image: input image of this operation.
- - padded area: non-intersect area of output image and original image.
- - cropped area: the overlap of output image and original image.
-    - center range: a smaller area from which the random center is chosen.
-      The center range is computed from ``border`` and the original image's
-      shape so that the random center is not too close to the image border.
-
-    This operation also acts differently in train and test mode; the
-    pipelines are summarized below.
-
- Train pipeline:
-
- 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image
- will be ``random_ratio * crop_size``.
- 2. Choose a ``random_center`` in center range.
- 3. Generate padding image with center matches the ``random_center``.
- 4. Initialize the padding image with pixel value equals to ``mean``.
- 5. Copy the cropped area to padding image.
- 6. Refine annotations.
-
- Test pipeline:
-
- 1. Compute output shape according to ``test_pad_mode``.
- 2. Generate padding image with center matches the original image
- center.
- 3. Initialize the padding image with pixel value equals to ``mean``.
- 4. Copy the ``cropped area`` to padding image.
-
- Args:
-        crop_size (tuple | None): expected size after crop; the final size
-            will be computed according to ratio. Requires (h, w) in train
-            mode, and None in test mode.
- ratios (tuple): random select a ratio from tuple and crop image to
- (crop_size[0] * ratio) * (crop_size[1] * ratio).
- Only available in train mode.
-        border (int): max distance from the center-selection area to the image border.
- Only available in train mode.
- mean (sequence): Mean values of 3 channels.
- std (sequence): Std values of 3 channels.
- to_rgb (bool): Whether to convert the image from BGR to RGB.
-        test_mode (bool): whether to involve random variables in the transform.
-            In train mode, crop_size is fixed, and the center coords and ratio
-            are randomly selected from predefined lists. In test mode,
-            crop_size is the image's original shape, and the center coords and
-            ratio are fixed.
- test_pad_mode (tuple): padding method and padding shape value, only
- available in test mode. Default is using 'logical_or' with
- 127 as padding shape value.
-
- - 'logical_or': final_shape = input_shape | padding_shape_value
- - 'size_divisor': final_shape = int(
- ceil(input_shape / padding_shape_value) * padding_shape_value)
-        bbox_clip_border (bool, optional): Whether to clip objects outside
- the border of the image. Defaults to True.
- """
-
- def __init__(self,
- crop_size=None,
- ratios=(0.9, 1.0, 1.1),
- border=128,
- mean=None,
- std=None,
- to_rgb=None,
- test_mode=False,
- test_pad_mode=('logical_or', 127),
- bbox_clip_border=True):
- if test_mode:
- assert crop_size is None, 'crop_size must be None in test mode'
- assert ratios is None, 'ratios must be None in test mode'
- assert border is None, 'border must be None in test mode'
- assert isinstance(test_pad_mode, (list, tuple))
- assert test_pad_mode[0] in ['logical_or', 'size_divisor']
- else:
- assert isinstance(crop_size, (list, tuple))
- assert crop_size[0] > 0 and crop_size[1] > 0, (
- 'crop_size must > 0 in train mode')
- assert isinstance(ratios, (list, tuple))
- assert test_pad_mode is None, (
- 'test_pad_mode must be None in train mode')
-
- self.crop_size = crop_size
- self.ratios = ratios
- self.border = border
- # We do not set default value to mean, std and to_rgb because these
- # hyper-parameters are easy to forget but could affect the performance.
- # Please use the same setting as Normalize for performance assurance.
- assert mean is not None and std is not None and to_rgb is not None
- self.to_rgb = to_rgb
- self.input_mean = mean
- self.input_std = std
- if to_rgb:
- self.mean = mean[::-1]
- self.std = std[::-1]
- else:
- self.mean = mean
- self.std = std
- self.test_mode = test_mode
- self.test_pad_mode = test_pad_mode
- self.bbox_clip_border = bbox_clip_border
-
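    # Illustrative sketch: how the two ``test_pad_mode`` options described above
    # map an input shape to the padded output shape, for a hypothetical 427 x 640 image:
    #   ('logical_or', 127)  -> (427 | 127, 640 | 127) == (511, 767)
    #   ('size_divisor', 32) -> (ceil(427 / 32) * 32, ceil(640 / 32) * 32) == (448, 640)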
- def _get_border(self, border, size):
- """Get final border for the target size.
-
-        This function generates a ``final_border`` according to the image's
-        shape. The area between ``final_border`` and ``size - final_border``
-        is the ``center range``. We randomly choose the center from this
-        range so that it is not too close to the original image's border.
-        The ``center range`` should also be larger than 0.
-
- Args:
- border (int): The initial border, default is 128.
- size (int): The width or height of original image.
- Returns:
- int: The final border.
- """
- k = 2 * border / size
- i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k)))
- return border // i
-
- def _filter_boxes(self, patch, boxes):
- """Check whether the center of each box is in the patch.
-
- Args:
- patch (list[int]): The cropped area, [left, top, right, bottom].
- boxes (numpy array, (N x 4)): Ground truth boxes.
-
- Returns:
- mask (numpy array, (N,)): Each box is inside or outside the patch.
- """
- center = (boxes[:, :2] + boxes[:, 2:]) / 2
- mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * (
- center[:, 0] < patch[2]) * (
- center[:, 1] < patch[3])
- return mask
-
- def _crop_image_and_paste(self, image, center, size):
-        """Crop the image with a given center and size, then paste the cropped
-        image onto a blank image with the two centers aligned.
-
-        This function is equivalent to generating a blank image with ``size``
-        as its shape, then covering the original image with it so that the two
-        centers (the center of the blank image and the random center of the
-        original image) are aligned. The overlap area is pasted from the
-        original image and the outside area is filled with the ``mean pixel``.
-
- Args:
- image (np array, H x W x C): Original image.
- center (list[int]): Target crop center coord.
- size (list[int]): Target crop size. [target_h, target_w]
-
- Returns:
- cropped_img (np array, target_h x target_w x C): Cropped image.
- border (np array, 4): The distance of four border of
- ``cropped_img`` to the original image area, [top, bottom,
- left, right]
- patch (list[int]): The cropped area, [left, top, right, bottom].
- """
- center_y, center_x = center
- target_h, target_w = size
- img_h, img_w, img_c = image.shape
-
- x0 = max(0, center_x - target_w // 2)
- x1 = min(center_x + target_w // 2, img_w)
- y0 = max(0, center_y - target_h // 2)
- y1 = min(center_y + target_h // 2, img_h)
- patch = np.array((int(x0), int(y0), int(x1), int(y1)))
-
- left, right = center_x - x0, x1 - center_x
- top, bottom = center_y - y0, y1 - center_y
-
- cropped_center_y, cropped_center_x = target_h // 2, target_w // 2
- cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype)
- for i in range(img_c):
- cropped_img[:, :, i] += self.mean[i]
- y_slice = slice(cropped_center_y - top, cropped_center_y + bottom)
- x_slice = slice(cropped_center_x - left, cropped_center_x + right)
- cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :]
-
- border = np.array([
- cropped_center_y - top, cropped_center_y + bottom,
- cropped_center_x - left, cropped_center_x + right
- ],
- dtype=np.float32)
-
- return cropped_img, border, patch
-
- def _train_aug(self, results):
- """Random crop and around padding the original image.
-
- Args:
- results (dict): Image information in the augmentation pipeline.
-
- Returns:
- results (dict): The updated dict.
- """
- img = results['img']
- h, w, c = img.shape
- boxes = results['gt_bboxes']
- while True:
- scale = random.choice(self.ratios)
- new_h = int(self.crop_size[0] * scale)
- new_w = int(self.crop_size[1] * scale)
- h_border = self._get_border(self.border, h)
- w_border = self._get_border(self.border, w)
-
- for i in range(50):
- center_x = random.randint(low=w_border, high=w - w_border)
- center_y = random.randint(low=h_border, high=h - h_border)
-
- cropped_img, border, patch = self._crop_image_and_paste(
- img, [center_y, center_x], [new_h, new_w])
-
- mask = self._filter_boxes(patch, boxes)
- # If the image has GT boxes but none fall inside this patch,
- # resample; if there are no boxes at all, any crop patch is valid.
- if not mask.any() and len(boxes) > 0:
- continue
-
- results['img'] = cropped_img
- results['img_shape'] = cropped_img.shape
- results['pad_shape'] = cropped_img.shape
-
- x0, y0, x1, y1 = patch
-
- left_w, top_h = center_x - x0, center_y - y0
- cropped_center_x, cropped_center_y = new_w // 2, new_h // 2
-
- # crop bboxes accordingly and clip to the image boundary
- for key in results.get('bbox_fields', []):
- mask = self._filter_boxes(patch, results[key])
- bboxes = results[key][mask]
- bboxes[:, 0:4:2] += cropped_center_x - left_w - x0
- bboxes[:, 1:4:2] += cropped_center_y - top_h - y0
- if self.bbox_clip_border:
- bboxes[:, 0:4:2] = np.clip(bboxes[:, 0:4:2], 0, new_w)
- bboxes[:, 1:4:2] = np.clip(bboxes[:, 1:4:2], 0, new_h)
- keep = (bboxes[:, 2] > bboxes[:, 0]) & (
- bboxes[:, 3] > bboxes[:, 1])
- bboxes = bboxes[keep]
- results[key] = bboxes
- if key in ['gt_bboxes']:
- if 'gt_labels' in results:
- labels = results['gt_labels'][mask]
- labels = labels[keep]
- results['gt_labels'] = labels
- if 'gt_masks' in results:
- raise NotImplementedError(
- 'RandomCenterCropPad only supports bbox.')
-
- # crop semantic seg
- for key in results.get('seg_fields', []):
- raise NotImplementedError(
- 'RandomCenterCropPad only supports bbox.')
- return results
-
- def _test_aug(self, results):
- """Around padding the original image without cropping.
-
- The padding mode and value are from ``test_pad_mode``.
-
- Args:
- results (dict): Image information in the augmentation pipeline.
-
- Returns:
- results (dict): The updated dict.
- """
- img = results['img']
- h, w, c = img.shape
- results['img_shape'] = img.shape
- if self.test_pad_mode[0] in ['logical_or']:
- target_h = h | self.test_pad_mode[1]
- target_w = w | self.test_pad_mode[1]
- elif self.test_pad_mode[0] in ['size_divisor']:
- divisor = self.test_pad_mode[1]
- target_h = int(np.ceil(h / divisor)) * divisor
- target_w = int(np.ceil(w / divisor)) * divisor
- else:
- raise NotImplementedError(
- 'RandomCenterCropPad only supports two testing pad modes: '
- 'logical_or and size_divisor.')
-
- cropped_img, border, _ = self._crop_image_and_paste(
- img, [h // 2, w // 2], [target_h, target_w])
- results['img'] = cropped_img
- results['pad_shape'] = cropped_img.shape
- results['border'] = border
- return results
-
- def __call__(self, results):
- img = results['img']
- assert img.dtype == np.float32, (
- 'RandomCenterCropPad needs the input image of dtype np.float32,'
- ' please set "to_float32=True" in "LoadImageFromFile" pipeline')
- h, w, c = img.shape
- assert c == len(self.mean)
- if self.test_mode:
- return self._test_aug(results)
- else:
- return self._train_aug(results)
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(crop_size={self.crop_size}, '
- repr_str += f'ratios={self.ratios}, '
- repr_str += f'border={self.border}, '
- repr_str += f'mean={self.input_mean}, '
- repr_str += f'std={self.input_std}, '
- repr_str += f'to_rgb={self.to_rgb}, '
- repr_str += f'test_mode={self.test_mode}, '
- repr_str += f'test_pad_mode={self.test_pad_mode}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class CutOut(object):
- """CutOut operation.
-
- Randomly drop some regions of the image, as used in
- `Cutout <https://arxiv.org/abs/1708.04552>`_.
-
- Args:
- n_holes (int | tuple[int, int]): Number of regions to be dropped.
- If it is given as a list, number of holes will be randomly
- selected from the closed interval [`n_holes[0]`, `n_holes[1]`].
- cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate
- shape of dropped regions. It can be `tuple[int, int]` to use a
- fixed cutout shape, or `list[tuple[int, int]]` to randomly choose
- shape from the list.
- cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The
- candidate ratio of dropped regions. It can be `tuple[float, float]`
- to use a fixed ratio or `list[tuple[float, float]]` to randomly
- choose ratio from the list. Please note that `cutout_shape`
- and `cutout_ratio` cannot be both given at the same time.
- fill_in (tuple[float, float, float] | tuple[int, int, int]): The value
- of pixel to fill in the dropped regions. Default: (0, 0, 0).
- """
-
- def __init__(self,
- n_holes,
- cutout_shape=None,
- cutout_ratio=None,
- fill_in=(0, 0, 0)):
-
- assert (cutout_shape is None) ^ (cutout_ratio is None), \
- 'Either cutout_shape or cutout_ratio should be specified.'
- assert (isinstance(cutout_shape, (list, tuple))
- or isinstance(cutout_ratio, (list, tuple)))
- if isinstance(n_holes, tuple):
- assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1]
- else:
- n_holes = (n_holes, n_holes)
- self.n_holes = n_holes
- self.fill_in = fill_in
- self.with_ratio = cutout_ratio is not None
- self.candidates = cutout_ratio if self.with_ratio else cutout_shape
- if not isinstance(self.candidates, list):
- self.candidates = [self.candidates]
-
- def __call__(self, results):
- """Call function to drop some regions of image."""
- h, w, c = results['img'].shape
- n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1)
- for _ in range(n_holes):
- x1 = np.random.randint(0, w)
- y1 = np.random.randint(0, h)
- index = np.random.randint(0, len(self.candidates))
- if not self.with_ratio:
- cutout_w, cutout_h = self.candidates[index]
- else:
- cutout_w = int(self.candidates[index][0] * w)
- cutout_h = int(self.candidates[index][1] * h)
-
- x2 = np.clip(x1 + cutout_w, 0, w)
- y2 = np.clip(y1 + cutout_h, 0, h)
- results['img'][y1:y2, x1:x2, :] = self.fill_in
-
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(n_holes={self.n_holes}, '
- repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio
- else f'cutout_shape={self.candidates}, ')
- repr_str += f'fill_in={self.fill_in})'
- return repr_str
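
For context, the cutout logic deleted above is straightforward to reproduce outside the pipeline registry. Below is a minimal standalone sketch of the same idea (not the deleted class itself); it assumes a NumPy H x W x C image, a fixed hole shape, and a constant fill value, mirroring the loop in CutOut.__call__.

import numpy as np

def cutout(img, n_holes=1, cutout_shape=(20, 20), fill_in=(0, 0, 0)):
    """Drop n_holes rectangular regions from a copy of `img`."""
    out = img.copy()
    h, w = out.shape[:2]
    cutout_h, cutout_w = cutout_shape
    for _ in range(n_holes):
        # The top-left corner is sampled anywhere in the image; the region
        # is clipped at the right/bottom borders, as in the deleted class.
        x1 = np.random.randint(0, w)
        y1 = np.random.randint(0, h)
        x2 = min(x1 + cutout_w, w)
        y2 = min(y1 + cutout_h, h)
        out[y1:y2, x1:x2, :] = fill_in
    return out

# Example: drop two 20x20 patches from a random 100x100 RGB image.
img = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
augmented = cutout(img, n_holes=2)
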
diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Theb.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Theb.py
deleted file mode 100644
index aa43ebc55d74ffaa722fe008424fce97c622a323..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Theb.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import os
-import json
-import time
-import subprocess
-
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://theb.ai'
-model = ['gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-
- path = os.path.dirname(os.path.realpath(__file__))
- config = json.dumps({
- 'messages': messages,
- 'model': model}, separators=(',', ':'))
-
- cmd = ['python3', f'{path}/helpers/theb.py', config]
-
- p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
- for line in iter(p.stdout.readline, b''):
- yield line.decode('utf-8')
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
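
The provider removed above shells out to a helper script and streams its stdout line by line. Here is a minimal sketch of that streaming-subprocess pattern, using only the standard library; the helpers/theb.py script and its JSON message format are specific to this repository and are not reproduced here.

import subprocess

def stream_lines(cmd):
    """Run `cmd` and yield decoded stdout lines as they arrive."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in iter(p.stdout.readline, b''):
        yield line.decode('utf-8')
    p.wait()

# Example: stream the output of a short-lived command.
for chunk in stream_lines(['python3', '-c', 'print("hello"); print("world")']):
    print(chunk, end='')
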
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_config.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_config.py
deleted file mode 100644
index 96d4200773d85eef9e846a4e57d63d0f2ee1b9aa..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_config.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# SPDX-License-Identifier: MIT
-
-
-__all__ = ["set_run_validators", "get_run_validators"]
-
-_run_validators = True
-
-
-def set_run_validators(run):
- """
- Set whether or not validators are run. By default, they are run.
-
- .. deprecated:: 21.3.0 It will not be removed, but it also will not be
- moved to new ``attrs`` namespace. Use `attrs.validators.set_disabled()`
- instead.
- """
- if not isinstance(run, bool):
- raise TypeError("'run' must be bool.")
- global _run_validators
- _run_validators = run
-
-
-def get_run_validators():
- """
- Return whether or not validators are run.
-
- .. deprecated:: 21.3.0 It will not be removed, but it also will not be
- moved to new ``attrs`` namespace. Use `attrs.validators.get_disabled()`
- instead.
- """
- return _run_validators
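
The two helpers removed above form a small global switch for attrs validators. A short usage sketch follows, assuming the attrs package is installed; per the docstrings above, attrs.validators.set_disabled() is the preferred replacement.

import attr

@attr.s
class Point:
    x = attr.ib(validator=attr.validators.instance_of(int))

attr.set_run_validators(False)   # deprecated toggle; see set_disabled() note above
Point(x="not an int")            # no longer raises, validators are skipped
attr.set_run_validators(True)    # re-enable; attr.get_run_validators() -> True
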
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/_parser.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/_parser.py
deleted file mode 100644
index 37d1663b2f72447800d9a553929e3de932244289..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/_parser.py
+++ /dev/null
@@ -1,1613 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-This module offers a generic date/time string parser which is able to parse
-most known formats to represent a date and/or time.
-
-This module attempts to be forgiving with regards to unlikely input formats,
-returning a datetime object even for dates which are ambiguous. If an element
-of a date/time stamp is omitted, the following rules are applied:
-
-- If AM or PM is left unspecified, a 24-hour clock is assumed, however, an hour
- on a 12-hour clock (``0 <= hour <= 12``) *must* be specified if AM or PM is
- specified.
-- If a time zone is omitted, a timezone-naive datetime is returned.
-
-If any other elements are missing, they are taken from the
-:class:`datetime.datetime` object passed to the parameter ``default``. If this
-results in a day number exceeding the valid number of days per month, the
-value falls back to the end of the month.
-
-Additional resources about date/time string formats can be found below:
-
-- `A summary of the international standard date and time notation
- `_
-- `W3C Date and Time Formats `_
-- `Time Formats (Planetary Rings Node) `_
-- `CPAN ParseDate module
- `_
-- `Java SimpleDateFormat Class
- `_
-"""
-from __future__ import unicode_literals
-
-import datetime
-import re
-import string
-import time
-import warnings
-
-from calendar import monthrange
-from io import StringIO
-
-import six
-from six import integer_types, text_type
-
-from decimal import Decimal
-
-from warnings import warn
-
-from .. import relativedelta
-from .. import tz
-
-__all__ = ["parse", "parserinfo", "ParserError"]
-
-
-# TODO: pandas.core.tools.datetimes imports this explicitly. Might be worth
-# making public and/or figuring out if there is something we can
-# take off their plate.
-class _timelex(object):
- # Fractional seconds are sometimes split by a comma
- _split_decimal = re.compile("([.,])")
-
- def __init__(self, instream):
- if isinstance(instream, (bytes, bytearray)):
- instream = instream.decode()
-
- if isinstance(instream, text_type):
- instream = StringIO(instream)
- elif getattr(instream, 'read', None) is None:
- raise TypeError('Parser must be a string or character stream, not '
- '{itype}'.format(itype=instream.__class__.__name__))
-
- self.instream = instream
- self.charstack = []
- self.tokenstack = []
- self.eof = False
-
- def get_token(self):
- """
- This function breaks the time string into lexical units (tokens), which
- can be parsed by the parser. Lexical units are demarcated by changes in
- the character set, so any continuous string of letters is considered
- one unit, any continuous string of numbers is considered one unit.
-
- The main complication arises from the fact that dots ('.') can be used
- both as separators (e.g. "Sep.20.2009") or decimal points (e.g.
- "4:30:21.447"). As such, it is necessary to read the full context of
- any dot-separated strings before breaking it into tokens; as such, this
- function maintains a "token stack", for when the ambiguous context
- demands that multiple tokens be parsed at once.
- """
- if self.tokenstack:
- return self.tokenstack.pop(0)
-
- seenletters = False
- token = None
- state = None
-
- while not self.eof:
- # We only realize that we've reached the end of a token when we
- # find a character that's not part of the current token - since
- # that character may be part of the next token, it's stored in the
- # charstack.
- if self.charstack:
- nextchar = self.charstack.pop(0)
- else:
- nextchar = self.instream.read(1)
- while nextchar == '\x00':
- nextchar = self.instream.read(1)
-
- if not nextchar:
- self.eof = True
- break
- elif not state:
- # First character of the token - determines if we're starting
- # to parse a word, a number or something else.
- token = nextchar
- if self.isword(nextchar):
- state = 'a'
- elif self.isnum(nextchar):
- state = '0'
- elif self.isspace(nextchar):
- token = ' '
- break # emit token
- else:
- break # emit token
- elif state == 'a':
- # If we've already started reading a word, we keep reading
- # letters until we find something that's not part of a word.
- seenletters = True
- if self.isword(nextchar):
- token += nextchar
- elif nextchar == '.':
- token += nextchar
- state = 'a.'
- else:
- self.charstack.append(nextchar)
- break # emit token
- elif state == '0':
- # If we've already started reading a number, we keep reading
- # numbers until we find something that doesn't fit.
- if self.isnum(nextchar):
- token += nextchar
- elif nextchar == '.' or (nextchar == ',' and len(token) >= 2):
- token += nextchar
- state = '0.'
- else:
- self.charstack.append(nextchar)
- break # emit token
- elif state == 'a.':
- # If we've seen some letters and a dot separator, continue
- # parsing, and the tokens will be broken up later.
- seenletters = True
- if nextchar == '.' or self.isword(nextchar):
- token += nextchar
- elif self.isnum(nextchar) and token[-1] == '.':
- token += nextchar
- state = '0.'
- else:
- self.charstack.append(nextchar)
- break # emit token
- elif state == '0.':
- # If we've seen at least one dot separator, keep going, we'll
- # break up the tokens later.
- if nextchar == '.' or self.isnum(nextchar):
- token += nextchar
- elif self.isword(nextchar) and token[-1] == '.':
- token += nextchar
- state = 'a.'
- else:
- self.charstack.append(nextchar)
- break # emit token
-
- if (state in ('a.', '0.') and (seenletters or token.count('.') > 1 or
- token[-1] in '.,')):
- l = self._split_decimal.split(token)
- token = l[0]
- for tok in l[1:]:
- if tok:
- self.tokenstack.append(tok)
-
- if state == '0.' and token.count('.') == 0:
- token = token.replace(',', '.')
-
- return token
-
- def __iter__(self):
- return self
-
- def __next__(self):
- token = self.get_token()
- if token is None:
- raise StopIteration
-
- return token
-
- def next(self):
- return self.__next__() # Python 2.x support
-
- @classmethod
- def split(cls, s):
- return list(cls(s))
-
- @classmethod
- def isword(cls, nextchar):
- """ Whether or not the next character is part of a word """
- return nextchar.isalpha()
-
- @classmethod
- def isnum(cls, nextchar):
- """ Whether the next character is part of a number """
- return nextchar.isdigit()
-
- @classmethod
- def isspace(cls, nextchar):
- """ Whether the next character is whitespace """
- return nextchar.isspace()
-
-
-class _resultbase(object):
-
- def __init__(self):
- for attr in self.__slots__:
- setattr(self, attr, None)
-
- def _repr(self, classname):
- l = []
- for attr in self.__slots__:
- value = getattr(self, attr)
- if value is not None:
- l.append("%s=%s" % (attr, repr(value)))
- return "%s(%s)" % (classname, ", ".join(l))
-
- def __len__(self):
- return (sum(getattr(self, attr) is not None
- for attr in self.__slots__))
-
- def __repr__(self):
- return self._repr(self.__class__.__name__)
-
-
-class parserinfo(object):
- """
- Class which handles what inputs are accepted. Subclass this to customize
- the language and acceptable values for each parameter.
-
- :param dayfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the day (``True``) or month (``False``). If
- ``yearfirst`` is set to ``True``, this distinguishes between YDM
- and YMD. Default is ``False``.
-
- :param yearfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the year. If ``True``, the first number is taken
- to be the year, otherwise the last number is taken to be the year.
- Default is ``False``.
- """
-
- # m from a.m/p.m, t from ISO T separator
- JUMP = [" ", ".", ",", ";", "-", "/", "'",
- "at", "on", "and", "ad", "m", "t", "of",
- "st", "nd", "rd", "th"]
-
- WEEKDAYS = [("Mon", "Monday"),
- ("Tue", "Tuesday"), # TODO: "Tues"
- ("Wed", "Wednesday"),
- ("Thu", "Thursday"), # TODO: "Thurs"
- ("Fri", "Friday"),
- ("Sat", "Saturday"),
- ("Sun", "Sunday")]
- MONTHS = [("Jan", "January"),
- ("Feb", "February"), # TODO: "Febr"
- ("Mar", "March"),
- ("Apr", "April"),
- ("May", "May"),
- ("Jun", "June"),
- ("Jul", "July"),
- ("Aug", "August"),
- ("Sep", "Sept", "September"),
- ("Oct", "October"),
- ("Nov", "November"),
- ("Dec", "December")]
- HMS = [("h", "hour", "hours"),
- ("m", "minute", "minutes"),
- ("s", "second", "seconds")]
- AMPM = [("am", "a"),
- ("pm", "p")]
- UTCZONE = ["UTC", "GMT", "Z", "z"]
- PERTAIN = ["of"]
- TZOFFSET = {}
- # TODO: ERA = ["AD", "BC", "CE", "BCE", "Stardate",
- # "Anno Domini", "Year of Our Lord"]
-
- def __init__(self, dayfirst=False, yearfirst=False):
- self._jump = self._convert(self.JUMP)
- self._weekdays = self._convert(self.WEEKDAYS)
- self._months = self._convert(self.MONTHS)
- self._hms = self._convert(self.HMS)
- self._ampm = self._convert(self.AMPM)
- self._utczone = self._convert(self.UTCZONE)
- self._pertain = self._convert(self.PERTAIN)
-
- self.dayfirst = dayfirst
- self.yearfirst = yearfirst
-
- self._year = time.localtime().tm_year
- self._century = self._year // 100 * 100
-
- def _convert(self, lst):
- dct = {}
- for i, v in enumerate(lst):
- if isinstance(v, tuple):
- for v in v:
- dct[v.lower()] = i
- else:
- dct[v.lower()] = i
- return dct
-
- def jump(self, name):
- return name.lower() in self._jump
-
- def weekday(self, name):
- try:
- return self._weekdays[name.lower()]
- except KeyError:
- pass
- return None
-
- def month(self, name):
- try:
- return self._months[name.lower()] + 1
- except KeyError:
- pass
- return None
-
- def hms(self, name):
- try:
- return self._hms[name.lower()]
- except KeyError:
- return None
-
- def ampm(self, name):
- try:
- return self._ampm[name.lower()]
- except KeyError:
- return None
-
- def pertain(self, name):
- return name.lower() in self._pertain
-
- def utczone(self, name):
- return name.lower() in self._utczone
-
- def tzoffset(self, name):
- if name in self._utczone:
- return 0
-
- return self.TZOFFSET.get(name)
-
- def convertyear(self, year, century_specified=False):
- """
- Converts two-digit years to year within [-50, 49]
- range of self._year (current local time)
- """
-
- # Function contract is that the year is always positive
- assert year >= 0
-
- if year < 100 and not century_specified:
- # assume current century to start
- year += self._century
-
- if year >= self._year + 50: # if too far in future
- year -= 100
- elif year < self._year - 50: # if too far in past
- year += 100
-
- return year
-
- def validate(self, res):
- # move to info
- if res.year is not None:
- res.year = self.convertyear(res.year, res.century_specified)
-
- if ((res.tzoffset == 0 and not res.tzname) or
- (res.tzname == 'Z' or res.tzname == 'z')):
- res.tzname = "UTC"
- res.tzoffset = 0
- elif res.tzoffset != 0 and res.tzname and self.utczone(res.tzname):
- res.tzoffset = 0
- return True
-
-
-class _ymd(list):
- def __init__(self, *args, **kwargs):
- super(self.__class__, self).__init__(*args, **kwargs)
- self.century_specified = False
- self.dstridx = None
- self.mstridx = None
- self.ystridx = None
-
- @property
- def has_year(self):
- return self.ystridx is not None
-
- @property
- def has_month(self):
- return self.mstridx is not None
-
- @property
- def has_day(self):
- return self.dstridx is not None
-
- def could_be_day(self, value):
- if self.has_day:
- return False
- elif not self.has_month:
- return 1 <= value <= 31
- elif not self.has_year:
- # Be permissive, assume leap year
- month = self[self.mstridx]
- return 1 <= value <= monthrange(2000, month)[1]
- else:
- month = self[self.mstridx]
- year = self[self.ystridx]
- return 1 <= value <= monthrange(year, month)[1]
-
- def append(self, val, label=None):
- if hasattr(val, '__len__'):
- if val.isdigit() and len(val) > 2:
- self.century_specified = True
- if label not in [None, 'Y']: # pragma: no cover
- raise ValueError(label)
- label = 'Y'
- elif val > 100:
- self.century_specified = True
- if label not in [None, 'Y']: # pragma: no cover
- raise ValueError(label)
- label = 'Y'
-
- super(self.__class__, self).append(int(val))
-
- if label == 'M':
- if self.has_month:
- raise ValueError('Month is already set')
- self.mstridx = len(self) - 1
- elif label == 'D':
- if self.has_day:
- raise ValueError('Day is already set')
- self.dstridx = len(self) - 1
- elif label == 'Y':
- if self.has_year:
- raise ValueError('Year is already set')
- self.ystridx = len(self) - 1
-
- def _resolve_from_stridxs(self, strids):
- """
- Try to resolve the identities of year/month/day elements using
- ystridx, mstridx, and dstridx, if enough of these are specified.
- """
- if len(self) == 3 and len(strids) == 2:
- # we can back out the remaining stridx value
- missing = [x for x in range(3) if x not in strids.values()]
- key = [x for x in ['y', 'm', 'd'] if x not in strids]
- assert len(missing) == len(key) == 1
- key = key[0]
- val = missing[0]
- strids[key] = val
-
- assert len(self) == len(strids) # otherwise this should not be called
- out = {key: self[strids[key]] for key in strids}
- return (out.get('y'), out.get('m'), out.get('d'))
-
- def resolve_ymd(self, yearfirst, dayfirst):
- len_ymd = len(self)
- year, month, day = (None, None, None)
-
- strids = (('y', self.ystridx),
- ('m', self.mstridx),
- ('d', self.dstridx))
-
- strids = {key: val for key, val in strids if val is not None}
- if (len(self) == len(strids) > 0 or
- (len(self) == 3 and len(strids) == 2)):
- return self._resolve_from_stridxs(strids)
-
- mstridx = self.mstridx
-
- if len_ymd > 3:
- raise ValueError("More than three YMD values")
- elif len_ymd == 1 or (mstridx is not None and len_ymd == 2):
- # One member, or two members with a month string
- if mstridx is not None:
- month = self[mstridx]
- # since mstridx is 0 or 1, self[mstridx-1] always
- # looks up the other element
- other = self[mstridx - 1]
- else:
- other = self[0]
-
- if len_ymd > 1 or mstridx is None:
- if other > 31:
- year = other
- else:
- day = other
-
- elif len_ymd == 2:
- # Two members with numbers
- if self[0] > 31:
- # 99-01
- year, month = self
- elif self[1] > 31:
- # 01-99
- month, year = self
- elif dayfirst and self[1] <= 12:
- # 13-01
- day, month = self
- else:
- # 01-13
- month, day = self
-
- elif len_ymd == 3:
- # Three members
- if mstridx == 0:
- if self[1] > 31:
- # Apr-2003-25
- month, year, day = self
- else:
- month, day, year = self
- elif mstridx == 1:
- if self[0] > 31 or (yearfirst and self[2] <= 31):
- # 99-Jan-01
- year, month, day = self
- else:
- # 01-Jan-01
- # Give precedence to day-first, since
- # two-digit years is usually hand-written.
- day, month, year = self
-
- elif mstridx == 2:
- # WTF!?
- if self[1] > 31:
- # 01-99-Jan
- day, year, month = self
- else:
- # 99-01-Jan
- year, day, month = self
-
- else:
- if (self[0] > 31 or
- self.ystridx == 0 or
- (yearfirst and self[1] <= 12 and self[2] <= 31)):
- # 99-01-01
- if dayfirst and self[2] <= 12:
- year, day, month = self
- else:
- year, month, day = self
- elif self[0] > 12 or (dayfirst and self[1] <= 12):
- # 13-01-01
- day, month, year = self
- else:
- # 01-13-01
- month, day, year = self
-
- return year, month, day
-
-
-class parser(object):
- def __init__(self, info=None):
- self.info = info or parserinfo()
-
- def parse(self, timestr, default=None,
- ignoretz=False, tzinfos=None, **kwargs):
- """
- Parse the date/time string into a :class:`datetime.datetime` object.
-
- :param timestr:
- Any date/time string using the supported formats.
-
- :param default:
- The default datetime object, if this is a datetime object and not
- ``None``, elements specified in ``timestr`` replace elements in the
- default object.
-
- :param ignoretz:
- If set ``True``, time zones in parsed strings are ignored and a
- naive :class:`datetime.datetime` object is returned.
-
- :param tzinfos:
- Additional time zone names / aliases which may be present in the
- string. This argument maps time zone names (and optionally offsets
- from those time zones) to time zones. This parameter can be a
- dictionary with timezone aliases mapping time zone names to time
- zones or a function taking two parameters (``tzname`` and
- ``tzoffset``) and returning a time zone.
-
- The timezones to which the names are mapped can be an integer
- offset from UTC in seconds or a :class:`tzinfo` object.
-
- .. doctest::
- :options: +NORMALIZE_WHITESPACE
-
- >>> from dateutil.parser import parse
- >>> from dateutil.tz import gettz
- >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")}
- >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200))
- >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21,
- tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago'))
-
- This parameter is ignored if ``ignoretz`` is set.
-
- :param \\*\\*kwargs:
- Keyword arguments as passed to ``_parse()``.
-
- :return:
- Returns a :class:`datetime.datetime` object or, if the
- ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the
- first element being a :class:`datetime.datetime` object, the second
- a tuple containing the fuzzy tokens.
-
- :raises ParserError:
- Raised for invalid or unknown string format, if the provided
- :class:`tzinfo` is not in a valid format, or if an invalid date
- would be created.
-
- :raises TypeError:
- Raised for non-string or character stream input.
-
- :raises OverflowError:
- Raised if the parsed date exceeds the largest valid C integer on
- your system.
- """
-
- if default is None:
- default = datetime.datetime.now().replace(hour=0, minute=0,
- second=0, microsecond=0)
-
- res, skipped_tokens = self._parse(timestr, **kwargs)
-
- if res is None:
- raise ParserError("Unknown string format: %s", timestr)
-
- if len(res) == 0:
- raise ParserError("String does not contain a date: %s", timestr)
-
- try:
- ret = self._build_naive(res, default)
- except ValueError as e:
- six.raise_from(ParserError(str(e) + ": %s", timestr), e)
-
- if not ignoretz:
- ret = self._build_tzaware(ret, res, tzinfos)
-
- if kwargs.get('fuzzy_with_tokens', False):
- return ret, skipped_tokens
- else:
- return ret
-
- class _result(_resultbase):
- __slots__ = ["year", "month", "day", "weekday",
- "hour", "minute", "second", "microsecond",
- "tzname", "tzoffset", "ampm","any_unused_tokens"]
-
- def _parse(self, timestr, dayfirst=None, yearfirst=None, fuzzy=False,
- fuzzy_with_tokens=False):
- """
- Private method which performs the heavy lifting of parsing, called from
- ``parse()``, which passes on its ``kwargs`` to this function.
-
- :param timestr:
- The string to parse.
-
- :param dayfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the day (``True``) or month (``False``). If
- ``yearfirst`` is set to ``True``, this distinguishes between YDM
- and YMD. If set to ``None``, this value is retrieved from the
- current :class:`parserinfo` object (which itself defaults to
- ``False``).
-
- :param yearfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the year. If ``True``, the first number is taken
- to be the year, otherwise the last number is taken to be the year.
- If this is set to ``None``, the value is retrieved from the current
- :class:`parserinfo` object (which itself defaults to ``False``).
-
- :param fuzzy:
- Whether to allow fuzzy parsing, allowing for string like "Today is
- January 1, 2047 at 8:21:00AM".
-
- :param fuzzy_with_tokens:
- If ``True``, ``fuzzy`` is automatically set to True, and the parser
- will return a tuple where the first element is the parsed
- :class:`datetime.datetime` datetimestamp and the second element is
- a tuple containing the portions of the string which were ignored:
-
- .. doctest::
-
- >>> from dateutil.parser import parse
- >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True)
- (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at '))
-
- """
- if fuzzy_with_tokens:
- fuzzy = True
-
- info = self.info
-
- if dayfirst is None:
- dayfirst = info.dayfirst
-
- if yearfirst is None:
- yearfirst = info.yearfirst
-
- res = self._result()
- l = _timelex.split(timestr) # Splits the timestr into tokens
-
- skipped_idxs = []
-
- # year/month/day list
- ymd = _ymd()
-
- len_l = len(l)
- i = 0
- try:
- while i < len_l:
-
- # Check if it's a number
- value_repr = l[i]
- try:
- value = float(value_repr)
- except ValueError:
- value = None
-
- if value is not None:
- # Numeric token
- i = self._parse_numeric_token(l, i, info, ymd, res, fuzzy)
-
- # Check weekday
- elif info.weekday(l[i]) is not None:
- value = info.weekday(l[i])
- res.weekday = value
-
- # Check month name
- elif info.month(l[i]) is not None:
- value = info.month(l[i])
- ymd.append(value, 'M')
-
- if i + 1 < len_l:
- if l[i + 1] in ('-', '/'):
- # Jan-01[-99]
- sep = l[i + 1]
- ymd.append(l[i + 2])
-
- if i + 3 < len_l and l[i + 3] == sep:
- # Jan-01-99
- ymd.append(l[i + 4])
- i += 2
-
- i += 2
-
- elif (i + 4 < len_l and l[i + 1] == l[i + 3] == ' ' and
- info.pertain(l[i + 2])):
- # Jan of 01
- # In this case, 01 is clearly year
- if l[i + 4].isdigit():
- # Convert it here to become unambiguous
- value = int(l[i + 4])
- year = str(info.convertyear(value))
- ymd.append(year, 'Y')
- else:
- # Wrong guess
- pass
- # TODO: not hit in tests
- i += 4
-
- # Check am/pm
- elif info.ampm(l[i]) is not None:
- value = info.ampm(l[i])
- val_is_ampm = self._ampm_valid(res.hour, res.ampm, fuzzy)
-
- if val_is_ampm:
- res.hour = self._adjust_ampm(res.hour, value)
- res.ampm = value
-
- elif fuzzy:
- skipped_idxs.append(i)
-
- # Check for a timezone name
- elif self._could_be_tzname(res.hour, res.tzname, res.tzoffset, l[i]):
- res.tzname = l[i]
- res.tzoffset = info.tzoffset(res.tzname)
-
- # Check for something like GMT+3, or BRST+3. Notice
- # that it doesn't mean "I am 3 hours after GMT", but
- # "my time +3 is GMT". If found, we reverse the
- # logic so that timezone parsing code will get it
- # right.
- if i + 1 < len_l and l[i + 1] in ('+', '-'):
- l[i + 1] = ('+', '-')[l[i + 1] == '+']
- res.tzoffset = None
- if info.utczone(res.tzname):
- # With something like GMT+3, the timezone
- # is *not* GMT.
- res.tzname = None
-
- # Check for a numbered timezone
- elif res.hour is not None and l[i] in ('+', '-'):
- signal = (-1, 1)[l[i] == '+']
- len_li = len(l[i + 1])
-
- # TODO: check that l[i + 1] is integer?
- if len_li == 4:
- # -0300
- hour_offset = int(l[i + 1][:2])
- min_offset = int(l[i + 1][2:])
- elif i + 2 < len_l and l[i + 2] == ':':
- # -03:00
- hour_offset = int(l[i + 1])
- min_offset = int(l[i + 3]) # TODO: Check that l[i+3] is minute-like?
- i += 2
- elif len_li <= 2:
- # -[0]3
- hour_offset = int(l[i + 1][:2])
- min_offset = 0
- else:
- raise ValueError(timestr)
-
- res.tzoffset = signal * (hour_offset * 3600 + min_offset * 60)
-
- # Look for a timezone name between parenthesis
- if (i + 5 < len_l and
- info.jump(l[i + 2]) and l[i + 3] == '(' and
- l[i + 5] == ')' and
- 3 <= len(l[i + 4]) and
- self._could_be_tzname(res.hour, res.tzname,
- None, l[i + 4])):
- # -0300 (BRST)
- res.tzname = l[i + 4]
- i += 4
-
- i += 1
-
- # Check jumps
- elif not (info.jump(l[i]) or fuzzy):
- raise ValueError(timestr)
-
- else:
- skipped_idxs.append(i)
- i += 1
-
- # Process year/month/day
- year, month, day = ymd.resolve_ymd(yearfirst, dayfirst)
-
- res.century_specified = ymd.century_specified
- res.year = year
- res.month = month
- res.day = day
-
- except (IndexError, ValueError):
- return None, None
-
- if not info.validate(res):
- return None, None
-
- if fuzzy_with_tokens:
- skipped_tokens = self._recombine_skipped(l, skipped_idxs)
- return res, tuple(skipped_tokens)
- else:
- return res, None
-
- def _parse_numeric_token(self, tokens, idx, info, ymd, res, fuzzy):
- # Token is a number
- value_repr = tokens[idx]
- try:
- value = self._to_decimal(value_repr)
- except Exception as e:
- six.raise_from(ValueError('Unknown numeric token'), e)
-
- len_li = len(value_repr)
-
- len_l = len(tokens)
-
- if (len(ymd) == 3 and len_li in (2, 4) and
- res.hour is None and
- (idx + 1 >= len_l or
- (tokens[idx + 1] != ':' and
- info.hms(tokens[idx + 1]) is None))):
- # 19990101T23[59]
- s = tokens[idx]
- res.hour = int(s[:2])
-
- if len_li == 4:
- res.minute = int(s[2:])
-
- elif len_li == 6 or (len_li > 6 and tokens[idx].find('.') == 6):
- # YYMMDD or HHMMSS[.ss]
- s = tokens[idx]
-
- if not ymd and '.' not in tokens[idx]:
- ymd.append(s[:2])
- ymd.append(s[2:4])
- ymd.append(s[4:])
- else:
- # 19990101T235959[.59]
-
- # TODO: Check if res attributes already set.
- res.hour = int(s[:2])
- res.minute = int(s[2:4])
- res.second, res.microsecond = self._parsems(s[4:])
-
- elif len_li in (8, 12, 14):
- # YYYYMMDD
- s = tokens[idx]
- ymd.append(s[:4], 'Y')
- ymd.append(s[4:6])
- ymd.append(s[6:8])
-
- if len_li > 8:
- res.hour = int(s[8:10])
- res.minute = int(s[10:12])
-
- if len_li > 12:
- res.second = int(s[12:])
-
- elif self._find_hms_idx(idx, tokens, info, allow_jump=True) is not None:
- # HH[ ]h or MM[ ]m or SS[.ss][ ]s
- hms_idx = self._find_hms_idx(idx, tokens, info, allow_jump=True)
- (idx, hms) = self._parse_hms(idx, tokens, info, hms_idx)
- if hms is not None:
- # TODO: checking that hour/minute/second are not
- # already set?
- self._assign_hms(res, value_repr, hms)
-
- elif idx + 2 < len_l and tokens[idx + 1] == ':':
- # HH:MM[:SS[.ss]]
- res.hour = int(value)
- value = self._to_decimal(tokens[idx + 2]) # TODO: try/except for this?
- (res.minute, res.second) = self._parse_min_sec(value)
-
- if idx + 4 < len_l and tokens[idx + 3] == ':':
- res.second, res.microsecond = self._parsems(tokens[idx + 4])
-
- idx += 2
-
- idx += 2
-
- elif idx + 1 < len_l and tokens[idx + 1] in ('-', '/', '.'):
- sep = tokens[idx + 1]
- ymd.append(value_repr)
-
- if idx + 2 < len_l and not info.jump(tokens[idx + 2]):
- if tokens[idx + 2].isdigit():
- # 01-01[-01]
- ymd.append(tokens[idx + 2])
- else:
- # 01-Jan[-01]
- value = info.month(tokens[idx + 2])
-
- if value is not None:
- ymd.append(value, 'M')
- else:
- raise ValueError()
-
- if idx + 3 < len_l and tokens[idx + 3] == sep:
- # We have three members
- value = info.month(tokens[idx + 4])
-
- if value is not None:
- ymd.append(value, 'M')
- else:
- ymd.append(tokens[idx + 4])
- idx += 2
-
- idx += 1
- idx += 1
-
- elif idx + 1 >= len_l or info.jump(tokens[idx + 1]):
- if idx + 2 < len_l and info.ampm(tokens[idx + 2]) is not None:
- # 12 am
- hour = int(value)
- res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 2]))
- idx += 1
- else:
- # Year, month or day
- ymd.append(value)
- idx += 1
-
- elif info.ampm(tokens[idx + 1]) is not None and (0 <= value < 24):
- # 12am
- hour = int(value)
- res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 1]))
- idx += 1
-
- elif ymd.could_be_day(value):
- ymd.append(value)
-
- elif not fuzzy:
- raise ValueError()
-
- return idx
-
- def _find_hms_idx(self, idx, tokens, info, allow_jump):
- len_l = len(tokens)
-
- if idx+1 < len_l and info.hms(tokens[idx+1]) is not None:
- # There is an "h", "m", or "s" label following this token. We take
- # assign the upcoming label to the current token.
- # e.g. the "12" in 12h"
- hms_idx = idx + 1
-
- elif (allow_jump and idx+2 < len_l and tokens[idx+1] == ' ' and
- info.hms(tokens[idx+2]) is not None):
- # There is a space and then an "h", "m", or "s" label.
- # e.g. the "12" in "12 h"
- hms_idx = idx + 2
-
- elif idx > 0 and info.hms(tokens[idx-1]) is not None:
- # There is a "h", "m", or "s" preceding this token. Since neither
- # of the previous cases was hit, there is no label following this
- # token, so we use the previous label.
- # e.g. the "04" in "12h04"
- hms_idx = idx-1
-
- elif (1 < idx == len_l-1 and tokens[idx-1] == ' ' and
- info.hms(tokens[idx-2]) is not None):
- # If we are looking at the final token, we allow for a
- # backward-looking check to skip over a space.
- # TODO: Are we sure this is the right condition here?
- hms_idx = idx - 2
-
- else:
- hms_idx = None
-
- return hms_idx
-
- def _assign_hms(self, res, value_repr, hms):
- # See GH issue #427, fixing float rounding
- value = self._to_decimal(value_repr)
-
- if hms == 0:
- # Hour
- res.hour = int(value)
- if value % 1:
- res.minute = int(60*(value % 1))
-
- elif hms == 1:
- (res.minute, res.second) = self._parse_min_sec(value)
-
- elif hms == 2:
- (res.second, res.microsecond) = self._parsems(value_repr)
-
- def _could_be_tzname(self, hour, tzname, tzoffset, token):
- return (hour is not None and
- tzname is None and
- tzoffset is None and
- len(token) <= 5 and
- (all(x in string.ascii_uppercase for x in token)
- or token in self.info.UTCZONE))
-
- def _ampm_valid(self, hour, ampm, fuzzy):
- """
- For fuzzy parsing, 'a' or 'am' (both valid English words)
- may erroneously trigger the AM/PM flag. Deal with that
- here.
- """
- val_is_ampm = True
-
- # If there's already an AM/PM flag, this one isn't one.
- if fuzzy and ampm is not None:
- val_is_ampm = False
-
- # If AM/PM is found and hour is not, raise a ValueError
- if hour is None:
- if fuzzy:
- val_is_ampm = False
- else:
- raise ValueError('No hour specified with AM or PM flag.')
- elif not 0 <= hour <= 12:
- # If AM/PM is found, it's a 12 hour clock, so raise
- # an error for invalid range
- if fuzzy:
- val_is_ampm = False
- else:
- raise ValueError('Invalid hour specified for 12-hour clock.')
-
- return val_is_ampm
-
- def _adjust_ampm(self, hour, ampm):
- if hour < 12 and ampm == 1:
- hour += 12
- elif hour == 12 and ampm == 0:
- hour = 0
- return hour
-
- def _parse_min_sec(self, value):
- # TODO: Every usage of this function sets res.second to the return
- # value. Are there any cases where second will be returned as None and
- # we *don't* want to set res.second = None?
- minute = int(value)
- second = None
-
- sec_remainder = value % 1
- if sec_remainder:
- second = int(60 * sec_remainder)
- return (minute, second)
-
- def _parse_hms(self, idx, tokens, info, hms_idx):
- # TODO: Is this going to admit a lot of false-positives for when we
- # just happen to have digits and "h", "m" or "s" characters in non-date
- # text? I guess hex hashes won't have that problem, but there's plenty
- # of random junk out there.
- if hms_idx is None:
- hms = None
- new_idx = idx
- elif hms_idx > idx:
- hms = info.hms(tokens[hms_idx])
- new_idx = hms_idx
- else:
- # Looking backwards, increment one.
- hms = info.hms(tokens[hms_idx]) + 1
- new_idx = idx
-
- return (new_idx, hms)
-
- # ------------------------------------------------------------------
- # Handling for individual tokens. These are kept as methods instead
- # of functions for the sake of customizability via subclassing.
-
- def _parsems(self, value):
- """Parse a I[.F] seconds value into (seconds, microseconds)."""
- if "." not in value:
- return int(value), 0
- else:
- i, f = value.split(".")
- return int(i), int(f.ljust(6, "0")[:6])
-
- def _to_decimal(self, val):
- try:
- decimal_value = Decimal(val)
- # See GH 662, edge case, infinite value should not be converted
- # via `_to_decimal`
- if not decimal_value.is_finite():
- raise ValueError("Converted decimal value is infinite or NaN")
- except Exception as e:
- msg = "Could not convert %s to decimal" % val
- six.raise_from(ValueError(msg), e)
- else:
- return decimal_value
-
- # ------------------------------------------------------------------
- # Post-Parsing construction of datetime output. These are kept as
- # methods instead of functions for the sake of customizability via
- # subclassing.
-
- def _build_tzinfo(self, tzinfos, tzname, tzoffset):
- if callable(tzinfos):
- tzdata = tzinfos(tzname, tzoffset)
- else:
- tzdata = tzinfos.get(tzname)
- # Handle the case where tzinfos is passed an option that returns None,
- # e.g. tzinfos = {'BRST': None}
- if isinstance(tzdata, datetime.tzinfo) or tzdata is None:
- tzinfo = tzdata
- elif isinstance(tzdata, text_type):
- tzinfo = tz.tzstr(tzdata)
- elif isinstance(tzdata, integer_types):
- tzinfo = tz.tzoffset(tzname, tzdata)
- else:
- raise TypeError("Offset must be tzinfo subclass, tz string, "
- "or int offset.")
- return tzinfo
-
- def _build_tzaware(self, naive, res, tzinfos):
- if (callable(tzinfos) or (tzinfos and res.tzname in tzinfos)):
- tzinfo = self._build_tzinfo(tzinfos, res.tzname, res.tzoffset)
- aware = naive.replace(tzinfo=tzinfo)
- aware = self._assign_tzname(aware, res.tzname)
-
- elif res.tzname and res.tzname in time.tzname:
- aware = naive.replace(tzinfo=tz.tzlocal())
-
- # Handle ambiguous local datetime
- aware = self._assign_tzname(aware, res.tzname)
-
- # This is mostly relevant for winter GMT zones parsed in the UK
- if (aware.tzname() != res.tzname and
- res.tzname in self.info.UTCZONE):
- aware = aware.replace(tzinfo=tz.UTC)
-
- elif res.tzoffset == 0:
- aware = naive.replace(tzinfo=tz.UTC)
-
- elif res.tzoffset:
- aware = naive.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset))
-
- elif not res.tzname and not res.tzoffset:
- # i.e. no timezone information was found.
- aware = naive
-
- elif res.tzname:
- # tz-like string was parsed but we don't know what to do
- # with it
- warnings.warn("tzname {tzname} identified but not understood. "
- "Pass `tzinfos` argument in order to correctly "
- "return a timezone-aware datetime. In a future "
- "version, this will raise an "
- "exception.".format(tzname=res.tzname),
- category=UnknownTimezoneWarning)
- aware = naive
-
- return aware
-
- def _build_naive(self, res, default):
- repl = {}
- for attr in ("year", "month", "day", "hour",
- "minute", "second", "microsecond"):
- value = getattr(res, attr)
- if value is not None:
- repl[attr] = value
-
- if 'day' not in repl:
- # If the default day exceeds the last day of the month, fall back
- # to the end of the month.
- cyear = default.year if res.year is None else res.year
- cmonth = default.month if res.month is None else res.month
- cday = default.day if res.day is None else res.day
-
- if cday > monthrange(cyear, cmonth)[1]:
- repl['day'] = monthrange(cyear, cmonth)[1]
-
- naive = default.replace(**repl)
-
- if res.weekday is not None and not res.day:
- naive = naive + relativedelta.relativedelta(weekday=res.weekday)
-
- return naive
-
- def _assign_tzname(self, dt, tzname):
- if dt.tzname() != tzname:
- new_dt = tz.enfold(dt, fold=1)
- if new_dt.tzname() == tzname:
- return new_dt
-
- return dt
-
- def _recombine_skipped(self, tokens, skipped_idxs):
- """
- >>> tokens = ["foo", " ", "bar", " ", "19June2000", "baz"]
- >>> skipped_idxs = [0, 1, 2, 5]
- >>> _recombine_skipped(tokens, skipped_idxs)
- ["foo bar", "baz"]
- """
- skipped_tokens = []
- for i, idx in enumerate(sorted(skipped_idxs)):
- if i > 0 and idx - 1 == skipped_idxs[i - 1]:
- skipped_tokens[-1] = skipped_tokens[-1] + tokens[idx]
- else:
- skipped_tokens.append(tokens[idx])
-
- return skipped_tokens
-
-
-DEFAULTPARSER = parser()
-
-
-def parse(timestr, parserinfo=None, **kwargs):
- """
-
- Parse a string in one of the supported formats, using the
- ``parserinfo`` parameters.
-
- :param timestr:
- A string containing a date/time stamp.
-
- :param parserinfo:
- A :class:`parserinfo` object containing parameters for the parser.
- If ``None``, the default arguments to the :class:`parserinfo`
- constructor are used.
-
- The ``**kwargs`` parameter takes the following keyword arguments:
-
- :param default:
- The default datetime object, if this is a datetime object and not
- ``None``, elements specified in ``timestr`` replace elements in the
- default object.
-
- :param ignoretz:
- If set ``True``, time zones in parsed strings are ignored and a naive
- :class:`datetime` object is returned.
-
- :param tzinfos:
- Additional time zone names / aliases which may be present in the
- string. This argument maps time zone names (and optionally offsets
- from those time zones) to time zones. This parameter can be a
- dictionary with timezone aliases mapping time zone names to time
- zones or a function taking two parameters (``tzname`` and
- ``tzoffset``) and returning a time zone.
-
- The timezones to which the names are mapped can be an integer
- offset from UTC in seconds or a :class:`tzinfo` object.
-
- .. doctest::
- :options: +NORMALIZE_WHITESPACE
-
- >>> from dateutil.parser import parse
- >>> from dateutil.tz import gettz
- >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")}
- >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200))
- >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21,
- tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago'))
-
- This parameter is ignored if ``ignoretz`` is set.
-
- :param dayfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the day (``True``) or month (``False``). If
- ``yearfirst`` is set to ``True``, this distinguishes between YDM and
- YMD. If set to ``None``, this value is retrieved from the current
- :class:`parserinfo` object (which itself defaults to ``False``).
-
- :param yearfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the year. If ``True``, the first number is taken to
- be the year, otherwise the last number is taken to be the year. If
- this is set to ``None``, the value is retrieved from the current
- :class:`parserinfo` object (which itself defaults to ``False``).
-
- :param fuzzy:
- Whether to allow fuzzy parsing, allowing for string like "Today is
- January 1, 2047 at 8:21:00AM".
-
- :param fuzzy_with_tokens:
- If ``True``, ``fuzzy`` is automatically set to True, and the parser
- will return a tuple where the first element is the parsed
- :class:`datetime.datetime` datetimestamp and the second element is
- a tuple containing the portions of the string which were ignored:
-
- .. doctest::
-
- >>> from dateutil.parser import parse
- >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True)
- (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at '))
-
- :return:
- Returns a :class:`datetime.datetime` object or, if the
- ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the
- first element being a :class:`datetime.datetime` object, the second
- a tuple containing the fuzzy tokens.
-
- :raises ParserError:
- Raised for invalid or unknown string formats, if the provided
- :class:`tzinfo` is not in a valid format, or if an invalid date would
- be created.
-
- :raises OverflowError:
- Raised if the parsed date exceeds the largest valid C integer on
- your system.
- """
- if parserinfo:
- return parser(parserinfo).parse(timestr, **kwargs)
- else:
- return DEFAULTPARSER.parse(timestr, **kwargs)
-
-
-class _tzparser(object):
-
- class _result(_resultbase):
-
- __slots__ = ["stdabbr", "stdoffset", "dstabbr", "dstoffset",
- "start", "end"]
-
- class _attr(_resultbase):
- __slots__ = ["month", "week", "weekday",
- "yday", "jyday", "day", "time"]
-
- def __repr__(self):
- return self._repr("")
-
- def __init__(self):
- _resultbase.__init__(self)
- self.start = self._attr()
- self.end = self._attr()
-
- def parse(self, tzstr):
- res = self._result()
- l = [x for x in re.split(r'([,:.]|[a-zA-Z]+|[0-9]+)', tzstr) if x]
- used_idxs = list()
- try:
-
- len_l = len(l)
-
- i = 0
- while i < len_l:
- # BRST+3[BRDT[+2]]
- j = i
- while j < len_l and not [x for x in l[j]
- if x in "0123456789:,-+"]:
- j += 1
- if j != i:
- if not res.stdabbr:
- offattr = "stdoffset"
- res.stdabbr = "".join(l[i:j])
- else:
- offattr = "dstoffset"
- res.dstabbr = "".join(l[i:j])
-
- for ii in range(j):
- used_idxs.append(ii)
- i = j
- if (i < len_l and (l[i] in ('+', '-') or l[i][0] in
- "0123456789")):
- if l[i] in ('+', '-'):
- # Yes, that's right. See the TZ variable
- # documentation.
- signal = (1, -1)[l[i] == '+']
- used_idxs.append(i)
- i += 1
- else:
- signal = -1
- len_li = len(l[i])
- if len_li == 4:
- # -0300
- setattr(res, offattr, (int(l[i][:2]) * 3600 +
- int(l[i][2:]) * 60) * signal)
- elif i + 1 < len_l and l[i + 1] == ':':
- # -03:00
- setattr(res, offattr,
- (int(l[i]) * 3600 +
- int(l[i + 2]) * 60) * signal)
- used_idxs.append(i)
- i += 2
- elif len_li <= 2:
- # -[0]3
- setattr(res, offattr,
- int(l[i][:2]) * 3600 * signal)
- else:
- return None
- used_idxs.append(i)
- i += 1
- if res.dstabbr:
- break
- else:
- break
-
-
- if i < len_l:
- for j in range(i, len_l):
- if l[j] == ';':
- l[j] = ','
-
- assert l[i] == ','
-
- i += 1
-
- if i >= len_l:
- pass
- elif (8 <= l.count(',') <= 9 and
- not [y for x in l[i:] if x != ','
- for y in x if y not in "0123456789+-"]):
- # GMT0BST,3,0,30,3600,10,0,26,7200[,3600]
- for x in (res.start, res.end):
- x.month = int(l[i])
- used_idxs.append(i)
- i += 2
- if l[i] == '-':
- value = int(l[i + 1]) * -1
- used_idxs.append(i)
- i += 1
- else:
- value = int(l[i])
- used_idxs.append(i)
- i += 2
- if value:
- x.week = value
- x.weekday = (int(l[i]) - 1) % 7
- else:
- x.day = int(l[i])
- used_idxs.append(i)
- i += 2
- x.time = int(l[i])
- used_idxs.append(i)
- i += 2
- if i < len_l:
- if l[i] in ('-', '+'):
- signal = (-1, 1)[l[i] == "+"]
- used_idxs.append(i)
- i += 1
- else:
- signal = 1
- used_idxs.append(i)
- res.dstoffset = (res.stdoffset + int(l[i]) * signal)
-
- # This was a made-up format that is not in normal use
- warn(('Parsed time zone "%s"' % tzstr) +
- ' is in a non-standard dateutil-specific format, which ' +
- 'is now deprecated; support for parsing this format ' +
- 'will be removed in future versions. It is recommended ' +
- 'that you switch to a standard format like the GNU ' +
- 'TZ variable format.', tz.DeprecatedTzFormatWarning)
- elif (l.count(',') == 2 and l[i:].count('/') <= 2 and
- not [y for x in l[i:] if x not in (',', '/', 'J', 'M',
- '.', '-', ':')
- for y in x if y not in "0123456789"]):
- for x in (res.start, res.end):
- if l[i] == 'J':
- # non-leap year day (1 based)
- used_idxs.append(i)
- i += 1
- x.jyday = int(l[i])
- elif l[i] == 'M':
- # month[-.]week[-.]weekday
- used_idxs.append(i)
- i += 1
- x.month = int(l[i])
- used_idxs.append(i)
- i += 1
- assert l[i] in ('-', '.')
- used_idxs.append(i)
- i += 1
- x.week = int(l[i])
- if x.week == 5:
- x.week = -1
- used_idxs.append(i)
- i += 1
- assert l[i] in ('-', '.')
- used_idxs.append(i)
- i += 1
- x.weekday = (int(l[i]) - 1) % 7
- else:
- # year day (zero based)
- x.yday = int(l[i]) + 1
-
- used_idxs.append(i)
- i += 1
-
- if i < len_l and l[i] == '/':
- used_idxs.append(i)
- i += 1
- # start time
- len_li = len(l[i])
- if len_li == 4:
- # -0300
- x.time = (int(l[i][:2]) * 3600 +
- int(l[i][2:]) * 60)
- elif i + 1 < len_l and l[i + 1] == ':':
- # -03:00
- x.time = int(l[i]) * 3600 + int(l[i + 2]) * 60
- used_idxs.append(i)
- i += 2
- if i + 1 < len_l and l[i + 1] == ':':
- used_idxs.append(i)
- i += 2
- x.time += int(l[i])
- elif len_li <= 2:
- # -[0]3
- x.time = (int(l[i][:2]) * 3600)
- else:
- return None
- used_idxs.append(i)
- i += 1
-
- assert i == len_l or l[i] == ','
-
- i += 1
-
- assert i >= len_l
-
- except (IndexError, ValueError, AssertionError):
- return None
-
- unused_idxs = set(range(len_l)).difference(used_idxs)
- res.any_unused_tokens = not {l[n] for n in unused_idxs}.issubset({",",":"})
- return res
-
-
-DEFAULTTZPARSER = _tzparser()
-
-
-def _parsetz(tzstr):
- return DEFAULTTZPARSER.parse(tzstr)
-
-
-class ParserError(ValueError):
- """Exception subclass used for any failure to parse a datetime string.
-
- This is a subclass of :py:exc:`ValueError`, and should be raised any time
- earlier versions of ``dateutil`` would have raised ``ValueError``.
-
- .. versionadded:: 2.8.1
- """
- def __str__(self):
- try:
- return self.args[0] % self.args[1:]
- except (TypeError, IndexError):
- return super(ParserError, self).__str__()
-
- def __repr__(self):
- args = ", ".join("'%s'" % arg for arg in self.args)
- return "%s(%s)" % (self.__class__.__name__, args)
-
-
-class UnknownTimezoneWarning(RuntimeWarning):
- """Raised when the parser finds a timezone it cannot parse into a tzinfo.
-
- .. versionadded:: 2.7.0
- """
-# vim:ts=4:sw=4:et
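
The public surface of the parser module removed above is essentially parse() plus ParserError. A short usage sketch consistent with the docstrings above follows; the date strings are illustrative only.

from dateutil import parser
from dateutil.parser import ParserError
from dateutil.tz import gettz

# Plain and fuzzy parsing, per the docstrings above.
parser.parse("2012-01-19 17:21:00")
parser.parse("Today is January 1, 2047 at 8:21:00AM", fuzzy=True)

# Mapping timezone abbreviations via `tzinfos`.
tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")}
parser.parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos)

# ParserError is a ValueError subclass raised for unparseable input.
try:
    parser.parse("not a date")
except ParserError as exc:
    print(exc)
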
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/radio.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/radio.py
deleted file mode 100644
index a8846a84a621c298a41922c0457dd38dba7a3b21..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/radio.py
+++ /dev/null
@@ -1,193 +0,0 @@
-"""gr.Radio() component."""
-
-from __future__ import annotations
-
-from typing import Any, Callable, Literal
-
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import StringSerializable
-
-from gradio.components.base import FormComponent, IOComponent, _Keywords
-from gradio.deprecation import warn_deprecation, warn_style_method_deprecation
-from gradio.events import Changeable, EventListenerMethod, Inputable, Selectable
-from gradio.interpretation import NeighborInterpretable
-
-set_documentation_group("component")
-
-
-@document()
-class Radio(
- FormComponent,
- Selectable,
- Changeable,
- Inputable,
- IOComponent,
- StringSerializable,
- NeighborInterpretable,
-):
- """
- Creates a set of radio buttons of which only one can be selected.
- Preprocessing: passes the value of the selected radio button as a {str} or its index as an {int} into the function, depending on `type`.
- Postprocessing: expects a {str} corresponding to the value of the radio button to be selected.
- Examples-format: a {str} representing the radio option to select.
-
- Demos: sentence_builder, titanic_survival, blocks_essay
- """
-
- def __init__(
- self,
- choices: list[str] | None = None,
- *,
- value: str | Callable | None = None,
- type: str = "value",
- label: str | None = None,
- info: str | None = None,
- every: float | None = None,
- show_label: bool | None = None,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- choices: list of options to select from.
- value: the button selected by default. If None, no button is selected by default. If callable, the function will be called whenever the app loads to set the initial value of the component.
- type: Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected.
- label: component name in interface.
- info: additional component description.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- interactive: if True, choices in this radio group will be selectable; if False, selection will be disabled. If not provided, this is inferred based on whether the component is used as an input or output.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- self.choices = choices or []
- valid_types = ["value", "index"]
- if type not in valid_types:
- raise ValueError(
- f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}"
- )
- self.type = type
- self.select: EventListenerMethod
- """
- Event listener for when the user selects Radio option.
- Uses event data gradio.SelectData to carry `value` referring to label of selected option, and `index` to refer to index.
- See EventData documentation on how to use this event data.
- """
- IOComponent.__init__(
- self,
- label=label,
- info=info,
- every=every,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
- NeighborInterpretable.__init__(self)
-
- def get_config(self):
- return {
- "choices": self.choices,
- "value": self.value,
- **IOComponent.get_config(self),
- }
-
- def example_inputs(self) -> dict[str, Any]:
- return {
- "raw": self.choices[0] if self.choices else None,
- "serialized": self.choices[0] if self.choices else None,
- }
-
- @staticmethod
- def update(
- value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE,
- choices: list[str] | None = None,
- label: str | None = None,
- info: str | None = None,
- show_label: bool | None = None,
- container: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- interactive: bool | None = None,
- visible: bool | None = None,
- ):
- return {
- "choices": choices,
- "label": label,
- "info": info,
- "show_label": show_label,
- "container": container,
- "scale": scale,
- "min_width": min_width,
- "interactive": interactive,
- "visible": visible,
- "value": value,
- "__type__": "update",
- }
-
- def preprocess(self, x: str | None) -> str | int | None:
- """
- Parameters:
- x: selected choice
- Returns:
- selected choice as string or index within choice list
- """
- if self.type == "value":
- return x
- elif self.type == "index":
- if x is None:
- return None
- else:
- return self.choices.index(x)
- else:
- raise ValueError(
- f"Unknown type: {self.type}. Please choose from: 'value', 'index'."
- )
-
- def get_interpretation_neighbors(self, x):
- choices = list(self.choices)
- choices.remove(x)
- return choices, {}
-
- def get_interpretation_scores(
- self, x, neighbors, scores: list[float | None], **kwargs
- ) -> list:
- """
- Returns:
- Each value represents the interpretation score corresponding to each choice.
- """
- scores.insert(self.choices.index(x), None)
- return scores
-
- def style(
- self,
- *,
- item_container: bool | None = None,
- container: bool | None = None,
- **kwargs,
- ):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if item_container is not None:
- warn_deprecation("The `item_container` parameter is deprecated.")
- if container is not None:
- self.container = container
- return self
diff --git a/spaces/Defalt-404/Bittensor_Explore/README.md b/spaces/Defalt-404/Bittensor_Explore/README.md
deleted file mode 100644
index c7f88e97dc1d69a22b70836ac3ec6abc9c610b68..0000000000000000000000000000000000000000
--- a/spaces/Defalt-404/Bittensor_Explore/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Bittensor Explore
-emoji: ⚡
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Detomo/ai-comic-generation/src/app/engine/forbidden.ts b/spaces/Detomo/ai-comic-generation/src/app/engine/forbidden.ts
deleted file mode 100644
index 512b65e22b18f3bd39f6aec58198576b2ffc67f5..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/app/engine/forbidden.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-
-// the NSFW list has to contain bad words, but doing so might get the code flagged
-// or attract unwanted attention, so we hash them
-export const forbidden = [
- // TODO implement this
-]
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/model.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/model.py
deleted file mode 100644
index 9d5559203f4f3843fc814b090780ffa129a6fdf0..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/model.py
+++ /dev/null
@@ -1,674 +0,0 @@
-import math
-import random
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from models.StyleCLIP.models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
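-        # Equalized learning rate: weights are stored ~N(0, 1) and rescaled at runtime
-        # by 1/sqrt(fan_in) (as in ProGAN/StyleGAN), so all layers learn at a similar rate.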
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
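-        # Modulation: scale the convolution weights per sample using an affine
-        # projection of the style vector (one scale factor per input channel).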
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
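-        # Demodulation (StyleGAN2): renormalize each per-sample output filter to unit
-        # L2 norm so that modulation does not change the expected activation scale.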
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape))
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
- ]
-
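-        # Truncation trick: pull each latent toward truncation_latent (typically the
-        # mean latent) to trade sample diversity for fidelity.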
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- else:
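-            # Style mixing: layers before inject_index use the first latent and the
-            # remaining layers use the second; inject_index is sampled randomly if not given.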
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
-
- else:
- return image, None
-
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input):
- out = self.convs(input)
-
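-        # Minibatch standard deviation: append a feature map holding the average
-        # per-group feature stddev so the discriminator can detect low sample diversity.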
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
-
diff --git a/spaces/ECCV2022/bytetrack/tutorials/motr/README.md b/spaces/ECCV2022/bytetrack/tutorials/motr/README.md
deleted file mode 100644
index 3fcc6ca471912eba104c258cc8a152f14673d813..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tutorials/motr/README.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# MOTR
-
-Step1.
-
-Clone https://github.com/megvii-model/MOTR.git and install its dependencies.
-
-Replace https://github.com/megvii-model/MOTR/blob/main/datasets/joint.py with the joint.py provided here.
-
-Replace https://github.com/megvii-model/MOTR/blob/main/datasets/transforms.py with the transforms.py provided here.
-
-
-Train:
-
-```
-python3 -m torch.distributed.launch --nproc_per_node=8 \
- --use_env main.py \
- --meta_arch motr \
- --dataset_file e2e_joint \
- --epoch 50 \
- --with_box_refine \
- --lr_drop 40 \
- --lr 2e-4 \
- --lr_backbone 2e-5 \
- --pretrained coco_model_final.pth \
- --output_dir exps/e2e_motr_r50_mot17trainhalf \
- --batch_size 1 \
- --sample_mode 'random_interval' \
- --sample_interval 10 \
- --sampler_steps 10 20 30 \
- --sampler_lengths 2 3 4 5 \
- --update_query_pos \
- --merger_dropout 0 \
- --dropout 0 \
- --random_drop 0.1 \
- --fp_ratio 0.3 \
- --query_interaction_layer 'QIM' \
- --extra_track_attn \
-    --mot_path . \
- --data_txt_path_train ./datasets/data_path/mot17.half \
- --data_txt_path_val ./datasets/data_path/mot17.val \
-```
-mot17.half and mot17.val are from https://github.com/ifzhang/FairMOT/tree/master/src/data
-
-You can also download the MOTR model trained by us: [google](https://drive.google.com/file/d/1pzGi53VooppQqhKf3TSxLK99LERsVyTw/view?usp=sharing), [baidu(code:t87h)](https://pan.baidu.com/s/1OrcR3L9Bf2xXIo8RQl3zyA)
-
-
-Step2.
-
-Replace https://github.com/megvii-model/MOTR/blob/main/util/evaluation.py with the evaluation.py provided here.
-
-Replace https://github.com/megvii-model/MOTR/blob/main/eval.py with the eval.py provided here.
-
-Replace https://github.com/megvii-model/MOTR/blob/main/models/motr.py with the motr.py provided here.
-
-Add byte_tracker.py to the https://github.com/megvii-model/MOTR repository.
-
-Add mot_online to the https://github.com/megvii-model/MOTR repository.
-
-
-Step3.
-
-
-Validate:
-
-```
-python3 eval.py \
- --meta_arch motr \
- --dataset_file e2e_joint \
- --epoch 200 \
- --with_box_refine \
- --lr_drop 100 \
- --lr 2e-4 \
- --lr_backbone 2e-5 \
- --pretrained exps/e2e_motr_r50_mot17val/motr_final.pth \
- --output_dir exps/e2e_motr_r50_mot17val \
- --batch_size 1 \
- --sample_mode 'random_interval' \
- --sample_interval 10 \
- --sampler_steps 50 90 120 \
- --sampler_lengths 2 3 4 5 \
- --update_query_pos \
- --merger_dropout 0 \
- --dropout 0 \
- --random_drop 0.1 \
- --fp_ratio 0.3 \
- --query_interaction_layer 'QIM' \
- --extra_track_attn \
-    --mot_path ./MOT17/images/train \
- --data_txt_path_train ./datasets/data_path/mot17.half \
- --data_txt_path_val ./datasets/data_path/mot17.val \
- --resume model_final.pth \
-```
-
-
-
-# MOTR det
-
-In Step2, replace https://github.com/megvii-model/MOTR/blob/main/models/motr.py with motr_det.py instead.
-
-All other steps are the same as for MOTR.
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_paired_dataset.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_paired_dataset.py
deleted file mode 100644
index 386c8d72496245dae8df033c2ebbd76b41ff45f1..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_paired_dataset.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import os
-from basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb
-from basicsr.data.transforms import augment, paired_random_crop
-from basicsr.utils import FileClient, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-from torch.utils import data as data
-from torchvision.transforms.functional import normalize
-
-
-@DATASET_REGISTRY.register()
-class RealESRGANPairedDataset(data.Dataset):
- """Paired image dataset for image restoration.
-
- Read LQ (Low Quality, e.g. LR (Low Resolution), blurry, noisy, etc) and GT image pairs.
-
- There are three modes:
- 1. 'lmdb': Use lmdb files.
- If opt['io_backend'] == lmdb.
- 2. 'meta_info': Use meta information file to generate paths.
- If opt['io_backend'] != lmdb and opt['meta_info'] is not None.
- 3. 'folder': Scan folders to generate paths.
- The rest.
-
- Args:
- opt (dict): Config for train datasets. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- dataroot_lq (str): Data root path for lq.
- meta_info (str): Path for meta information file.
- io_backend (dict): IO backend type and other kwarg.
- filename_tmpl (str): Template for each filename. Note that the template excludes the file extension.
- Default: '{}'.
- gt_size (int): Cropped patched size for gt patches.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h
- and w for implementation).
-
- scale (bool): Scale, which will be added automatically.
- phase (str): 'train' or 'val'.
- """
-
- def __init__(self, opt):
- super(RealESRGANPairedDataset, self).__init__()
- self.opt = opt
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- # mean and std for normalizing the input images
- self.mean = opt['mean'] if 'mean' in opt else None
- self.std = opt['std'] if 'std' in opt else None
-
- self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq']
- self.filename_tmpl = opt['filename_tmpl'] if 'filename_tmpl' in opt else '{}'
-
- # file client (lmdb io backend)
- if self.io_backend_opt['type'] == 'lmdb':
- self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder]
- self.io_backend_opt['client_keys'] = ['lq', 'gt']
- self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt'])
- elif 'meta_info' in self.opt and self.opt['meta_info'] is not None:
- # disk backend with meta_info
- # Each line in the meta_info describes the relative path to an image
- with open(self.opt['meta_info']) as fin:
- paths = [line.strip() for line in fin]
- self.paths = []
- for path in paths:
- gt_path, lq_path = path.split(', ')
- gt_path = os.path.join(self.gt_folder, gt_path)
- lq_path = os.path.join(self.lq_folder, lq_path)
- self.paths.append(dict([('gt_path', gt_path), ('lq_path', lq_path)]))
- else:
- # disk backend
- # it will scan the whole folder to get meta info
-            # it will be time-consuming for folders with too many files. It is recommended to use an extra meta txt file
- self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl)
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- scale = self.opt['scale']
-
- # Load gt and lq images. Dimension order: HWC; channel order: BGR;
- # image range: [0, 1], float32.
- gt_path = self.paths[index]['gt_path']
- img_bytes = self.file_client.get(gt_path, 'gt')
- img_gt = imfrombytes(img_bytes, float32=True)
- lq_path = self.paths[index]['lq_path']
- img_bytes = self.file_client.get(lq_path, 'lq')
- img_lq = imfrombytes(img_bytes, float32=True)
-
- # augmentation for training
- if self.opt['phase'] == 'train':
- gt_size = self.opt['gt_size']
- # random crop
- img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path)
- # flip, rotation
- img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_hflip'], self.opt['use_rot'])
-
- # BGR to RGB, HWC to CHW, numpy to tensor
- img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True)
- # normalize
- if self.mean is not None or self.std is not None:
- normalize(img_lq, self.mean, self.std, inplace=True)
- normalize(img_gt, self.mean, self.std, inplace=True)
-
- return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path}
-
- def __len__(self):
- return len(self.paths)
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/util.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/util.py
deleted file mode 100644
index 9ee16385d8b1342a2d60a5f1aa5cadcfbe934bd8..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/util.py
+++ /dev/null
@@ -1,130 +0,0 @@
-import torch
-import torch.nn as nn
-
-
-def count_params(model):
- total_params = sum(p.numel() for p in model.parameters())
- return total_params
-
-
-class ActNorm(nn.Module):
- def __init__(self, num_features, logdet=False, affine=True,
- allow_reverse_init=False):
- assert affine
- super().__init__()
- self.logdet = logdet
- self.loc = nn.Parameter(torch.zeros(1, num_features, 1, 1))
- self.scale = nn.Parameter(torch.ones(1, num_features, 1, 1))
- self.allow_reverse_init = allow_reverse_init
-
- self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8))
-
- def initialize(self, input):
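-        # Data-dependent initialization: on the first batch, set loc/scale from the
-        # per-channel mean and std so the normalized output starts zero-mean, unit-variance.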
- with torch.no_grad():
- flatten = input.permute(1, 0, 2, 3).contiguous().view(input.shape[1], -1)
- mean = (
- flatten.mean(1)
- .unsqueeze(1)
- .unsqueeze(2)
- .unsqueeze(3)
- .permute(1, 0, 2, 3)
- )
- std = (
- flatten.std(1)
- .unsqueeze(1)
- .unsqueeze(2)
- .unsqueeze(3)
- .permute(1, 0, 2, 3)
- )
-
- self.loc.data.copy_(-mean)
- self.scale.data.copy_(1 / (std + 1e-6))
-
- def forward(self, input, reverse=False):
- if reverse:
- return self.reverse(input)
- if len(input.shape) == 2:
- input = input[:,:,None,None]
- squeeze = True
- else:
- squeeze = False
-
- _, _, height, width = input.shape
-
- if self.training and self.initialized.item() == 0:
- self.initialize(input)
- self.initialized.fill_(1)
-
- h = self.scale * (input + self.loc)
-
- if squeeze:
- h = h.squeeze(-1).squeeze(-1)
-
- if self.logdet:
- log_abs = torch.log(torch.abs(self.scale))
- logdet = height*width*torch.sum(log_abs)
- logdet = logdet * torch.ones(input.shape[0]).to(input)
- return h, logdet
-
- return h
-
- def reverse(self, output):
- if self.training and self.initialized.item() == 0:
- if not self.allow_reverse_init:
- raise RuntimeError(
- "Initializing ActNorm in reverse direction is "
- "disabled by default. Use allow_reverse_init=True to enable."
- )
- else:
- self.initialize(output)
- self.initialized.fill_(1)
-
- if len(output.shape) == 2:
- output = output[:,:,None,None]
- squeeze = True
- else:
- squeeze = False
-
- h = output / self.scale - self.loc
-
- if squeeze:
- h = h.squeeze(-1).squeeze(-1)
- return h
-
-
-class AbstractEncoder(nn.Module):
- def __init__(self):
- super().__init__()
-
- def encode(self, *args, **kwargs):
- raise NotImplementedError
-
-
-class Labelator(AbstractEncoder):
- """Net2Net Interface for Class-Conditional Model"""
- def __init__(self, n_classes, quantize_interface=True):
- super().__init__()
- self.n_classes = n_classes
- self.quantize_interface = quantize_interface
-
- def encode(self, c):
- c = c[:,None]
- if self.quantize_interface:
- return c, None, [None, None, c.long()]
- return c
-
-
-class SOSProvider(AbstractEncoder):
- # for unconditional training
- def __init__(self, sos_token, quantize_interface=True):
- super().__init__()
- self.sos_token = sos_token
- self.quantize_interface = quantize_interface
-
- def encode(self, x):
- # get batch size from data and replicate sos_token
- c = torch.ones(x.shape[0], 1)*self.sos_token
- c = c.long().to(x.device)
- if self.quantize_interface:
- return c, None, [None, None, c]
- return c
diff --git a/spaces/EnigmaOfTheWorld/Power_AI_Point/README.md b/spaces/EnigmaOfTheWorld/Power_AI_Point/README.md
deleted file mode 100644
index 2ec7437e43b2c4fc839835c1d7c5892ddbe5ef8d..0000000000000000000000000000000000000000
--- a/spaces/EnigmaOfTheWorld/Power_AI_Point/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GenAI Point
-emoji: 😻
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EsoCode/text-generation-webui/docs/README.md b/spaces/EsoCode/text-generation-webui/docs/README.md
deleted file mode 100644
index 06b73b8468ab263a230cb44ba45a6c95f00b2ada..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/docs/README.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# text-generation-webui documentation
-
-## Table of contents
-
-* [Audio Notification](Audio-Notification.md)
-* [Chat mode](Chat-mode.md)
-* [DeepSpeed](DeepSpeed.md)
-* [Docker](Docker.md)
-* [ExLlama](ExLlama.md)
-* [Extensions](Extensions.md)
-* [FlexGen](FlexGen.md)
-* [Generation parameters](Generation-parameters.md)
-* [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md)
-* [llama.cpp models](llama.cpp-models.md)
-* [LLaMA model](LLaMA-model.md)
-* [LoRA](LoRA.md)
-* [Low VRAM guide](Low-VRAM-guide.md)
-* [RWKV model](RWKV-model.md)
-* [Spell book](Spell-book.md)
-* [System requirements](System-requirements.md)
-* [Training LoRAs](Training-LoRAs.md)
-* [Windows installation guide](Windows-installation-guide.md)
-* [WSL installation guide](WSL-installation-guide.md)
diff --git a/spaces/EsoCode/text-generation-webui/extensions/multimodal/multimodal_embedder.py b/spaces/EsoCode/text-generation-webui/extensions/multimodal/multimodal_embedder.py
deleted file mode 100644
index 626077cb80987d66af90f390e31aa2f2def76fec..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/multimodal/multimodal_embedder.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import base64
-import re
-from dataclasses import dataclass
-from io import BytesIO
-from typing import Any, List, Optional
-
-import torch
-from PIL import Image
-
-from extensions.multimodal.pipeline_loader import load_pipeline
-from modules import shared
-from modules.logging_colors import logger
-from modules.text_generation import encode, get_max_prompt_length
-
-
-@dataclass
-class PromptPart:
- text: str
- image: Optional[Image.Image] = None
- is_image: bool = False
- input_ids: Optional[torch.Tensor] = None
- embedding: Optional[torch.Tensor] = None
-
-
-class MultimodalEmbedder:
- def __init__(self, params: dict):
- pipeline, source = load_pipeline(params)
- self.pipeline = pipeline
- logger.info(f'Multimodal: loaded pipeline {self.pipeline.name()} from pipelines/{source} ({self.pipeline.__class__.__name__})')
-
- def _split_prompt(self, prompt: str, load_images: bool = False) -> List[PromptPart]:
- """Splits a prompt into a list of `PromptParts` to separate image data from text.
- It will also append `image_start` and `image_end` before and after the image, and optionally parse and load the images,
- if `load_images` is `True`.
- """
- parts: List[PromptPart] = []
- curr = 0
- while True:
-            match = re.search(r'<img src="data:image/jpeg;base64,([A-Za-z0-9+/=]+)">', prompt[curr:])
- if match is None:
- # no more image tokens, append the rest of the prompt
- if curr > 0:
- # add image end token after last image
- parts.append(PromptPart(text=self.pipeline.image_end() + prompt[curr:]))
- else:
- parts.append(PromptPart(text=prompt))
- break
- # found an image, append image start token to the text
- if match.start() > 0:
- parts.append(PromptPart(text=prompt[curr:curr + match.start()] + self.pipeline.image_start()))
- else:
- parts.append(PromptPart(text=self.pipeline.image_start()))
- # append the image
- parts.append(PromptPart(
- text=match.group(0),
- image=Image.open(BytesIO(base64.b64decode(match.group(1)))) if load_images else None,
- is_image=True
- ))
- curr += match.end()
- return parts
-
- def _len_in_tokens_prompt_parts(self, parts: List[PromptPart]) -> int:
- """Total length in tokens of all `parts`"""
- tokens = 0
- for part in parts:
- if part.is_image:
- tokens += self.pipeline.num_image_embeds()
- elif part.input_ids is not None:
- tokens += len(part.input_ids)
- else:
- tokens += len(encode(part.text)[0])
- return tokens
-
- def len_in_tokens(self, prompt: str) -> int:
- """Total length in tokens for a given text `prompt`"""
- parts = self._split_prompt(prompt, False)
- return self._len_in_tokens_prompt_parts(parts)
-
- def _encode_single_text(self, part: PromptPart, add_bos_token: bool) -> PromptPart:
- """Encode a single prompt `part` to `input_ids`. Returns a `PromptPart`"""
- if part.is_image:
- placeholders = torch.ones((self.pipeline.num_image_embeds())) * self.pipeline.placeholder_token_id()
- part.input_ids = placeholders.to(shared.model.device, dtype=torch.int64)
- else:
- part.input_ids = encode(part.text, add_bos_token=add_bos_token)[0].to(shared.model.device, dtype=torch.int64)
- return part
-
- @staticmethod
- def _num_images(parts: List[PromptPart]) -> int:
- count = 0
- for part in parts:
- if part.is_image:
- count += 1
- return count
-
- def _encode_text(self, state, parts: List[PromptPart]) -> List[PromptPart]:
- """Encode text to token_ids, also truncate the prompt, if necessary.
-
- The chat/instruct mode should make prompts that fit in get_max_prompt_length, but if max_new_tokens are set
- such that the context + min_rows don't fit, we can get a prompt which is too long.
- We can't truncate image embeddings, as it leads to broken generation, so remove the images instead and warn the user
- """
- encoded: List[PromptPart] = []
- for i, part in enumerate(parts):
- encoded.append(self._encode_single_text(part, i == 0 and state['add_bos_token']))
-
- # truncation:
- max_len = get_max_prompt_length(state)
- removed_images = 0
-
- # 1. remove entire text/image blocks
- while self._len_in_tokens_prompt_parts(encoded[1:]) > max_len:
- if encoded[0].is_image:
- removed_images += 1
- encoded = encoded[1:]
-
- # 2. check if the last prompt part doesn't need to get truncated
- if self._len_in_tokens_prompt_parts(encoded) > max_len:
- if encoded[0].is_image:
- # don't truncate image embeddings, just remove the image, otherwise generation will be broken
- removed_images += 1
- encoded = encoded[1:]
- elif len(encoded) > 1 and encoded[0].text.endswith(self.pipeline.image_start()):
- # see if we can keep image_start token
- len_image_start = len(encode(self.pipeline.image_start(), add_bos_token=state['add_bos_token'])[0])
- if self._len_in_tokens_prompt_parts(encoded[1:]) + len_image_start > max_len:
- # we can't -> remove this text, and the image
- encoded = encoded[2:]
- removed_images += 1
- else:
- # we can -> just truncate the text
- trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len
- encoded[0].input_ids = encoded[0].input_ids[trunc_len:]
- elif len(encoded) > 0:
- # only one text left, truncate it normally
- trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len
- encoded[0].input_ids = encoded[0].input_ids[trunc_len:]
-
- # notify user if we truncated an image
- if removed_images > 0:
- logger.warning(f"Multimodal: removed {removed_images} image(s) from prompt. Try decreasing max_new_tokens if generation is broken")
-
- return encoded
-
- def _embed(self, parts: List[PromptPart]) -> List[PromptPart]:
- # batch images
- image_indicies = [i for i, part in enumerate(parts) if part.is_image]
- embedded = self.pipeline.embed_images([parts[i].image for i in image_indicies])
- for i, embeds in zip(image_indicies, embedded):
- parts[i].embedding = embeds
- # embed text
- for (i, part) in enumerate(parts):
- if not part.is_image:
- parts[i].embedding = self.pipeline.embed_tokens(part.input_ids)
- return parts
-
- def _remove_old_images(self, parts: List[PromptPart], params: dict) -> List[PromptPart]:
- if params['add_all_images_to_prompt']:
- return parts
- already_added = False
- for i, part in reversed(list(enumerate(parts))):
- if part.is_image:
- if already_added:
- parts[i].embedding = self.pipeline.placeholder_embeddings()
- else:
- already_added = True
- return parts
-
- def forward(self, prompt: str, state: Any, params: dict):
- prompt_parts = self._split_prompt(prompt, True)
- prompt_parts = self._encode_text(state, prompt_parts)
- prompt_parts = self._embed(prompt_parts)
- prompt_parts = self._remove_old_images(prompt_parts, params)
- embeds = tuple(part.embedding for part in prompt_parts)
- ids = tuple(part.input_ids for part in prompt_parts)
- input_embeds = torch.cat(embeds, dim=0)
- input_ids = torch.cat(ids, dim=0)
- return prompt, input_ids, input_embeds, self._num_images(prompt_parts)
diff --git a/spaces/Gabesantos1007/Dall-e/app.py b/spaces/Gabesantos1007/Dall-e/app.py
deleted file mode 100644
index d64025b1a44ce9d83ca9cb3c9840aaa9e7c0eebb..0000000000000000000000000000000000000000
--- a/spaces/Gabesantos1007/Dall-e/app.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import base64
-import streamlit as st
-import openai
-import os
-
-# openai.api_key = ""
-openai.api_key = os.environ.get("OPENAI_API_KEY")
-
-st.set_page_config(
- page_title="DALL·E Gerador de Imagens 🖼️",
- page_icon="🎨",
- layout="wide",
-)
-# Custom CSS styles
-st.markdown(
- """
-
- """,
- unsafe_allow_html=True
-)
-
-st.title("DALL·E Gerador de Imagens 🖼️")
-
-# Prompt input
-prompt = st.text_area("Entre o prompt:👇", height=5)
-
-# Size selection
-size_options = ["256x256", "512x512", "1024x1024"]
-selected_size = st.selectbox("Selecione o tamanho da imagem:", size_options)
-# href = f'Download'
-# st.markdown(href, unsafe_allow_html=True)
-
-
-if st.button("Veja a mágica 🪄"):
- # Generate image
- try:
- response = openai.Image.create(
- prompt=prompt,
- n=1,
- size=selected_size,
- response_format="b64_json",
- )
-
- # Display image
-
- if response["data"]:
- image_data = base64.b64decode(response["data"][0]["b64_json"])
- st.image(image_data)
-
- # Download button
- b64_image = base64.b64encode(image_data).decode()
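-            # Render the PNG as a data-URI anchor so it can be downloaded from the browser
-            # (the file name below is illustrative).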
-            href = f'<a href="data:image/png;base64,{b64_image}" download="dalle_image.png">Download</a>'
- st.markdown(href, unsafe_allow_html=True)
- else:
- st.warning("No image generated.")
- except Exception as e:
- st.error(e)
- print(e)
\ No newline at end of file
diff --git a/spaces/Gauri54damle/sdxl-lora-multi-object/app.py b/spaces/Gauri54damle/sdxl-lora-multi-object/app.py
deleted file mode 100644
index 6331091fc36baca7acbd56f36aaee7469deb138c..0000000000000000000000000000000000000000
--- a/spaces/Gauri54damle/sdxl-lora-multi-object/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from email import generator
-from diffusers import DiffusionPipeline
-
-import gradio as gr
-import torch
-from PIL import Image, ImageDraw, ImageFont
-## VAE - Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
-from diffusers import AutoencoderKL
-
-
-
-model = "stabilityai/stable-diffusion-xl-base-1.0"
-finetuningLayer = "Gauri54damle/sdxl-lora-McDBigMac-meal-model"
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-torch_dtype = torch.float16 if device.type == 'cuda' else torch.float32
-
-
-
-import os
-HF_API_TOKEN = os.getenv("HF_API_TOKEN")
-
-from huggingface_hub import login
-login(token=HF_API_TOKEN)
-
-
-vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch_dtype)
-pipe = DiffusionPipeline.from_pretrained(
- model,
- vae=vae,
- torch_dtype=torch_dtype,
- use_safetensors=True
-)
-pipe.load_lora_weights(finetuningLayer)
-
-pipe = pipe.to(device)
-
-
-
-
-def create_error_image(message):
- # Create a blank image with white background
- width, height = 512, 512
- image = Image.new('RGB', (width, height), 'white')
- draw = ImageDraw.Draw(image)
-
- # Load a truetype or opentype font file
- font = ImageFont.load_default()
-
- # Position and message
-
- draw.text((127,251), message, font=font, fill="black")
-
- return image
-
-def inference(model,finetuningLayer, prompt, guidance, steps, seed):
-
-
-
- if not prompt:
- return create_error_image("Sorry, add your text prompt and try again!!")
- else:
- generator = torch.Generator(device).manual_seed(seed)
- image = pipe(
- prompt,
- num_inference_steps=int(steps),
- guidance_scale=guidance,
- generator=generator).images[0]
-
- return image
-
-
-css = """
-
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- """
-
-
-
-              Finetuned Diffusion
-
-
- """
- )
- with gr.Row():
-
- with gr.Column():
-
- model = gr.Dropdown(label="Base Model", choices=["stabilityai/stable-diffusion-xl-base-1.0"], default="stabilityai/stable-diffusion-xl-base-1.0")
- finetuningLayer= gr.Dropdown(label="Finetuning Layer", choices=["Gauri54damle/sdxl-lora-multi-object"], default="Gauri54damle/sdxl-lora-multi-object")
-
- prompt = gr.Textbox(label="Prompt", placeholder="photo of burger called McDBigMac placed on serving tray with fries called McDFries- it is unique identifier need to be used to identify burger")
-
- with gr.Accordion("Advanced options", open=True):
- guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15)
- steps = gr.Slider(label="Steps", value=50, maximum=100, minimum=2)
- seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)
-
- run = gr.Button(value="Run")
- gr.Markdown(f"Running on: {device}")
- with gr.Column():
- image_out = gr.Image()
-
- ## Add prompt and press enter to run
- ##prompt.submit(inference, inputs=[model, finetuningLayer,prompt, guidance, steps, seed], outputs=image_out)
-
- ## Click run button to run
- run.click(inference, inputs=[model, finetuningLayer, prompt, guidance, steps, seed], outputs=image_out)
-
-
-
-demo.queue()
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/misc/run_figure2_blender.sh b/spaces/Gen-Sim/Gen-Sim/misc/run_figure2_blender.sh
deleted file mode 100644
index d42e7d34947c37ea36b0f49869fda7645920be97..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/misc/run_figure2_blender.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-python cliport/demos.py n=3 task=build-bridge mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=block_on_cylinder_on_pallet mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=build-two-circles mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=Four_corner_pyramid_challenge mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=align_cylinders_in_zones mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=build_car mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=construct_corner_blocks mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=color_ordered_insertion mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=align_pair_colored_blocks_along_line mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
-python cliport/demos.py n=3 task=palletizing_boxes mode=test disp=False record.save_video=True +regenerate_data=True record.add_text=True +record.blender_render=True ;
diff --git a/spaces/Gradio-Blocks/StyleGAN-Human/model.py b/spaces/Gradio-Blocks/StyleGAN-Human/model.py
deleted file mode 100644
index ae84a84f1827a190309d8cd5d57a84c408fb69ad..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/StyleGAN-Human/model.py
+++ /dev/null
@@ -1,80 +0,0 @@
-from __future__ import annotations
-
-import pathlib
-import pickle
-import sys
-
-import numpy as np
-import torch
-import torch.nn as nn
-from huggingface_hub import hf_hub_download
-
-app_dir = pathlib.Path(__file__).parent
-submodule_dir = app_dir / 'StyleGAN-Human'
-sys.path.insert(0, submodule_dir.as_posix())
-
-
-class Model:
- def __init__(self):
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.model = self.load_model('stylegan_human_v2_1024.pkl')
-
- def load_model(self, file_name: str) -> nn.Module:
- path = hf_hub_download('public-data/StyleGAN-Human',
- f'models/{file_name}')
- with open(path, 'rb') as f:
- model = pickle.load(f)['G_ema']
- model.eval()
- model.to(self.device)
- with torch.inference_mode():
- z = torch.zeros((1, model.z_dim)).to(self.device)
- label = torch.zeros([1, model.c_dim], device=self.device)
- model(z, label, force_fp32=True)
- return model
-
- def generate_z(self, z_dim: int, seed: int) -> torch.Tensor:
- return torch.from_numpy(np.random.RandomState(seed).randn(
- 1, z_dim)).to(self.device).float()
-
- @torch.inference_mode()
- def generate_single_image(self, seed: int,
- truncation_psi: float) -> np.ndarray:
- seed = int(np.clip(seed, 0, np.iinfo(np.uint32).max))
-
- z = self.generate_z(self.model.z_dim, seed)
- label = torch.zeros([1, self.model.c_dim], device=self.device)
-
- out = self.model(z,
- label,
- truncation_psi=truncation_psi,
- force_fp32=True)
- out = (out.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(
- torch.uint8)
- return out[0].cpu().numpy()
-
- @torch.inference_mode()
- def generate_interpolated_images(
- self, seed0: int, psi0: float, seed1: int, psi1: float,
- num_intermediate: int) -> list[np.ndarray]:
- seed0 = int(np.clip(seed0, 0, np.iinfo(np.uint32).max))
- seed1 = int(np.clip(seed1, 0, np.iinfo(np.uint32).max))
-
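-        # Linearly interpolate both the latent codes and the truncation psi between the
-        # two endpoints, producing num_intermediate + 2 frames including the endpoints.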
- z0 = self.generate_z(self.model.z_dim, seed0)
- z1 = self.generate_z(self.model.z_dim, seed1)
- vec = z1 - z0
- dvec = vec / (num_intermediate + 1)
- zs = [z0 + dvec * i for i in range(num_intermediate + 2)]
- dpsi = (psi1 - psi0) / (num_intermediate + 1)
- psis = [psi0 + dpsi * i for i in range(num_intermediate + 2)]
-
- label = torch.zeros([1, self.model.c_dim], device=self.device)
-
- res = []
- for z, psi in zip(zs, psis):
- out = self.model(z, label, truncation_psi=psi, force_fp32=True)
- out = (out.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(
- torch.uint8)
- out = out[0].cpu().numpy()
- res.append(out)
- return res
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py
deleted file mode 100644
index 0308a567c147413688c9da679d06f93b0e154d88..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/random_sampler.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/random_sampler.py
deleted file mode 100644
index f34b006e8bb0b55c74aa1c3b792f3664ada93162..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/random_sampler.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .base_sampler import BaseSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class RandomSampler(BaseSampler):
- """Random sampler.
-
- Args:
- num (int): Number of samples
- pos_fraction (float): Fraction of positive samples
-        neg_pos_ub (int, optional): Upper bound number of negative and
- positive samples. Defaults to -1.
- add_gt_as_proposals (bool, optional): Whether to add ground truth
- boxes as proposals. Defaults to True.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- **kwargs):
- from mmdet.core.bbox import demodata
- super(RandomSampler, self).__init__(num, pos_fraction, neg_pos_ub,
- add_gt_as_proposals)
- self.rng = demodata.ensure_rng(kwargs.get('rng', None))
-
- def random_choice(self, gallery, num):
- """Random select some elements from the gallery.
-
- If `gallery` is a Tensor, the returned indices will be a Tensor;
- If `gallery` is a ndarray or list, the returned indices will be a
- ndarray.
-
- Args:
- gallery (Tensor | ndarray | list): indices pool.
- num (int): expected sample num.
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- assert len(gallery) >= num
-
- is_tensor = isinstance(gallery, torch.Tensor)
- if not is_tensor:
- if torch.cuda.is_available():
- device = torch.cuda.current_device()
- else:
- device = 'cpu'
- gallery = torch.tensor(gallery, dtype=torch.long, device=device)
- perm = torch.randperm(gallery.numel(), device=gallery.device)[:num]
- rand_inds = gallery[perm]
- if not is_tensor:
- rand_inds = rand_inds.cpu().numpy()
- return rand_inds
-
- def _sample_pos(self, assign_result, num_expected, **kwargs):
- """Randomly sample some positive samples."""
- pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False)
- if pos_inds.numel() != 0:
- pos_inds = pos_inds.squeeze(1)
- if pos_inds.numel() <= num_expected:
- return pos_inds
- else:
- return self.random_choice(pos_inds, num_expected)
-
- def _sample_neg(self, assign_result, num_expected, **kwargs):
- """Randomly sample some negative samples."""
- neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
- if neg_inds.numel() != 0:
- neg_inds = neg_inds.squeeze(1)
- if len(neg_inds) <= num_expected:
- return neg_inds
- else:
- return self.random_choice(neg_inds, num_expected)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/fovea.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/fovea.py
deleted file mode 100644
index 22a578efffbd108db644d907bae95c7c8df31f2e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/fovea.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class FOVEA(SingleStageDetector):
- """Implementation of `FoveaBox `_"""
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(FOVEA, self).__init__(backbone, neck, bbox_head, train_cfg,
- test_cfg, pretrained)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pascal_context.py
deleted file mode 100644
index 541a63c66a13fb16fd52921e755715ad8d078fdd..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pascal_context.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import os.path as osp
-
-from .builder import DATASETS
-from .custom import CustomDataset
-
-
-@DATASETS.register_module()
-class PascalContextDataset(CustomDataset):
- """PascalContext dataset.
-
- In segmentation map annotation for PascalContext, 0 stands for background,
- which is included in 60 categories. ``reduce_zero_label`` is fixed to
- False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is
- fixed to '.png'.
-
- Args:
- split (str): Split txt file for PascalContext.
- """
-
- CLASSES = ('background', 'aeroplane', 'bag', 'bed', 'bedclothes', 'bench',
- 'bicycle', 'bird', 'boat', 'book', 'bottle', 'building', 'bus',
- 'cabinet', 'car', 'cat', 'ceiling', 'chair', 'cloth',
- 'computer', 'cow', 'cup', 'curtain', 'dog', 'door', 'fence',
- 'floor', 'flower', 'food', 'grass', 'ground', 'horse',
- 'keyboard', 'light', 'motorbike', 'mountain', 'mouse', 'person',
- 'plate', 'platform', 'pottedplant', 'road', 'rock', 'sheep',
- 'shelves', 'sidewalk', 'sign', 'sky', 'snow', 'sofa', 'table',
- 'track', 'train', 'tree', 'truck', 'tvmonitor', 'wall', 'water',
- 'window', 'wood')
-
- PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
- [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255],
- [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7],
- [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82],
- [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3],
- [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255],
- [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220],
- [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224],
- [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255],
- [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7],
- [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153],
- [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255],
- [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0],
- [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255],
- [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255]]
-
- def __init__(self, split, **kwargs):
- super(PascalContextDataset, self).__init__(
- img_suffix='.jpg',
- seg_map_suffix='.png',
- split=split,
- reduce_zero_label=False,
- **kwargs)
- assert osp.exists(self.img_dir) and self.split is not None
-
-
-@DATASETS.register_module()
-class PascalContextDataset59(CustomDataset):
- """PascalContext dataset.
-
-    In segmentation map annotation for PascalContext, 0 stands for background,
-    which is excluded from the 59 categories in this variant. ``reduce_zero_label``
-    is fixed to True. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is
-    fixed to '.png'.
-
- Args:
- split (str): Split txt file for PascalContext.
- """
-
- CLASSES = ('aeroplane', 'bag', 'bed', 'bedclothes', 'bench', 'bicycle',
- 'bird', 'boat', 'book', 'bottle', 'building', 'bus', 'cabinet',
- 'car', 'cat', 'ceiling', 'chair', 'cloth', 'computer', 'cow',
- 'cup', 'curtain', 'dog', 'door', 'fence', 'floor', 'flower',
- 'food', 'grass', 'ground', 'horse', 'keyboard', 'light',
- 'motorbike', 'mountain', 'mouse', 'person', 'plate', 'platform',
- 'pottedplant', 'road', 'rock', 'sheep', 'shelves', 'sidewalk',
- 'sign', 'sky', 'snow', 'sofa', 'table', 'track', 'train',
- 'tree', 'truck', 'tvmonitor', 'wall', 'water', 'window', 'wood')
-
- PALETTE = [[180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3],
- [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230],
- [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61],
- [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140],
- [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200],
- [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71],
- [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92],
- [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6],
- [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8],
- [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8],
- [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255],
- [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140],
- [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0],
- [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0],
- [0, 235, 255], [0, 173, 255], [31, 0, 255]]
-
- def __init__(self, split, **kwargs):
- super(PascalContextDataset59, self).__init__(
- img_suffix='.jpg',
- seg_map_suffix='.png',
- split=split,
- reduce_zero_label=True,
- **kwargs)
- assert osp.exists(self.img_dir) and self.split is not None
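
The only functional difference between the two dataset classes above is the `reduce_zero_label` flag. Below is a minimal, self-contained sketch of what that flag conventionally does to a 60-class PascalContext label map in mmseg-style annotation loading (assuming the usual ignore index of 255; the toy array is illustrative only):

```python
import numpy as np

# Toy 60-class label map: 0 = background, 1..59 = object classes.
seg = np.array([[0, 1, 2],
                [0, 59, 3]], dtype=np.uint8)

# reduce_zero_label=True (PascalContextDataset59): background becomes the
# ignore index and every remaining label is shifted down by one, leaving
# 59 trainable classes. reduce_zero_label=False keeps all 60 labels as-is.
reduced = seg.copy()
reduced[reduced == 0] = 255
reduced = reduced - 1
reduced[reduced == 254] = 255

print(reduced)
# [[255   0   1]
#  [255  58   2]]
```
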
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio.py
deleted file mode 100644
index 2048df6f175d7303bcf5c7b931922fd297908ead..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write),
-We rely on av library for faster read when possible, otherwise on torchaudio.
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
-    # torchaudio no longer returns useful duration information for some formats like mp3s.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'.
-        log_clipping (bool): If True, basic logging on stderr when clipping still
-            occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
- sample_rate=sample_rate, stem_name=str(stem_name))
- kwargs: dict = {}
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
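
For reference, a hedged usage sketch of the two helpers defined above. The import path follows this file's location in the `audiocraft` package; the audio file path is a placeholder:

```python
# Minimal usage sketch; 'example.mp3' is a hypothetical input file.
from audiocraft.data.audio import audio_read, audio_write

# Read 4 seconds starting at t=1s; pad (or trim) to exactly that duration.
wav, sr = audio_read('example.mp3', seek_time=1.0, duration=4.0, pad=True)
print(wav.shape, sr)  # roughly (channels, int(4.0 * sr)) at the file's native rate

# Peak-normalize and save as 16-bit PCM wav; the '.wav' suffix is added automatically.
out_path = audio_write('example_out', wav, sr, format='wav', strategy='peak')
print(out_path)  # example_out.wav
```
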
diff --git a/spaces/HLasse/textdescriptives/options.py b/spaces/HLasse/textdescriptives/options.py
deleted file mode 100644
index 56e9a9fba21988f853f67e2f0e1553afd55b371a..0000000000000000000000000000000000000000
--- a/spaces/HLasse/textdescriptives/options.py
+++ /dev/null
@@ -1,113 +0,0 @@
-from typing import Dict, List, Set
-
-from spacy.cli.download import get_compatibility
-
-
-def metrics_options() -> List[str]:
- return [
- "descriptive_stats",
- "readability",
- "dependency_distance",
- "pos_proportions",
- "coherence",
- "quality",
- "information_theory",
- ]
-
-
-def language_options() -> Dict[str, str]:
- return {
- "Catalan": "ca",
- "Chinese": "zh",
- "Croatian": "hr",
- "Danish": "da",
- "Dutch": "nl",
- "English": "en",
- "Finnish": "fi",
- "French": "fr",
- "German": "de",
- "Greek": "el",
- "Italian": "it",
- "Japanese": "ja",
- "Korean": "ko",
- "Lithuanian": "lt",
- "Macedonian": "mk",
- "Multi-language": "xx",
- "Norwegian Bokmål": "nb",
- "Polish": "pl",
- "Portuguese": "pt",
- "Romanian": "ro",
- "Russian": "ru",
- "Spanish": "es",
- "Swedish": "sv",
- "Ukrainian": "uk",
- }
-
-
-#################
-# Model options #
-#################
-
-
-def all_model_size_options_pretty_to_short() -> Dict[str, str]:
- return {
- "Small": "sm",
- "Medium": "md",
- "Large": "lg",
- # "Transformer": "trf" # Disabled for now
- }
-
-
-def all_model_size_options_short_to_pretty() -> Dict[str, str]:
- return {
- short: pretty
- for pretty, short in all_model_size_options_pretty_to_short().items()
- }
-
-
-def available_model_size_options(lang) -> List[str]:
- short_to_pretty = all_model_size_options_short_to_pretty()
- if lang == "all":
- return sorted(list(short_to_pretty.values()))
- return sorted(
- [
- short_to_pretty[short]
- for short in ModelAvailabilityChecker.available_model_sizes_for_language(
- lang
- )
- ]
- )
-
-
-class ModelAvailabilityChecker:
- @staticmethod
- def available_models() -> List[str]:
- return list(get_compatibility().keys())
-
- @staticmethod
- def extract_language_and_size() -> List[List[str]]:
- # [["ca", "sm"], ["en", "lg"], ...]
- return list(
- [
- list(map(m.split("_").__getitem__, [0, -1]))
- for m in ModelAvailabilityChecker.available_models()
- ]
- )
-
- @staticmethod
- def model_is_available(lang: str, size: str) -> bool:
- lang_and_size = set(
- [
- "_".join(lang_size)
- for lang_size in ModelAvailabilityChecker.extract_language_and_size()
- ]
- )
- return f"{lang}_{size}" in lang_and_size
-
- @staticmethod
- def available_model_sizes_for_language(lang: str) -> Set[str]:
- return set([
- size
- for (lang_, size) in ModelAvailabilityChecker.extract_language_and_size()
- if lang_ == lang and size in all_model_size_options_pretty_to_short().values()
- ])
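
`ModelAvailabilityChecker.extract_language_and_size` relies on spaCy's model naming scheme, where the language code comes first and the size suffix last. A tiny illustration of that parsing step (the model names here are examples, not a guaranteed list):

```python
# Mirrors the split-and-pick logic in extract_language_and_size.
def language_and_size(model_name: str) -> list:
    parts = model_name.split("_")
    return [parts[0], parts[-1]]

print(language_and_size("en_core_web_sm"))   # ['en', 'sm']
print(language_and_size("da_core_news_md"))  # ['da', 'md']
```
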
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/paraphraser/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/paraphraser/README.md
deleted file mode 100644
index 3810311f30f99f0a07fd8e5d3723bffeba9948c3..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/paraphraser/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Paraphrasing with round-trip translation and mixture of experts
-
-Machine translation models can be used to paraphrase text by translating it to
-an intermediate language and back (round-trip translation).
-
-This example shows how to paraphrase text by first passing it to an
-English-French translation model, followed by a French-English [mixture of
-experts translation model](/examples/translation_moe).
-
-##### 0. Setup
-
-Clone fairseq from source and install necessary dependencies:
-```bash
-git clone https://github.com/pytorch/fairseq.git
-cd fairseq
-pip install --editable .
-pip install sacremoses sentencepiece
-```
-
-##### 1. Download models
-```bash
-wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.en-fr.tar.gz
-wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.fr-en.hMoEup.tar.gz
-tar -xzvf paraphraser.en-fr.tar.gz
-tar -xzvf paraphraser.fr-en.hMoEup.tar.gz
-```
-
-##### 2. Paraphrase
-```bash
-python examples/paraphraser/paraphrase.py \
- --en2fr paraphraser.en-fr \
- --fr2en paraphraser.fr-en.hMoEup
-# Example input:
-# The new date for the Games, postponed for a year in response to the coronavirus pandemic, gives athletes time to recalibrate their training schedules.
-# Example outputs:
-# Delayed one year in response to the coronavirus pandemic, the new date of the Games gives athletes time to rebalance their training schedule.
-# The new date of the Games, which was rescheduled one year in response to the coronavirus (CV) pandemic, gives athletes time to rebalance their training schedule.
-# The new date of the Games, postponed one year in response to the coronavirus pandemic, provides athletes with time to rebalance their training schedule.
-# The Games' new date, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule.
-# The new Games date, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule.
-# The new date of the Games, which was postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule.
-# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule.
-# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to re-balance their training schedule.
-# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their schedule of training.
-# The new date of the Games, postponed one year in response to the pandemic of coronavirus, gives the athletes time to rebalance their training schedule.
-```
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py
deleted file mode 100644
index c6512d7322def67b27aba46e9e36da171db6963b..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py
+++ /dev/null
@@ -1,83 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import numpy as np
-import sys
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
-        description="converts words to phones, optionally inserting silences between and around words"
- )
- parser.add_argument(
- "--sil-prob",
- "-s",
- type=float,
- default=0,
- help="probability of inserting silence between each word",
- )
- parser.add_argument(
- "--surround",
- action="store_true",
- help="if set, surrounds each example with silence",
- )
- parser.add_argument(
- "--lexicon",
- help="lexicon to convert to phones",
- required=True,
- )
-
- return parser
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- sil_prob = args.sil_prob
- surround = args.surround
-    sil = "<SIL>"
-
- wrd_to_phn = {}
-
- with open(args.lexicon, "r") as lf:
- for line in lf:
- items = line.rstrip().split()
- assert len(items) > 1, line
- assert items[0] not in wrd_to_phn, items
- wrd_to_phn[items[0]] = items[1:]
-
- for line in sys.stdin:
- words = line.strip().split()
-
- if not all(w in wrd_to_phn for w in words):
- continue
-
- phones = []
- if surround:
- phones.append(sil)
-
- sample_sil_probs = None
- if sil_prob > 0 and len(words) > 1:
- sample_sil_probs = np.random.random(len(words) - 1)
-
- for i, w in enumerate(words):
- phones.extend(wrd_to_phn[w])
- if (
- sample_sil_probs is not None
- and i < len(sample_sil_probs)
- and sample_sil_probs[i] < sil_prob
- ):
- phones.append(sil)
-
- if surround:
- phones.append(sil)
- print(" ".join(phones))
-
-
-if __name__ == "__main__":
- main()
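
A compact re-creation of the script's core step on a toy lexicon, to show how `--sil-prob` and `--surround` interact (the lexicon, words, and silence token here are placeholders; only the control flow mirrors the script above):

```python
import numpy as np

# Toy word-to-phone lexicon and inputs; values are illustrative placeholders.
lexicon = {"hello": ["HH", "AH0", "L", "OW1"], "world": ["W", "ER1", "L", "D"]}
sil, sil_prob = "<SIL>", 0.5
words = "hello world".split()

np.random.seed(0)
phones = [sil]                             # --surround: leading silence
draws = np.random.random(len(words) - 1)   # one random draw per word boundary
for i, w in enumerate(words):
    phones.extend(lexicon[w])
    if i < len(draws) and draws[i] < sil_prob:
        phones.append(sil)                 # silence inserted with prob. sil_prob
phones.append(sil)                         # --surround: trailing silence
print(" ".join(phones))
```
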
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/noising.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/noising.py
deleted file mode 100644
index 2b1cc347203bfbdc9f1cba29e2e36427b7b5be57..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/noising.py
+++ /dev/null
@@ -1,335 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from fairseq.data import data_utils
-
-
-class WordNoising(object):
- """Generate a noisy version of a sentence, without changing words themselves."""
-
- def __init__(self, dictionary, bpe_cont_marker="@@", bpe_end_marker=None):
- self.dictionary = dictionary
- self.bpe_end = None
- if bpe_cont_marker:
- self.bpe_end = np.array(
- [
- not self.dictionary[i].endswith(bpe_cont_marker)
- for i in range(len(self.dictionary))
- ]
- )
- elif bpe_end_marker:
- self.bpe_end = np.array(
- [
- self.dictionary[i].endswith(bpe_end_marker)
- for i in range(len(self.dictionary))
- ]
- )
-
- self.get_word_idx = (
- self._get_bpe_word_idx if self.bpe_end is not None else self._get_token_idx
- )
-
- def noising(self, x, lengths, noising_prob=0.0):
- raise NotImplementedError()
-
- def _get_bpe_word_idx(self, x):
- """
- Given a list of BPE tokens, for every index in the tokens list,
- return the index of the word grouping that it belongs to.
- For example, for input x corresponding to ["how", "are", "y@@", "ou"],
- return [[0], [1], [2], [2]].
- """
- # x: (T x B)
- bpe_end = self.bpe_end[x]
-
- if x.size(0) == 1 and x.size(1) == 1:
- # Special case when we only have one word in x. If x = [[N]],
- # bpe_end is a scalar (bool) instead of a 2-dim array of bools,
- # which makes the sum operation below fail.
- return np.array([[0]])
-
- # do a reduce front sum to generate word ids
- word_idx = bpe_end[::-1].cumsum(0)[::-1]
- word_idx = word_idx.max(0)[None, :] - word_idx
- return word_idx
-
- def _get_token_idx(self, x):
- """
- This is to extend noising functions to be able to apply to non-bpe
- tokens, e.g. word or characters.
- """
- x = torch.t(x)
- word_idx = np.array([range(len(x_i)) for x_i in x])
- return np.transpose(word_idx)
-
-
-class WordDropout(WordNoising):
- """Randomly drop input words. If not passing blank_idx (default is None),
- then dropped words will be removed. Otherwise, it will be replaced by the
- blank_idx."""
-
- def __init__(
- self,
- dictionary,
- default_dropout_prob=0.1,
- bpe_cont_marker="@@",
- bpe_end_marker=None,
- ):
- super().__init__(dictionary, bpe_cont_marker, bpe_end_marker)
- self.default_dropout_prob = default_dropout_prob
-
- def noising(self, x, lengths, dropout_prob=None, blank_idx=None):
- if dropout_prob is None:
- dropout_prob = self.default_dropout_prob
- # x: (T x B), lengths: B
- if dropout_prob == 0:
- return x, lengths
-
- assert 0 < dropout_prob < 1
-
- # be sure to drop entire words
- word_idx = self.get_word_idx(x)
- sentences = []
- modified_lengths = []
- for i in range(lengths.size(0)):
- # Since dropout probabilities need to apply over non-pad tokens,
-            # it is not trivial to generate the keep mask without considering
-            # input lengths; otherwise, this could be done outside the loop
-
- # We want to drop whole words based on word_idx grouping
- num_words = max(word_idx[:, i]) + 1
-
- # ith example: [x0, x1, ..., eos, pad, ..., pad]
- # We should only generate keep probs for non-EOS tokens. Thus if the
- # input sentence ends in EOS, the last word idx is not included in
- # the dropout mask generation and we append True to always keep EOS.
- # Otherwise, just generate the dropout mask for all word idx
- # positions.
- has_eos = x[lengths[i] - 1, i] == self.dictionary.eos()
- if has_eos: # has eos?
- keep = np.random.rand(num_words - 1) >= dropout_prob
- keep = np.append(keep, [True]) # keep EOS symbol
- else:
- keep = np.random.rand(num_words) >= dropout_prob
-
- words = x[: lengths[i], i].tolist()
-
- # TODO: speed up the following loop
- # drop words from the input according to keep
- new_s = [
- w if keep[word_idx[j, i]] else blank_idx for j, w in enumerate(words)
- ]
- new_s = [w for w in new_s if w is not None]
- # we need to have at least one word in the sentence (more than the
- # start / end sentence symbols)
- if len(new_s) <= 1:
- # insert at beginning in case the only token left is EOS
- # EOS should be at end of list.
- new_s.insert(0, words[np.random.randint(0, len(words))])
- assert len(new_s) >= 1 and (
- not has_eos # Either don't have EOS at end or last token is EOS
- or (len(new_s) >= 2 and new_s[-1] == self.dictionary.eos())
- ), "New sentence is invalid."
- sentences.append(new_s)
- modified_lengths.append(len(new_s))
- # re-construct input
- modified_lengths = torch.LongTensor(modified_lengths)
- modified_x = torch.LongTensor(
- modified_lengths.max(), modified_lengths.size(0)
- ).fill_(self.dictionary.pad())
- for i in range(modified_lengths.size(0)):
- modified_x[: modified_lengths[i], i].copy_(torch.LongTensor(sentences[i]))
-
- return modified_x, modified_lengths
-
-
-class WordShuffle(WordNoising):
- """Shuffle words by no more than k positions."""
-
- def __init__(
- self,
- dictionary,
- default_max_shuffle_distance=3,
- bpe_cont_marker="@@",
- bpe_end_marker=None,
- ):
- super().__init__(dictionary, bpe_cont_marker, bpe_end_marker)
-        self.default_max_shuffle_distance = default_max_shuffle_distance
-
- def noising(self, x, lengths, max_shuffle_distance=None):
- if max_shuffle_distance is None:
- max_shuffle_distance = self.default_max_shuffle_distance
- # x: (T x B), lengths: B
- if max_shuffle_distance == 0:
- return x, lengths
-
- # max_shuffle_distance < 1 will return the same sequence
- assert max_shuffle_distance > 1
-
- # define noise word scores
- noise = np.random.uniform(
- 0,
- max_shuffle_distance,
- size=(x.size(0), x.size(1)),
- )
- noise[0] = -1 # do not move start sentence symbol
- # be sure to shuffle entire words
- word_idx = self.get_word_idx(x)
- x2 = x.clone()
- for i in range(lengths.size(0)):
- length_no_eos = lengths[i]
- if x[lengths[i] - 1, i] == self.dictionary.eos():
- length_no_eos = lengths[i] - 1
- # generate a random permutation
- scores = word_idx[:length_no_eos, i] + noise[word_idx[:length_no_eos, i], i]
- # ensure no reordering inside a word
- scores += 1e-6 * np.arange(length_no_eos.item())
- permutation = scores.argsort()
- # shuffle words
- x2[:length_no_eos, i].copy_(
- x2[:length_no_eos, i][torch.from_numpy(permutation)]
- )
- return x2, lengths
-
-
-class UnsupervisedMTNoising(WordNoising):
- """
- Implements the default configuration for noising in UnsupervisedMT
- (github.com/facebookresearch/UnsupervisedMT)
- """
-
- def __init__(
- self,
- dictionary,
- max_word_shuffle_distance,
- word_dropout_prob,
- word_blanking_prob,
- bpe_cont_marker="@@",
- bpe_end_marker=None,
- ):
- super().__init__(dictionary)
- self.max_word_shuffle_distance = max_word_shuffle_distance
- self.word_dropout_prob = word_dropout_prob
- self.word_blanking_prob = word_blanking_prob
-
- self.word_dropout = WordDropout(
- dictionary=dictionary,
- bpe_cont_marker=bpe_cont_marker,
- bpe_end_marker=bpe_end_marker,
- )
- self.word_shuffle = WordShuffle(
- dictionary=dictionary,
- bpe_cont_marker=bpe_cont_marker,
- bpe_end_marker=bpe_end_marker,
- )
-
- def noising(self, x, lengths):
- # 1. Word Shuffle
- noisy_src_tokens, noisy_src_lengths = self.word_shuffle.noising(
- x=x,
- lengths=lengths,
- max_shuffle_distance=self.max_word_shuffle_distance,
- )
- # 2. Word Dropout
- noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising(
- x=noisy_src_tokens,
- lengths=noisy_src_lengths,
- dropout_prob=self.word_dropout_prob,
- )
- # 3. Word Blanking
- noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising(
- x=noisy_src_tokens,
- lengths=noisy_src_lengths,
- dropout_prob=self.word_blanking_prob,
- blank_idx=self.dictionary.unk(),
- )
-
- return noisy_src_tokens
-
-
-class NoisingDataset(torch.utils.data.Dataset):
- def __init__(
- self,
- src_dataset,
- src_dict,
- seed,
- noiser=None,
- noising_class=UnsupervisedMTNoising,
- **kwargs
- ):
- """
- Wrap a :class:`~torch.utils.data.Dataset` and apply noise to the
- samples based on the supplied noising configuration.
-
- Args:
-            src_dataset (~torch.utils.data.Dataset): dataset to wrap, used to
-                build self.src_dataset -- a LanguagePairDataset with the src
-                dataset as the source dataset and None as the target dataset.
-                It should NOT have padding so that src_lengths are accurately
-                calculated by the language_pair_dataset collate function.
-                We use language_pair_dataset here to encapsulate the tgt_dataset
-                so we can re-use the LanguagePairDataset collater to format the
-                batches in the structure that SequenceGenerator expects.
- src_dict (~fairseq.data.Dictionary): source dictionary
- seed (int): seed to use when generating random noise
- noiser (WordNoising): a pre-initialized :class:`WordNoising`
- instance. If this is None, a new instance will be created using
- *noising_class* and *kwargs*.
- noising_class (class, optional): class to use to initialize a
- default :class:`WordNoising` instance.
- kwargs (dict, optional): arguments to initialize the default
- :class:`WordNoising` instance given by *noiser*.
- """
- self.src_dataset = src_dataset
- self.src_dict = src_dict
- self.seed = seed
- self.noiser = (
- noiser
- if noiser is not None
- else noising_class(
- dictionary=src_dict,
- **kwargs,
- )
- )
- self.sizes = src_dataset.sizes
-
- def __getitem__(self, index):
- """
-        Returns a single noisy sample. Multiple samples are fed to the
-        collater to create a noising dataset batch.
- """
- src_tokens = self.src_dataset[index]
- src_lengths = torch.LongTensor([len(src_tokens)])
- src_tokens = src_tokens.unsqueeze(0)
-
- # Transpose src tokens to fit expected shape of x in noising function
- # (batch size, sequence length) -> (sequence length, batch size)
- src_tokens_t = torch.t(src_tokens)
-
- with data_utils.numpy_seed(self.seed + index):
- noisy_src_tokens = self.noiser.noising(src_tokens_t, src_lengths)
-
- # Transpose back to expected src_tokens format
- # (sequence length, 1) -> (1, sequence length)
- noisy_src_tokens = torch.t(noisy_src_tokens)
- return noisy_src_tokens[0]
-
- def __len__(self):
- """
- The length of the noising dataset is the length of src.
- """
- return len(self.src_dataset)
-
- @property
- def supports_prefetch(self):
- return self.src_dataset.supports_prefetch
-
- def prefetch(self, indices):
- if self.src_dataset.supports_prefetch:
- self.src_dataset.prefetch(indices)
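
The reverse-cumulative-sum trick in `WordNoising._get_bpe_word_idx` is easiest to see on the example from its docstring. A standalone numpy walk-through (1-D here; the real code operates on a T x B batch):

```python
import numpy as np

# Tokens: ["how", "are", "y@@", "ou"].  bpe_end marks tokens that do NOT
# carry the "@@" continuation marker, i.e. tokens that end a word.
bpe_end = np.array([1, 1, 0, 1])

word_idx = bpe_end[::-1].cumsum(0)[::-1]   # [3, 2, 1, 1]: word endings from here to the end
word_idx = word_idx.max(0) - word_idx      # [0, 1, 2, 2]: word index per token
print(word_idx)
```
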
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/inference/__init__.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/inference/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/install.sh b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/install.sh
deleted file mode 100644
index 51e038d5a0098f21d4efd8051a15b7f0cdeb4b73..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/install.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-cd src/glow_tts/monotonic_align/
-pip install .
-cd ../../../
-
-# torch
-pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
diff --git a/spaces/Hoodady/3DFuse/my/utils/plot.py b/spaces/Hoodady/3DFuse/my/utils/plot.py
deleted file mode 100644
index e4172311da88fbabcd107dd3f57b98db7638243a..0000000000000000000000000000000000000000
--- a/spaces/Hoodady/3DFuse/my/utils/plot.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import numpy as np
-import matplotlib.pyplot as plt
-
-
-def mpl_fig_to_buffer(fig):
- fig.canvas.draw()
- plot = np.array(fig.canvas.renderer.buffer_rgba())
- plt.close(fig)
- return plot
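
A short usage sketch for the helper above (assuming the repo's `my` package is importable and an Agg-style canvas, where `buffer_rgba` yields an RGBA byte buffer):

```python
import matplotlib.pyplot as plt
from my.utils.plot import mpl_fig_to_buffer  # helper defined above

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
img = mpl_fig_to_buffer(fig)       # figure is rasterized and then closed
print(img.shape, img.dtype)        # (height, width, 4) uint8 RGBA
```
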
diff --git a/spaces/ICML2022/OFA/fairseq/examples/language_model/README.md b/spaces/ICML2022/OFA/fairseq/examples/language_model/README.md
deleted file mode 100644
index e78ea48e08dc99b69751923762107a8f8a9a5e3e..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/language_model/README.md
+++ /dev/null
@@ -1,123 +0,0 @@
-# Neural Language Modeling
-
-## Pre-trained models
-
-Model | Description | Dataset | Download
----|---|---|---
-`transformer_lm.gbw.adaptive_huge` | Adaptive Inputs ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) 1026M params | [Google Billion Words](https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2)
-`transformer_lm.wiki103.adaptive` | Adaptive Inputs ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) 247M params | [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2)
-`transformer_lm.wmt19.en` | English LM ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz)
-`transformer_lm.wmt19.de` | German LM ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz)
-`transformer_lm.wmt19.ru` | Russian LM ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz)
-
-## Example usage
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install fastBPE sacremoses
-```
-
-To sample from a language model using PyTorch Hub:
-```python
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq') # [..., 'transformer_lm.wmt19.en', ...]
-
-# Load an English LM trained on WMT'19 News Crawl data
-en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe')
-en_lm.eval() # disable dropout
-
-# Move model to GPU
-en_lm.cuda()
-
-# Sample from the language model
-en_lm.sample('Barack Obama', beam=1, sampling=True, sampling_topk=10, temperature=0.8)
-# "Barack Obama is coming to Sydney and New Zealand (...)"
-
-# Compute perplexity for a sequence
-en_lm.score('Barack Obama is coming to Sydney and New Zealand')['positional_scores'].mean().neg().exp()
-# tensor(15.1474)
-
-# The same interface can be used with custom models as well
-from fairseq.models.transformer_lm import TransformerLanguageModel
-custom_lm = TransformerLanguageModel.from_pretrained('/path/to/model/dir', 'checkpoint100.pt', tokenizer='moses', bpe='fastbpe')
-custom_lm.sample('Barack Obama', beam=5)
-# "Barack Obama (...)"
-```
-
-## Training a transformer language model with the CLI tools
-
-### 1) Preprocess the data
-
-First download and prepare the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/):
-```bash
-cd examples/language_model/
-bash prepare-wikitext-103.sh
-cd ../..
-```
-
-Next preprocess/binarize the data:
-```bash
-TEXT=examples/language_model/wikitext-103
-fairseq-preprocess \
- --only-source \
- --trainpref $TEXT/wiki.train.tokens \
- --validpref $TEXT/wiki.valid.tokens \
- --testpref $TEXT/wiki.test.tokens \
- --destdir data-bin/wikitext-103 \
- --workers 20
-```
-
-### 2) Train a language model
-
-Next we'll train a basic transformer language model on wikitext-103. For more
-advanced usage, see the [adaptive inputs README](README.adaptive_inputs.md).
-
-To train a basic LM (assumes 2 GPUs):
-```
-$ fairseq-train --task language_modeling \
- data-bin/wikitext-103 \
- --save-dir checkpoints/transformer_wikitext-103 \
- --arch transformer_lm --share-decoder-input-output-embed \
- --dropout 0.1 \
- --optimizer adam --adam-betas '(0.9, 0.98)' --weight-decay 0.01 --clip-norm 0.0 \
- --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \
- --tokens-per-sample 512 --sample-break-mode none \
- --max-tokens 2048 --update-freq 16 \
- --fp16 \
- --max-update 50000
-```
-
-If you run out of memory, try reducing `--max-tokens` (max number of tokens per
-batch) or `--tokens-per-sample` (max sequence length). You can also adjust
-`--update-freq` to accumulate gradients and simulate training on a different
-number of GPUs.
-
-### 3) Evaluate
-
-```bash
-fairseq-eval-lm data-bin/wikitext-103 \
-    --path checkpoints/transformer_wikitext-103/checkpoint_best.pt \
- --batch-size 2 \
- --tokens-per-sample 512 \
- --context-window 400
-# | Evaluated 245569 tokens in 56.1s (4379.02 tokens/s)
-# | Loss: 3.4164, Perplexity: 30.46
-```
-
-*Note:* The `--context-window` option controls how much context is provided to
-each token when computing perplexity. When the window size is 0, the dataset is
-chunked into segments of length 512 and perplexity is computed over each segment
-normally. However, this results in worse (higher) perplexity since tokens that
-appear earlier in each segment have less conditioning. When the maximum window
-size is used (511 in this case), then we compute perplexity for each token
-fully conditioned on 511 tokens of context. This slows down evaluation
-significantly, since we must run a separate forward pass for every token in the
-dataset, but results in better (lower) perplexity.
-
-
-## Convolutional language models
-
-Please see the [convolutional LM README](README.conv.md) for instructions on
-training convolutional language models.
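
The `score(...)` snippet earlier in this README turns per-token positional scores (log-probabilities) into perplexity via `mean().neg().exp()`. A tiny numeric illustration of that identity, with made-up scores:

```python
import torch

# Hypothetical per-token log-probabilities from lm.score(...)['positional_scores'].
positional_scores = torch.tensor([-2.1, -3.0, -1.2])

# Perplexity = exp(average negative log-likelihood per token).
ppl = positional_scores.mean().neg().exp()
print(float(ppl))  # exp(2.1) ≈ 8.17
```
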
diff --git a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py b/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py
deleted file mode 100644
index a44fad07f7c718f99cccd445f33c62b0e3c562f4..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-# Use: echo {text} | python tokenize_indic.py {language}
-
-import sys
-
-from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
-from indicnlp.tokenize.indic_tokenize import trivial_tokenize
-
-
-factory = IndicNormalizerFactory()
-normalizer = factory.get_normalizer(
- sys.argv[1], remove_nuktas=False, nasals_mode="do_nothing"
-)
-
-for line in sys.stdin:
- normalized_line = normalizer.normalize(line.strip())
- tokenized_line = " ".join(trivial_tokenize(normalized_line, sys.argv[1]))
- print(tokenized_line)
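
The same indicnlp calls can be used directly from Python instead of through stdin; a small sketch for Hindi (`hi`), with a placeholder sentence:

```python
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
from indicnlp.tokenize.indic_tokenize import trivial_tokenize

normalizer = IndicNormalizerFactory().get_normalizer(
    "hi", remove_nuktas=False, nasals_mode="do_nothing"
)
text = "यह एक उदाहरण वाक्य है।"  # "This is an example sentence."
tokens = trivial_tokenize(normalizer.normalize(text), "hi")
print(" ".join(tokens))
```
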
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py b/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py
deleted file mode 100644
index 9db779396f492e3f71b08d7b895beb81d8e46bc9..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import itertools
-import logging
-import re
-import time
-
-from g2p_en import G2p
-
-logger = logging.getLogger(__name__)
-
-FAIL_SENT = "FAILED_SENTENCE"
-
-
-def parse():
- parser = argparse.ArgumentParser()
- parser.add_argument("--data-path", type=str, required=True)
- parser.add_argument("--out-path", type=str, required=True)
- parser.add_argument("--lower-case", action="store_true")
- parser.add_argument("--do-filter", action="store_true")
- parser.add_argument("--use-word-start", action="store_true")
- parser.add_argument("--dup-vowel", default=1, type=int)
- parser.add_argument("--dup-consonant", default=1, type=int)
- parser.add_argument("--no-punc", action="store_true")
- parser.add_argument("--reserve-word", type=str, default="")
- parser.add_argument(
- "--reserve-first-column",
- action="store_true",
- help="first column is sentence id",
- )
- ###
- parser.add_argument("--parallel-process-num", default=1, type=int)
- parser.add_argument("--logdir", default="")
- args = parser.parse_args()
- return args
-
-
-def process_sent(sent, g2p, res_wrds, args):
- sents = pre_process_sent(sent, args.do_filter, args.lower_case, res_wrds)
- pho_seqs = [do_g2p(g2p, s, res_wrds, i == 0) for i, s in enumerate(sents)]
- pho_seq = (
- [FAIL_SENT]
- if [FAIL_SENT] in pho_seqs
- else list(itertools.chain.from_iterable(pho_seqs))
- )
- if args.no_punc:
- pho_seq = remove_punc(pho_seq)
- if args.dup_vowel > 1 or args.dup_consonant > 1:
- pho_seq = dup_pho(pho_seq, args.dup_vowel, args.dup_consonant)
- if args.use_word_start:
- pho_seq = add_word_start(pho_seq)
- return " ".join(pho_seq)
-
-
-def remove_punc(sent):
- ns = []
- regex = re.compile("[^a-zA-Z0-9 ]")
- for p in sent:
- if (not regex.search(p)) or p == FAIL_SENT:
- if p == " " and (len(ns) == 0 or ns[-1] == " "):
- continue
- ns.append(p)
- return ns
-
-
-def do_g2p(g2p, sent, res_wrds, is_first_sent):
- if sent in res_wrds:
- pho_seq = [res_wrds[sent]]
- else:
- pho_seq = g2p(sent)
- if not is_first_sent:
- pho_seq = [" "] + pho_seq # add space to separate
- return pho_seq
-
-
-def pre_process_sent(sent, do_filter, lower_case, res_wrds):
- if do_filter:
- sent = re.sub("-", " ", sent)
- sent = re.sub("—", " ", sent)
- if len(res_wrds) > 0:
- wrds = sent.split()
- wrds = ["SPLIT_ME " + w + " SPLIT_ME" if w in res_wrds else w for w in wrds]
- sents = [x.strip() for x in " ".join(wrds).split("SPLIT_ME") if x.strip() != ""]
- else:
- sents = [sent]
- if lower_case:
- sents = [s.lower() if s not in res_wrds else s for s in sents]
- return sents
-
-
-def dup_pho(sent, dup_v_num, dup_c_num):
- """
- duplicate phoneme defined as cmudict
- http://www.speech.cs.cmu.edu/cgi-bin/cmudict
- """
- if dup_v_num == 1 and dup_c_num == 1:
- return sent
- ns = []
- for p in sent:
- ns.append(p)
- if re.search(r"\d$", p):
- for i in range(1, dup_v_num):
- ns.append(f"{p}-{i}P")
- elif re.search(r"\w", p):
- for i in range(1, dup_c_num):
- ns.append(f"{p}-{i}P")
- return ns
-
-
-def add_word_start(sent):
- ns = []
- do_add = True
- ws = "▁"
- for p in sent:
- if do_add:
- p = ws + p
- do_add = False
- if p == " ":
- do_add = True
- else:
- ns.append(p)
- return ns
-
-
-def load_reserve_word(reserve_word):
- if reserve_word == "":
-        return {}
- with open(reserve_word, "r") as fp:
- res_wrds = [x.strip().split() for x in fp.readlines() if x.strip() != ""]
- assert sum([0 if len(x) == 2 else 1 for x in res_wrds]) == 0
- res_wrds = dict(res_wrds)
- return res_wrds
-
-
-def process_sents(sents, args):
- g2p = G2p()
- out_sents = []
- res_wrds = load_reserve_word(args.reserve_word)
- for sent in sents:
- col1 = ""
- if args.reserve_first_column:
- col1, sent = sent.split(None, 1)
- sent = process_sent(sent, g2p, res_wrds, args)
- if args.reserve_first_column and col1 != "":
- sent = f"{col1} {sent}"
- out_sents.append(sent)
- return out_sents
-
-
-def main():
- args = parse()
- out_sents = []
- with open(args.data_path, "r") as fp:
- sent_list = [x.strip() for x in fp.readlines()]
- if args.parallel_process_num > 1:
- try:
- import submitit
- except ImportError:
-            logger.warning(
-                "submitit is not found; only one process will be used to process the data"
-            )
- submitit = None
-
- if args.parallel_process_num == 1 or submitit is None:
- out_sents = process_sents(sent_list, args)
- else:
- # process sentences with parallel computation
- lsize = len(sent_list) // args.parallel_process_num + 1
- executor = submitit.AutoExecutor(folder=args.logdir)
- executor.update_parameters(timeout_min=1000, cpus_per_task=4)
- jobs = []
- for i in range(args.parallel_process_num):
- job = executor.submit(
- process_sents, sent_list[lsize * i : lsize * (i + 1)], args
- )
- jobs.append(job)
- is_running = True
- while is_running:
- time.sleep(5)
- is_running = sum([job.done() for job in jobs]) < len(jobs)
- out_sents = list(itertools.chain.from_iterable([job.result() for job in jobs]))
- with open(args.out_path, "w") as fp:
- fp.write("\n".join(out_sents) + "\n")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/mustc_example.md b/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/mustc_example.md
deleted file mode 100644
index c95ef3e15660107c3384f87c1680f005044e7f3b..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/mustc_example.md
+++ /dev/null
@@ -1,155 +0,0 @@
-[[Back]](..)
-
-# S2T Example: Speech Translation (ST) on MuST-C
-
-[MuST-C](https://www.aclweb.org/anthology/N19-1202) is a multilingual speech-to-text translation corpus with
-translations of English TED talks into 8 languages. We match the state-of-the-art performance in
-[ESPNet-ST](https://arxiv.org/pdf/2004.10234.pdf) with a simpler model training pipeline.
-
-## Data Preparation
-[Download](https://ict.fbk.eu/must-c) and unpack MuST-C data to a path
-`${MUSTC_ROOT}/en-${TARGET_LANG_ID}`, then preprocess it with
-```bash
-# additional Python packages for S2T data processing/model training
-pip install pandas torchaudio soundfile sentencepiece
-
-# Generate TSV manifests, features, vocabulary
-# and configuration for each language
-python examples/speech_to_text/prep_mustc_data.py \
- --data-root ${MUSTC_ROOT} --task asr \
- --vocab-type unigram --vocab-size 5000
-python examples/speech_to_text/prep_mustc_data.py \
- --data-root ${MUSTC_ROOT} --task st \
- --vocab-type unigram --vocab-size 8000
-
-# Add vocabulary and configuration for joint data
-# (based on the manifests and features generated above)
-python examples/speech_to_text/prep_mustc_data.py \
- --data-root ${MUSTC_ROOT} --task asr --joint \
- --vocab-type unigram --vocab-size 10000
-python examples/speech_to_text/prep_mustc_data.py \
- --data-root ${MUSTC_ROOT} --task st --joint \
- --vocab-type unigram --vocab-size 10000
-```
-The generated files (manifest, features, vocabulary and data configuration) will be added to
-`${MUSTC_ROOT}/en-${TARGET_LANG_ID}` (per-language data) and `MUSTC_ROOT` (joint data).
-
-Download our vocabulary files if you want to use our pre-trained models:
-- ASR: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_vocab_unigram5000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_vocab_unigram5000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_vocab_unigram5000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_vocab_unigram5000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_vocab_unigram5000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_vocab_unigram5000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_vocab_unigram5000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_vocab_unigram5000.zip), [Joint](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_vocab_unigram10000.zip)
-- ST: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_vocab_unigram8000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_vocab_unigram8000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_vocab_unigram8000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_vocab_unigram8000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_vocab_unigram8000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_vocab_unigram8000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_vocab_unigram8000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_vocab_unigram8000.zip), [Multilingual](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_vocab_unigram10000.zip)
-
-## ASR
-#### Training
-En-De as example:
-```bash
-fairseq-train ${MUSTC_ROOT}/en-de \
- --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \
- --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \
- --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8
-```
-For joint model (using ASR data from all 8 directions):
-```bash
-fairseq-train ${MUSTC_ROOT} \
- --config-yaml config_asr.yaml \
- --train-subset train_de_asr,train_nl_asr,train_es_asr,train_fr_asr,train_it_asr,train_pt_asr,train_ro_asr,train_ru_asr \
- --valid-subset dev_de_asr,dev_nl_asr,dev_es_asr,dev_fr_asr,dev_it_asr,dev_pt_asr,dev_ro_asr,dev_ru_asr \
- --save-dir ${JOINT_ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \
- --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8
-```
-where `ASR_SAVE_DIR` (`JOINT_ASR_SAVE_DIR`) is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs
-with 1 GPU. You may want to update it accordingly when using more than 1 GPU.
-
-#### Inference & Evaluation
-```bash
-CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
-python scripts/average_checkpoints.py \
- --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-fairseq-generate ${MUSTC_ROOT}/en-de \
- --config-yaml config_asr.yaml --gen-subset tst-COMMON_asr --task speech_to_text \
- --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \
- --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct
-
-# For models trained on joint data
-python scripts/average_checkpoints.py \
- --inputs ${JOINT_ASR_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-for LANG in de nl es fr it pt ro ru; do
- fairseq-generate ${MUSTC_ROOT} \
- --config-yaml config_asr.yaml --gen-subset tst-COMMON_${LANG}_asr --task speech_to_text \
- --path ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \
- --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct
-done
-```
-#### Results
-| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model |
-|---|---|---|---|---|---|---|---|---|---|---|---|
-| Single | s2t_transformer_s | 31M | [18.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_transformer_s.pt) | [17.6](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_transformer_s.pt) | [17.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_transformer_s.pt) | [17.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_transformer_s.pt) | [19.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_transformer_s.pt) | [18.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_transformer_s.pt) | (<-Download) |
-| Joint | s2t_transformer_m | 76M | 16.8 | 16.7 | 16.9 | 16.9 | 17.0 | 17.4 | 17.0 | 16.9 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_transformer_m.pt) |
-
-## ST
-#### Training
-En-De as example:
-```bash
-fairseq-train ${MUSTC_ROOT}/en-de \
- --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \
- --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \
- --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \
- --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}
-```
-For multilingual model (all 8 directions):
-```bash
-fairseq-train ${MUSTC_ROOT} \
- --config-yaml config_st.yaml \
- --train-subset train_de_st,train_nl_st,train_es_st,train_fr_st,train_it_st,train_pt_st,train_ro_st,train_ru_st \
- --valid-subset dev_de_st,dev_nl_st,dev_es_st,dev_fr_st,dev_it_st,dev_pt_st,dev_ro_st,dev_ru_st \
- --save-dir ${MULTILINGUAL_ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \
- --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \
- --arch s2t_transformer_s --ignore-prefix-size 1 --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \
- --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \
- --load-pretrained-encoder-from ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}
-```
-where `ST_SAVE_DIR` (`MULTILINGUAL_ST_SAVE_DIR`) is the checkpoint root path. The ST encoder is pre-trained by ASR
-for faster training and better performance: `--load-pretrained-encoder-from <(JOINT_)ASR checkpoint path>`. We set
-`--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly when using more than 1 GPU.
-For multilingual models, we prepend target language ID token as target BOS, which should be excluded from
-the training loss via `--ignore-prefix-size 1`.
-
-#### Inference & Evaluation
-Average the last 10 checkpoints and evaluate on the `tst-COMMON` split:
-```bash
-CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
-python scripts/average_checkpoints.py \
- --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-fairseq-generate ${MUSTC_ROOT}/en-de \
- --config-yaml config_st.yaml --gen-subset tst-COMMON_st --task speech_to_text \
- --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
- --max-tokens 50000 --beam 5 --scoring sacrebleu
-
-# For multilingual models
-python scripts/average_checkpoints.py \
- --inputs ${MULTILINGUAL_ST_SAVE_DIR} --num-epoch-checkpoints 10 \
- --output "${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME}"
-for LANG in de nl es fr it pt ro ru; do
- fairseq-generate ${MUSTC_ROOT} \
- --config-yaml config_st.yaml --gen-subset tst-COMMON_${LANG}_st --task speech_to_text \
- --prefix-size 1 --path ${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
- --max-tokens 50000 --beam 5 --scoring sacrebleu
-done
-```
-For multilingual models, we force decoding from the target language ID token (as BOS) via `--prefix-size 1`.
-
-#### Results
-| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model |
-|---|---|---|---|---|---|---|---|---|---|---|---|
-| Bilingual | s2t_transformer_s | 31M | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_transformer_s.pt) | [27.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_transformer_s.pt) | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_transformer_s.pt) | [32.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_transformer_s.pt) | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_transformer_s.pt) | [28.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_transformer_s.pt) | [21.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_transformer_s.pt) | [15.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_transformer_s.pt) | (<-Download) |
-| Multilingual | s2t_transformer_m | 76M | 24.5 | 28.6 | 28.2 | 34.9 | 24.6 | 31.1 | 23.8 | 16.0 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_transformer_m.pt) |
-
-[[Back]](..)
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/__init__.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/__init__.py
deleted file mode 100644
index b0318f88d6a63a6ba37fd2bf7ec4869084a45966..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/__init__.py
+++ /dev/null
@@ -1,508 +0,0 @@
-import glob
-import json
-import logging
-import os
-import sys
-from pathlib import Path
-
-logger = logging.getLogger(__name__)
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[3] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-try:
- import comet_ml
-
- # Project Configuration
- config = comet_ml.config.get_config()
- COMET_PROJECT_NAME = config.get_string(os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5")
-except (ModuleNotFoundError, ImportError):
- comet_ml = None
- COMET_PROJECT_NAME = None
-
-import PIL
-import torch
-import torchvision.transforms as T
-import yaml
-
-from utils.dataloaders import img2label_paths
-from utils.general import check_dataset, scale_boxes, xywh2xyxy
-from utils.metrics import box_iou
-
-COMET_PREFIX = "comet://"
-
-COMET_MODE = os.getenv("COMET_MODE", "online")
-
-# Model Saving Settings
-COMET_MODEL_NAME = os.getenv("COMET_MODEL_NAME", "yolov5")
-
-# Dataset Artifact Settings
-COMET_UPLOAD_DATASET = os.getenv("COMET_UPLOAD_DATASET", "false").lower() == "true"
-
-# Evaluation Settings
-COMET_LOG_CONFUSION_MATRIX = os.getenv("COMET_LOG_CONFUSION_MATRIX", "true").lower() == "true"
-COMET_LOG_PREDICTIONS = os.getenv("COMET_LOG_PREDICTIONS", "true").lower() == "true"
-COMET_MAX_IMAGE_UPLOADS = int(os.getenv("COMET_MAX_IMAGE_UPLOADS", 100))
-
-# Confusion Matrix Settings
-CONF_THRES = float(os.getenv("CONF_THRES", 0.001))
-IOU_THRES = float(os.getenv("IOU_THRES", 0.6))
-
-# Batch Logging Settings
-COMET_LOG_BATCH_METRICS = os.getenv("COMET_LOG_BATCH_METRICS", "false").lower() == "true"
-COMET_BATCH_LOGGING_INTERVAL = os.getenv("COMET_BATCH_LOGGING_INTERVAL", 1)
-COMET_PREDICTION_LOGGING_INTERVAL = os.getenv("COMET_PREDICTION_LOGGING_INTERVAL", 1)
-COMET_LOG_PER_CLASS_METRICS = os.getenv("COMET_LOG_PER_CLASS_METRICS", "false").lower() == "true"
-
-RANK = int(os.getenv("RANK", -1))
-
-to_pil = T.ToPILImage()
-
-
-class CometLogger:
- """Log metrics, parameters, source code, models and much more
- with Comet
- """
-
- def __init__(self, opt, hyp, run_id=None, job_type="Training", **experiment_kwargs) -> None:
- self.job_type = job_type
- self.opt = opt
- self.hyp = hyp
-
- # Comet Flags
- self.comet_mode = COMET_MODE
-
- self.save_model = opt.save_period > -1
- self.model_name = COMET_MODEL_NAME
-
- # Batch Logging Settings
- self.log_batch_metrics = COMET_LOG_BATCH_METRICS
- self.comet_log_batch_interval = COMET_BATCH_LOGGING_INTERVAL
-
- # Dataset Artifact Settings
- self.upload_dataset = self.opt.upload_dataset if self.opt.upload_dataset else COMET_UPLOAD_DATASET
- self.resume = self.opt.resume
-
- # Default parameters to pass to Experiment objects
- self.default_experiment_kwargs = {
- "log_code": False,
- "log_env_gpu": True,
- "log_env_cpu": True,
- "project_name": COMET_PROJECT_NAME,}
- self.default_experiment_kwargs.update(experiment_kwargs)
- self.experiment = self._get_experiment(self.comet_mode, run_id)
-
- self.data_dict = self.check_dataset(self.opt.data)
- self.class_names = self.data_dict["names"]
- self.num_classes = self.data_dict["nc"]
-
- self.logged_images_count = 0
- self.max_images = COMET_MAX_IMAGE_UPLOADS
-
- if run_id is None:
- self.experiment.log_other("Created from", "YOLOv5")
- if not isinstance(self.experiment, comet_ml.OfflineExperiment):
- workspace, project_name, experiment_id = self.experiment.url.split("/")[-3:]
- self.experiment.log_other(
- "Run Path",
- f"{workspace}/{project_name}/{experiment_id}",
- )
- self.log_parameters(vars(opt))
- self.log_parameters(self.opt.hyp)
- self.log_asset_data(
- self.opt.hyp,
- name="hyperparameters.json",
- metadata={"type": "hyp-config-file"},
- )
- self.log_asset(
- f"{self.opt.save_dir}/opt.yaml",
- metadata={"type": "opt-config-file"},
- )
-
- self.comet_log_confusion_matrix = COMET_LOG_CONFUSION_MATRIX
-
- if hasattr(self.opt, "conf_thres"):
- self.conf_thres = self.opt.conf_thres
- else:
- self.conf_thres = CONF_THRES
- if hasattr(self.opt, "iou_thres"):
- self.iou_thres = self.opt.iou_thres
- else:
- self.iou_thres = IOU_THRES
-
- self.log_parameters({"val_iou_threshold": self.iou_thres, "val_conf_threshold": self.conf_thres})
-
- self.comet_log_predictions = COMET_LOG_PREDICTIONS
- if self.opt.bbox_interval == -1:
- self.comet_log_prediction_interval = 1 if self.opt.epochs < 10 else self.opt.epochs // 10
- else:
- self.comet_log_prediction_interval = self.opt.bbox_interval
-
- if self.comet_log_predictions:
- self.metadata_dict = {}
- self.logged_image_names = []
-
- self.comet_log_per_class_metrics = COMET_LOG_PER_CLASS_METRICS
-
- self.experiment.log_others({
- "comet_mode": COMET_MODE,
- "comet_max_image_uploads": COMET_MAX_IMAGE_UPLOADS,
- "comet_log_per_class_metrics": COMET_LOG_PER_CLASS_METRICS,
- "comet_log_batch_metrics": COMET_LOG_BATCH_METRICS,
- "comet_log_confusion_matrix": COMET_LOG_CONFUSION_MATRIX,
- "comet_model_name": COMET_MODEL_NAME,})
-
- # Check if running the Experiment with the Comet Optimizer
- if hasattr(self.opt, "comet_optimizer_id"):
- self.experiment.log_other("optimizer_id", self.opt.comet_optimizer_id)
- self.experiment.log_other("optimizer_objective", self.opt.comet_optimizer_objective)
- self.experiment.log_other("optimizer_metric", self.opt.comet_optimizer_metric)
- self.experiment.log_other("optimizer_parameters", json.dumps(self.hyp))
-
- def _get_experiment(self, mode, experiment_id=None):
- if mode == "offline":
- if experiment_id is not None:
- return comet_ml.ExistingOfflineExperiment(
- previous_experiment=experiment_id,
- **self.default_experiment_kwargs,
- )
-
- return comet_ml.OfflineExperiment(**self.default_experiment_kwargs,)
-
- else:
- try:
- if experiment_id is not None:
- return comet_ml.ExistingExperiment(
- previous_experiment=experiment_id,
- **self.default_experiment_kwargs,
- )
-
- return comet_ml.Experiment(**self.default_experiment_kwargs)
-
- except ValueError:
- logger.warning("COMET WARNING: "
- "Comet credentials have not been set. "
- "Comet will default to offline logging. "
- "Please set your credentials to enable online logging.")
- return self._get_experiment("offline", experiment_id)
-
- return
-
- def log_metrics(self, log_dict, **kwargs):
- self.experiment.log_metrics(log_dict, **kwargs)
-
- def log_parameters(self, log_dict, **kwargs):
- self.experiment.log_parameters(log_dict, **kwargs)
-
- def log_asset(self, asset_path, **kwargs):
- self.experiment.log_asset(asset_path, **kwargs)
-
- def log_asset_data(self, asset, **kwargs):
- self.experiment.log_asset_data(asset, **kwargs)
-
- def log_image(self, img, **kwargs):
- self.experiment.log_image(img, **kwargs)
-
- def log_model(self, path, opt, epoch, fitness_score, best_model=False):
- if not self.save_model:
- return
-
- model_metadata = {
- "fitness_score": fitness_score[-1],
- "epochs_trained": epoch + 1,
- "save_period": opt.save_period,
- "total_epochs": opt.epochs,}
-
- model_files = glob.glob(f"{path}/*.pt")
- for model_path in model_files:
- name = Path(model_path).name
-
- self.experiment.log_model(
- self.model_name,
- file_or_folder=model_path,
- file_name=name,
- metadata=model_metadata,
- overwrite=True,
- )
-
- def check_dataset(self, data_file):
- with open(data_file) as f:
- data_config = yaml.safe_load(f)
-
- if data_config['path'].startswith(COMET_PREFIX):
- path = data_config['path'].replace(COMET_PREFIX, "")
- data_dict = self.download_dataset_artifact(path)
-
- return data_dict
-
- self.log_asset(self.opt.data, metadata={"type": "data-config-file"})
-
- return check_dataset(data_file)
-
- def log_predictions(self, image, labelsn, path, shape, predn):
- if self.logged_images_count >= self.max_images:
- return
- detections = predn[predn[:, 4] > self.conf_thres]
- iou = box_iou(labelsn[:, 1:], detections[:, :4])
- mask, _ = torch.where(iou > self.iou_thres)
- if len(mask) == 0:
- return
-
- filtered_detections = detections[mask]
- filtered_labels = labelsn[mask]
-
- image_id = path.split("/")[-1].split(".")[0]
- image_name = f"{image_id}_curr_epoch_{self.experiment.curr_epoch}"
- if image_name not in self.logged_image_names:
- native_scale_image = PIL.Image.open(path)
- self.log_image(native_scale_image, name=image_name)
- self.logged_image_names.append(image_name)
-
- metadata = []
- for cls, *xyxy in filtered_labels.tolist():
- metadata.append({
- "label": f"{self.class_names[int(cls)]}-gt",
- "score": 100,
- "box": {
- "x": xyxy[0],
- "y": xyxy[1],
- "x2": xyxy[2],
- "y2": xyxy[3]},})
- for *xyxy, conf, cls in filtered_detections.tolist():
- metadata.append({
- "label": f"{self.class_names[int(cls)]}",
- "score": conf * 100,
- "box": {
- "x": xyxy[0],
- "y": xyxy[1],
- "x2": xyxy[2],
- "y2": xyxy[3]},})
-
- self.metadata_dict[image_name] = metadata
- self.logged_images_count += 1
-
- return
-
- def preprocess_prediction(self, image, labels, shape, pred):
- nl, _ = labels.shape[0], pred.shape[0]
-
- # Predictions
- if self.opt.single_cls:
- pred[:, 5] = 0
-
- predn = pred.clone()
- scale_boxes(image.shape[1:], predn[:, :4], shape[0], shape[1])
-
- labelsn = None
- if nl:
- tbox = xywh2xyxy(labels[:, 1:5]) # target boxes
- scale_boxes(image.shape[1:], tbox, shape[0], shape[1]) # native-space labels
- labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels
- scale_boxes(image.shape[1:], predn[:, :4], shape[0], shape[1]) # native-space pred
-
- return predn, labelsn
-
- def add_assets_to_artifact(self, artifact, path, asset_path, split):
- img_paths = sorted(glob.glob(f"{asset_path}/*"))
- label_paths = img2label_paths(img_paths)
-
- for image_file, label_file in zip(img_paths, label_paths):
- image_logical_path, label_logical_path = map(lambda x: os.path.relpath(x, path), [image_file, label_file])
-
- try:
- artifact.add(image_file, logical_path=image_logical_path, metadata={"split": split})
- artifact.add(label_file, logical_path=label_logical_path, metadata={"split": split})
- except ValueError as e:
- logger.error('COMET ERROR: Error adding file to Artifact. Skipping file.')
- logger.error(f"COMET ERROR: {e}")
- continue
-
- return artifact
-
- def upload_dataset_artifact(self):
- dataset_name = self.data_dict.get("dataset_name", "yolov5-dataset")
- path = str((ROOT / Path(self.data_dict["path"])).resolve())
-
- metadata = self.data_dict.copy()
- for key in ["train", "val", "test"]:
- split_path = metadata.get(key)
- if split_path is not None:
- metadata[key] = split_path.replace(path, "")
-
- artifact = comet_ml.Artifact(name=dataset_name, artifact_type="dataset", metadata=metadata)
- for key in metadata.keys():
- if key in ["train", "val", "test"]:
- if isinstance(self.upload_dataset, str) and (key != self.upload_dataset):
- continue
-
- asset_path = self.data_dict.get(key)
- if asset_path is not None:
- artifact = self.add_assets_to_artifact(artifact, path, asset_path, key)
-
- self.experiment.log_artifact(artifact)
-
- return
-
- def download_dataset_artifact(self, artifact_path):
- logged_artifact = self.experiment.get_artifact(artifact_path)
- artifact_save_dir = str(Path(self.opt.save_dir) / logged_artifact.name)
- logged_artifact.download(artifact_save_dir)
-
- metadata = logged_artifact.metadata
- data_dict = metadata.copy()
- data_dict["path"] = artifact_save_dir
-
- metadata_names = metadata.get("names")
-        if isinstance(metadata_names, dict):
-            data_dict["names"] = {int(k): v for k, v in metadata_names.items()}
-        elif isinstance(metadata_names, list):
-            data_dict["names"] = {int(k): v for k, v in enumerate(metadata_names)}
-        else:
-            raise ValueError("Invalid 'names' field in dataset yaml file. Please use a list or dictionary")
-
- data_dict = self.update_data_paths(data_dict)
- return data_dict
-
- def update_data_paths(self, data_dict):
- path = data_dict.get("path", "")
-
- for split in ["train", "val", "test"]:
- if data_dict.get(split):
- split_path = data_dict.get(split)
- data_dict[split] = (f"{path}/{split_path}" if isinstance(split, str) else [
- f"{path}/{x}" for x in split_path])
-
- return data_dict
-
- def on_pretrain_routine_end(self, paths):
- if self.opt.resume:
- return
-
- for path in paths:
- self.log_asset(str(path))
-
- if self.upload_dataset:
- if not self.resume:
- self.upload_dataset_artifact()
-
- return
-
- def on_train_start(self):
- self.log_parameters(self.hyp)
-
- def on_train_epoch_start(self):
- return
-
- def on_train_epoch_end(self, epoch):
- self.experiment.curr_epoch = epoch
-
- return
-
- def on_train_batch_start(self):
- return
-
- def on_train_batch_end(self, log_dict, step):
- self.experiment.curr_step = step
- if self.log_batch_metrics and (step % self.comet_log_batch_interval == 0):
- self.log_metrics(log_dict, step=step)
-
- return
-
- def on_train_end(self, files, save_dir, last, best, epoch, results):
- if self.comet_log_predictions:
- curr_epoch = self.experiment.curr_epoch
- self.experiment.log_asset_data(self.metadata_dict, "image-metadata.json", epoch=curr_epoch)
-
- for f in files:
- self.log_asset(f, metadata={"epoch": epoch})
- self.log_asset(f"{save_dir}/results.csv", metadata={"epoch": epoch})
-
- if not self.opt.evolve:
- model_path = str(best if best.exists() else last)
- name = Path(model_path).name
- if self.save_model:
- self.experiment.log_model(
- self.model_name,
- file_or_folder=model_path,
- file_name=name,
- overwrite=True,
- )
-
- # Check if running Experiment with Comet Optimizer
- if hasattr(self.opt, 'comet_optimizer_id'):
- metric = results.get(self.opt.comet_optimizer_metric)
- self.experiment.log_other('optimizer_metric_value', metric)
-
- self.finish_run()
-
- def on_val_start(self):
- return
-
- def on_val_batch_start(self):
- return
-
- def on_val_batch_end(self, batch_i, images, targets, paths, shapes, outputs):
- if not (self.comet_log_predictions and ((batch_i + 1) % self.comet_log_prediction_interval == 0)):
- return
-
- for si, pred in enumerate(outputs):
- if len(pred) == 0:
- continue
-
- image = images[si]
- labels = targets[targets[:, 0] == si, 1:]
- shape = shapes[si]
- path = paths[si]
- predn, labelsn = self.preprocess_prediction(image, labels, shape, pred)
- if labelsn is not None:
- self.log_predictions(image, labelsn, path, shape, predn)
-
- return
-
- def on_val_end(self, nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix):
- if self.comet_log_per_class_metrics:
- if self.num_classes > 1:
- for i, c in enumerate(ap_class):
- class_name = self.class_names[c]
- self.experiment.log_metrics(
- {
- 'mAP@.5': ap50[i],
- 'mAP@.5:.95': ap[i],
- 'precision': p[i],
- 'recall': r[i],
- 'f1': f1[i],
- 'true_positives': tp[i],
- 'false_positives': fp[i],
- 'support': nt[c]},
- prefix=class_name)
-
- if self.comet_log_confusion_matrix:
- epoch = self.experiment.curr_epoch
- class_names = list(self.class_names.values())
- class_names.append("background")
- num_classes = len(class_names)
-
- self.experiment.log_confusion_matrix(
- matrix=confusion_matrix.matrix,
- max_categories=num_classes,
- labels=class_names,
- epoch=epoch,
- column_label='Actual Category',
- row_label='Predicted Category',
- file_name=f"confusion-matrix-epoch-{epoch}.json",
- )
-
- def on_fit_epoch_end(self, result, epoch):
- self.log_metrics(result, epoch=epoch)
-
- def on_model_save(self, last, epoch, final_epoch, best_fitness, fi):
- if ((epoch + 1) % self.opt.save_period == 0 and not final_epoch) and self.opt.save_period != -1:
- self.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi)
-
- def on_params_update(self, params):
- self.log_parameters(params)
-
- def finish_run(self):
- self.experiment.end()
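-
-
-# Illustrative usage sketch (not part of the original file): the attribute names
-# mirror what CometLogger reads above, while the values, dataset YAML path and
-# hyperparameters are hypothetical. Without Comet credentials the logger falls
-# back to offline mode (see _get_experiment above).
-if __name__ == "__main__":
-    from types import SimpleNamespace
-
-    opt = SimpleNamespace(
-        save_period=-1,            # disable periodic model uploads
-        upload_dataset=False,
-        resume=False,
-        data="data/coco128.yaml",  # hypothetical dataset config read by check_dataset
-        hyp={"lr0": 0.01},
-        bbox_interval=-1,
-        epochs=100,
-        save_dir="runs/train/exp",
-    )
-    comet_logger = CometLogger(opt, opt.hyp)                      # creates or resumes an experiment
-    comet_logger.log_metrics({"train/box_loss": 0.05}, epoch=0)   # log a scalar for epoch 0
-    comet_logger.finish_run()                                     # end the experiment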
diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/custom_types.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/custom_types.py
deleted file mode 100644
index 9e29951ed9cf690a34bb99e92b8a0ebe59f457a2..0000000000000000000000000000000000000000
--- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/custom_types.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# import open3d
-import enum
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as nnf
-# from .constants import DEBUG
-from typing import Tuple, List, Union, Callable, Type, Iterator, Dict, Set, Optional, Any, Sized, Iterable
-from types import DynamicClassAttribute
-from enum import Enum, unique
-import torch.optim.optimizer
-import torch.utils.data
-
-# if DEBUG:
-# seed = 99
-# torch.manual_seed(seed)
-# np.random.seed(seed)
-
-N = type(None)
-V = np.array
-ARRAY = np.ndarray
-ARRAYS = Union[Tuple[ARRAY, ...], List[ARRAY]]
-VS = Union[Tuple[V, ...], List[V]]
-VN = Union[V, N]
-VNS = Union[VS, N]
-T = torch.Tensor
-TS = Union[Tuple[T, ...], List[T]]
-TN = Optional[T]
-TNS = Union[Tuple[TN, ...], List[TN]]
-TSN = Optional[TS]
-TA = Union[T, ARRAY]
-
-V_Mesh = Tuple[ARRAY, ARRAY]
-T_Mesh = Tuple[T, Optional[T]]
-T_Mesh_T = Union[T_Mesh, T]
-COLORS = Union[T, ARRAY, Tuple[int, int, int]]
-
-D = torch.device
-CPU = torch.device('cpu')
-
-
-def get_device(device_id: int) -> D:
- if not torch.cuda.is_available():
- return CPU
- device_id = min(torch.cuda.device_count() - 1, device_id)
- return torch.device(f'cuda:{device_id}')
-
-
-CUDA = get_device
-Optimizer = torch.optim.Adam
-Dataset = torch.utils.data.Dataset
-DataLoader = torch.utils.data.DataLoader
-Subset = torch.utils.data.Subset
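-
-
-# Illustrative usage sketch (not part of the original module): the aliases above
-# are plain shorthands, e.g. T_Mesh is a (vertices, faces) pair whose faces may
-# be None.
-def _mesh_vertex_count(mesh: T_Mesh) -> int:
-    vertices, _faces = mesh
-    return int(vertices.shape[0])
-
-
-if __name__ == '__main__':
-    device: D = get_device(0)                   # falls back to CPU without CUDA
-    verts: T = torch.rand(8, 3, device=device)  # 8 random vertices
-    print(_mesh_vertex_count((verts, None)))    # -> 8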
diff --git a/spaces/KPCGD/bingo/src/lib/hooks/use-bing.ts b/spaces/KPCGD/bingo/src/lib/hooks/use-bing.ts
deleted file mode 100644
index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/lib/hooks/use-bing.ts
+++ /dev/null
@@ -1,173 +0,0 @@
-'use client'
-
-import { useState, useCallback, useEffect, useMemo } from 'react'
-import { useAtom, useAtomValue } from 'jotai'
-import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state'
-import { setConversationMessages } from './chat-history'
-import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types'
-import { nanoid } from '../utils'
-import { TTS } from '../bots/bing/tts'
-
-export function useBing(botId: BotId = 'bing') {
- const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId])
- const [enableTTS] = useAtom(voiceAtom)
- const speaker = useMemo(() => new TTS(), [])
- const [hash, setHash] = useAtom(hashAtom)
- const bingConversationStyle = useAtomValue(bingConversationStyleAtom)
- const [chatState, setChatState] = useAtom(chatAtom)
- const [input, setInput] = useState('')
- const [attachmentList, setAttachmentList] = useState([])
-
- const updateMessage = useCallback(
- (messageId: string, updater: (message: ChatMessageModel) => void) => {
- setChatState((draft) => {
- const message = draft.messages.find((m) => m.id === messageId)
- if (message) {
- updater(message)
- }
- })
- },
- [setChatState],
- )
-
- const sendMessage = useCallback(
- async (input: string, options = {}) => {
- const botMessageId = nanoid()
- const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined
- setChatState((draft) => {
- const text = imageUrl ? `${input}\n\n` : input
- draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' })
- setAttachmentList([])
- })
- const abortController = new AbortController()
- setChatState((draft) => {
- draft.generatingMessageId = botMessageId
- draft.abortController = abortController
- })
- speaker.reset()
- await chatState.bot.sendMessage({
- prompt: input,
- imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl,
- options: {
- ...options,
- bingConversationStyle,
- },
- signal: abortController.signal,
- onEvent(event) {
- if (event.type === 'UPDATE_ANSWER') {
- updateMessage(botMessageId, (message) => {
- if (event.data.text.length > message.text.length) {
- message.text = event.data.text
- }
-
- if (event.data.spokenText && enableTTS) {
- speaker.speak(event.data.spokenText)
- }
-
- message.throttling = event.data.throttling || message.throttling
- message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions
- message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses
- })
- } else if (event.type === 'ERROR') {
- updateMessage(botMessageId, (message) => {
- message.error = event.error
- })
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- })
- } else if (event.type === 'DONE') {
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- })
- }
- },
- })
- },
- [botId, attachmentList, chatState.bot, setChatState, updateMessage],
- )
-
- const uploadImage = useCallback(async (imgUrl: string) => {
- setAttachmentList([{ url: imgUrl, status: 'loading' }])
- const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle)
- if (response?.blobId) {
- setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }])
- } else {
- setAttachmentList([{ url: imgUrl, status: 'error' }])
- }
- }, [chatState.bot])
-
- const resetConversation = useCallback(() => {
- chatState.bot.resetConversation()
- speaker.abort()
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }]
- draft.conversationId = nanoid()
- })
- }, [chatState.bot, setChatState])
-
- const stopGenerating = useCallback(() => {
- chatState.abortController?.abort()
- if (chatState.generatingMessageId) {
- updateMessage(chatState.generatingMessageId, (message) => {
- if (!message.text && !message.error) {
- message.text = 'Cancelled'
- }
- })
- }
- setChatState((draft) => {
- draft.generatingMessageId = ''
- })
- }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage])
-
- useEffect(() => {
- if (chatState.messages.length) {
- setConversationMessages(botId, chatState.conversationId, chatState.messages)
- }
- }, [botId, chatState.conversationId, chatState.messages])
-
- useEffect(() => {
- if (hash === 'reset') {
- resetConversation()
- setHash('')
- }
- }, [hash, setHash])
-
- const chat = useMemo(
- () => ({
- botId,
- bot: chatState.bot,
- isSpeaking: speaker.isSpeaking,
- messages: chatState.messages,
- sendMessage,
- setInput,
- input,
- resetConversation,
- generating: !!chatState.generatingMessageId,
- stopGenerating,
- uploadImage,
- setAttachmentList,
- attachmentList,
- }),
- [
- botId,
- bingConversationStyle,
- chatState.bot,
- chatState.generatingMessageId,
- chatState.messages,
- speaker.isSpeaking,
- setInput,
- input,
- setAttachmentList,
- attachmentList,
- resetConversation,
- sendMessage,
- stopGenerating,
- ],
- )
-
- return chat
-}
diff --git a/spaces/Kaludi/Food-Category-Classification_App/app.py b/spaces/Kaludi/Food-Category-Classification_App/app.py
deleted file mode 100644
index 1fef378a26fa7bdfb44114f8b72d4e97cecd7991..0000000000000000000000000000000000000000
--- a/spaces/Kaludi/Food-Category-Classification_App/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-examples = ["examples/example_0.jpg",
- "examples/example_1.jpg",
- "examples/example_2.jpg",
- "examples/example_3.jpg",
- "examples/example_4.jpg",
- "examples/example_5.jpg",
- "examples/example_6.jpg",
- "examples/example_7.jpg"]
-
-pipe = pipeline(task="image-classification",
- model="Kaludi/food-category-classification-v2.0")
-gr.Interface.from_pipeline(pipe,
- title="Food Category Classification App",
- description = "This is a Food Category Image Classifier model that has been trained by Kaludi to recognize 12 different categories of foods, which includes Bread, Dairy, Dessert, Egg, Fried Food, Fruit, Meat, Noodles, Rice, Seafood, Soup, and Vegetable. It can accurately classify an image of food into one of these categories by analyzing its visual features. This model can be used by food bloggers, restaurants, and recipe websites to quickly categorize and sort their food images, making it easier to manage their content and provide a better user experience.",
- article = "
-NOTE: All models are trained using speech audio datasets ONLY! (24kHz models: LibriTTS, 22kHz models: LibriTTS + VCTK + LJSpeech).
-
- """)
-
- with gr.Group():
- with gr.Box():
- model_choice = gr.Radio(label="Select the model. Default: bigvgan_24khz_100band",
- value="bigvgan_24khz_100band",
- choices=[m for m in list_model_name],
- type="index",
- interactive=True)
- audio_input = gr.Audio(label="Input Audio",
- elem_id="input-audio",
- interactive=True)
- button = gr.Button("Submit").style(full_width=True)
- output_video = gr.Video(label="Output Audio",
- elem_id="output-video")
- output_image_gen = gr.Image(label="Output Mel Spectrogram",
- elem_id="output-image-gen")
- button.click(inference_gradio,
- inputs=[audio_input, model_choice],
- outputs=[output_video, output_image_gen])
-
- gr.Examples(
- [
- [os.path.join(os.path.dirname(__file__), "examples/jensen.wav"), "bigvgan_24khz_100band"],
- [os.path.join(os.path.dirname(__file__), "examples/libritts.wav"), "bigvgan_24khz_100band"],
- [os.path.join(os.path.dirname(__file__), "examples/queen.wav"), "bigvgan_24khz_100band"],
- [os.path.join(os.path.dirname(__file__), "examples/dance.wav"), "bigvgan_24khz_100band"],
- [os.path.join(os.path.dirname(__file__), "examples/megalovania.wav"), "bigvgan_24khz_100band"],
- ],
- fn=inference_gradio,
- inputs=[audio_input, model_choice],
- outputs=[output_video, output_image_gen],
- cache_examples=True
- )
-
-iface.queue(concurrency_count=3)
-iface.launch()
diff --git a/spaces/LUCKky/QQsign/Dockerfile b/spaces/LUCKky/QQsign/Dockerfile
deleted file mode 100644
index 535624113f3b520e4829240a48bd3652430de828..0000000000000000000000000000000000000000
--- a/spaces/LUCKky/QQsign/Dockerfile
+++ /dev/null
@@ -1,23 +0,0 @@
-FROM openjdk:17-slim
-
-# 设置时区
-ENV TZ Asia/Shanghai
-
-# 设置工作目录
-WORKDIR /app
-
-# 复制文件到工作目录
-COPY bin /app/bin
-COPY lib /app/lib
-COPY txlib /app/txlib
-
-# 设置命令
-RUN chmod -R 777 /tmp
-RUN chmod -R 777 /app
-RUN sed 's/"key": ".*"/"key": "'"$KEY_VALUE"'"/' txlib/$TXLIB_VERSION/config.json > /app/txlib/$TXLIB_VERSION/config.json
-
-# 运行
-CMD bash bin/unidbg-fetch-qsign --basePath=txlib/$TXLIB_VERSION
-
-# 暴露端口
-EXPOSE 7860
\ No newline at end of file
diff --git a/spaces/LaynzKunz/REMAKE-AI-COVER/README.md b/spaces/LaynzKunz/REMAKE-AI-COVER/README.md
deleted file mode 100644
index 7f2b5687c9b2337af9abc6c98f27e1b63e4487b8..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/REMAKE-AI-COVER/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: REMAKE AI COVER
-emoji: 🚀
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: true
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/directionalmove.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/directionalmove.py
deleted file mode 100644
index 21b1f0a6c8910d05680f4a9189cdb46a699057ac..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/directionalmove.py
+++ /dev/null
@@ -1,383 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-from . import Indicator, And, If, MovAv, ATR
-
-
-class UpMove(Indicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"* as part of the Directional Move System to
- calculate Directional Indicators.
-
- Positive if the given data has moved higher than the previous day
-
- Formula:
- - upmove = data - data(-1)
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- lines = ('upmove',)
-
- def __init__(self):
- self.lines.upmove = self.data - self.data(-1)
- super(UpMove, self).__init__()
-
-
-class DownMove(Indicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"* as part of the Directional Move System to
- calculate Directional Indicators.
-
- Positive if the given data has moved lower than the previous day
-
- Formula:
- - downmove = data(-1) - data
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- lines = ('downmove',)
-
- def __init__(self):
- self.lines.downmove = self.data(-1) - self.data
- super(DownMove, self).__init__()
-
-
-class _DirectionalIndicator(Indicator):
- '''
- This class serves as the root base class for all "Directional Movement
- System" related indicators, given that the calculations are first common
- and then derived from the common calculations.
-
-    It can calculate the +DI and -DI values (using kwargs as a hint as to
-    which to calculate) but doesn't assign them to lines. This is left to
-    subclasses of this class.
- '''
- params = (('period', 14), ('movav', MovAv.Smoothed))
-
- plotlines = dict(plusDI=dict(_name='+DI'), minusDI=dict(_name='-DI'))
-
- def _plotlabel(self):
- plabels = [self.p.period]
- plabels += [self.p.movav] * self.p.notdefault('movav')
- return plabels
-
- def __init__(self, _plus=True, _minus=True):
- atr = ATR(self.data, period=self.p.period, movav=self.p.movav)
-
- upmove = self.data.high - self.data.high(-1)
- downmove = self.data.low(-1) - self.data.low
-
- if _plus:
- plus = And(upmove > downmove, upmove > 0.0)
- plusDM = If(plus, upmove, 0.0)
- plusDMav = self.p.movav(plusDM, period=self.p.period)
-
- self.DIplus = 100.0 * plusDMav / atr
-
- if _minus:
- minus = And(downmove > upmove, downmove > 0.0)
- minusDM = If(minus, downmove, 0.0)
- minusDMav = self.p.movav(minusDM, period=self.p.period)
-
- self.DIminus = 100.0 * minusDMav / atr
-
- super(_DirectionalIndicator, self).__init__()
-
-
-class DirectionalIndicator(_DirectionalIndicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"*.
-
- Intended to measure trend strength
-
- This indicator shows +DI, -DI:
- - Use PlusDirectionalIndicator (PlusDI) to get +DI
- - Use MinusDirectionalIndicator (MinusDI) to get -DI
- - Use AverageDirectionalIndex (ADX) to get ADX
- - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR
- - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI
- - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI
-
- Formula:
- - upmove = high - high(-1)
- - downmove = low(-1) - low
- - +dm = upmove if upmove > downmove and upmove > 0 else 0
- - -dm = downmove if downmove > upmove and downmove > 0 else 0
- - +di = 100 * MovingAverage(+dm, period) / atr(period)
- - -di = 100 * MovingAverage(-dm, period) / atr(period)
-
- The moving average used is the one originally defined by Wilder,
- the SmoothedMovingAverage
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- alias = ('DI',)
- lines = ('plusDI', 'minusDI',)
-
- def __init__(self):
- super(DirectionalIndicator, self).__init__()
-
- self.lines.plusDI = self.DIplus
- self.lines.minusDI = self.DIminus
-
-
-class PlusDirectionalIndicator(_DirectionalIndicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"*.
-
- Intended to measure trend strength
-
- This indicator shows +DI:
- - Use MinusDirectionalIndicator (MinusDI) to get -DI
- - Use Directional Indicator (DI) to get +DI, -DI
- - Use AverageDirectionalIndex (ADX) to get ADX
- - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR
- - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI
- - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI
-
- Formula:
- - upmove = high - high(-1)
- - downmove = low(-1) - low
- - +dm = upmove if upmove > downmove and upmove > 0 else 0
- - +di = 100 * MovingAverage(+dm, period) / atr(period)
-
- The moving average used is the one originally defined by Wilder,
- the SmoothedMovingAverage
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- alias = (('PlusDI', '+DI'),)
- lines = ('plusDI',)
-
- plotinfo = dict(plotname='+DirectionalIndicator')
-
- def __init__(self):
- super(PlusDirectionalIndicator, self).__init__(_minus=False)
-
- self.lines.plusDI = self.DIplus
-
-
-class MinusDirectionalIndicator(_DirectionalIndicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"*.
-
- Intended to measure trend strength
-
- This indicator shows -DI:
- - Use PlusDirectionalIndicator (PlusDI) to get +DI
- - Use Directional Indicator (DI) to get +DI, -DI
- - Use AverageDirectionalIndex (ADX) to get ADX
- - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR
- - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI
- - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI
-
- Formula:
- - upmove = high - high(-1)
- - downmove = low(-1) - low
- - -dm = downmove if downmove > upmove and downmove > 0 else 0
- - -di = 100 * MovingAverage(-dm, period) / atr(period)
-
- The moving average used is the one originally defined by Wilder,
- the SmoothedMovingAverage
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- alias = (('MinusDI', '-DI'),)
- lines = ('minusDI',)
-
- plotinfo = dict(plotname='-DirectionalIndicator')
-
- def __init__(self):
- super(MinusDirectionalIndicator, self).__init__(_plus=False)
-
- self.lines.minusDI = self.DIminus
-
-
-class AverageDirectionalMovementIndex(_DirectionalIndicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"*.
-
- Intended to measure trend strength
-
- This indicator only shows ADX:
- - Use PlusDirectionalIndicator (PlusDI) to get +DI
- - Use MinusDirectionalIndicator (MinusDI) to get -DI
- - Use Directional Indicator (DI) to get +DI, -DI
- - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR
- - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI
- - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI
-
- Formula:
- - upmove = high - high(-1)
- - downmove = low(-1) - low
- - +dm = upmove if upmove > downmove and upmove > 0 else 0
- - -dm = downmove if downmove > upmove and downmove > 0 else 0
- - +di = 100 * MovingAverage(+dm, period) / atr(period)
- - -di = 100 * MovingAverage(-dm, period) / atr(period)
- - dx = 100 * abs(+di - -di) / (+di + -di)
- - adx = MovingAverage(dx, period)
-
- The moving average used is the one originally defined by Wilder,
- the SmoothedMovingAverage
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- alias = ('ADX',)
-
- lines = ('adx',)
-
- plotlines = dict(adx=dict(_name='ADX'))
-
- def __init__(self):
- super(AverageDirectionalMovementIndex, self).__init__()
-
- dx = abs(self.DIplus - self.DIminus) / (self.DIplus + self.DIminus)
- self.lines.adx = 100.0 * self.p.movav(dx, period=self.p.period)
-
-
-class AverageDirectionalMovementIndexRating(AverageDirectionalMovementIndex):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"*.
-
- Intended to measure trend strength.
-
- ADXR is the average of ADX with a value period bars ago
-
- This indicator shows the ADX and ADXR:
- - Use PlusDirectionalIndicator (PlusDI) to get +DI
- - Use MinusDirectionalIndicator (MinusDI) to get -DI
- - Use Directional Indicator (DI) to get +DI, -DI
- - Use AverageDirectionalIndex (ADX) to get ADX
- - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI
- - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI
-
- Formula:
- - upmove = high - high(-1)
- - downmove = low(-1) - low
- - +dm = upmove if upmove > downmove and upmove > 0 else 0
- - -dm = downmove if downmove > upmove and downmove > 0 else 0
- - +di = 100 * MovingAverage(+dm, period) / atr(period)
- - -di = 100 * MovingAverage(-dm, period) / atr(period)
- - dx = 100 * abs(+di - -di) / (+di + -di)
- - adx = MovingAverage(dx, period)
- - adxr = (adx + adx(-period)) / 2
-
- The moving average used is the one originally defined by Wilder,
- the SmoothedMovingAverage
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- alias = ('ADXR',)
-
- lines = ('adxr',)
- plotlines = dict(adxr=dict(_name='ADXR'))
-
- def __init__(self):
- super(AverageDirectionalMovementIndexRating, self).__init__()
-
- self.lines.adxr = (self.l.adx + self.l.adx(-self.p.period)) / 2.0
-
-
-class DirectionalMovementIndex(AverageDirectionalMovementIndex,
- DirectionalIndicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"*.
-
- Intended to measure trend strength
-
- This indicator shows the ADX, +DI, -DI:
- - Use PlusDirectionalIndicator (PlusDI) to get +DI
- - Use MinusDirectionalIndicator (MinusDI) to get -DI
- - Use Directional Indicator (DI) to get +DI, -DI
- - Use AverageDirectionalIndex (ADX) to get ADX
- - Use AverageDirectionalIndexRating (ADXRating) to get ADX, ADXR
- - Use DirectionalMovement (DM) to get ADX, ADXR, +DI, -DI
-
- Formula:
- - upmove = high - high(-1)
- - downmove = low(-1) - low
- - +dm = upmove if upmove > downmove and upmove > 0 else 0
- - -dm = downmove if downmove > upmove and downmove > 0 else 0
- - +di = 100 * MovingAverage(+dm, period) / atr(period)
- - -di = 100 * MovingAverage(-dm, period) / atr(period)
- - dx = 100 * abs(+di - -di) / (+di + -di)
- - adx = MovingAverage(dx, period)
-
- The moving average used is the one originally defined by Wilder,
- the SmoothedMovingAverage
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- alias = ('DMI',)
-
-
-class DirectionalMovement(AverageDirectionalMovementIndexRating,
- DirectionalIndicator):
- '''
- Defined by J. Welles Wilder, Jr. in 1978 in his book *"New Concepts in
- Technical Trading Systems"*.
-
- Intended to measure trend strength
-
- This indicator shows ADX, ADXR, +DI, -DI.
-
- - Use PlusDirectionalIndicator (PlusDI) to get +DI
- - Use MinusDirectionalIndicator (MinusDI) to get -DI
- - Use Directional Indicator (DI) to get +DI, -DI
- - Use AverageDirectionalIndex (ADX) to get ADX
- - Use AverageDirectionalIndexRating (ADXR) to get ADX, ADXR
- - Use DirectionalMovementIndex (DMI) to get ADX, +DI, -DI
-
- Formula:
- - upmove = high - high(-1)
- - downmove = low(-1) - low
- - +dm = upmove if upmove > downmove and upmove > 0 else 0
- - -dm = downmove if downmove > upmove and downmove > 0 else 0
- - +di = 100 * MovingAverage(+dm, period) / atr(period)
- - -di = 100 * MovingAverage(-dm, period) / atr(period)
- - dx = 100 * abs(+di - -di) / (+di + -di)
- - adx = MovingAverage(dx, period)
-
- The moving average used is the one originally defined by Wilder,
- the SmoothedMovingAverage
-
- See:
- - https://en.wikipedia.org/wiki/Average_directional_movement_index
- '''
- alias = ('DM',)
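-
-
-# Illustrative sketch (not part of the original module): the docstring formulas
-# above, evaluated on hypothetical prices in plain Python, outside backtrader's
-# line machinery.
-def _plus_minus_dm(high, prev_high, low, prev_low):
-    upmove, downmove = high - prev_high, prev_low - low
-    plus_dm = upmove if (upmove > downmove and upmove > 0) else 0.0
-    minus_dm = downmove if (downmove > upmove and downmove > 0) else 0.0
-    return plus_dm, minus_dm
-
-
-def _dx(plus_di, minus_di):
-    return 100.0 * abs(plus_di - minus_di) / (plus_di + minus_di)
-
-
-if __name__ == '__main__':
-    # One bar that moved higher: only +DM is non-zero.
-    print(_plus_minus_dm(high=105.0, prev_high=102.0, low=101.0, prev_low=100.0))  # (3.0, 0.0)
-    # With +DI=25 and -DI=15: dx = 100 * |25 - 15| / (25 + 15) = 25.0
-    print(_dx(25.0, 15.0))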
diff --git a/spaces/Lilflerkin/WellNexus/app.py b/spaces/Lilflerkin/WellNexus/app.py
deleted file mode 100644
index a9a23043e252575a632d6a4f11738b4a3853d8a7..0000000000000000000000000000000000000000
--- a/spaces/Lilflerkin/WellNexus/app.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import gradio as gr
-import pandas as pd
-import numpy as np
-from joblib import load
-
-
-def predict_disease_from_symptom(symptom_list):
- symptoms = {'itching': 0, 'skin_rash': 0, 'nodal_skin_eruptions': 0, 'continuous_sneezing': 0,
- 'shivering': 0, 'chills': 0, 'joint_pain': 0, 'stomach_pain': 0, 'acidity': 0, 'ulcers_on_tongue': 0,
- 'muscle_wasting': 0, 'vomiting': 0, 'burning_micturition': 0, 'spotting_ urination': 0, 'fatigue': 0,
- 'weight_gain': 0, 'anxiety': 0, 'cold_hands_and_feets': 0, 'mood_swings': 0, 'weight_loss': 0,
- 'restlessness': 0, 'lethargy': 0, 'patches_in_throat': 0, 'irregular_sugar_level': 0, 'cough': 0,
- 'high_fever': 0, 'sunken_eyes': 0, 'breathlessness': 0, 'sweating': 0, 'dehydration': 0,
- 'indigestion': 0, 'headache': 0, 'yellowish_skin': 0, 'dark_urine': 0, 'nausea': 0, 'loss_of_appetite': 0,
- 'pain_behind_the_eyes': 0, 'back_pain': 0, 'constipation': 0, 'abdominal_pain': 0, 'diarrhoea': 0, 'mild_fever': 0,
- 'yellow_urine': 0, 'yellowing_of_eyes': 0, 'acute_liver_failure': 0, 'fluid_overload': 0, 'swelling_of_stomach': 0,
- 'swelled_lymph_nodes': 0, 'malaise': 0, 'blurred_and_distorted_vision': 0, 'phlegm': 0, 'throat_irritation': 0,
- 'redness_of_eyes': 0, 'sinus_pressure': 0, 'runny_nose': 0, 'congestion': 0, 'chest_pain': 0, 'weakness_in_limbs': 0,
- 'fast_heart_rate': 0, 'pain_during_bowel_movements': 0, 'pain_in_anal_region': 0, 'bloody_stool': 0,
- 'irritation_in_anus': 0, 'neck_pain': 0, 'dizziness': 0, 'cramps': 0, 'bruising': 0, 'obesity': 0, 'swollen_legs': 0,
- 'swollen_blood_vessels': 0, 'puffy_face_and_eyes': 0, 'enlarged_thyroid': 0, 'brittle_nails': 0, 'swollen_extremeties': 0,
- 'excessive_hunger': 0, 'extra_marital_contacts': 0, 'drying_and_tingling_lips': 0, 'slurred_speech': 0,
- 'knee_pain': 0, 'hip_joint_pain': 0, 'muscle_weakness': 0, 'stiff_neck': 0, 'swelling_joints': 0, 'movement_stiffness': 0,
- 'spinning_movements': 0, 'loss_of_balance': 0, 'unsteadiness': 0, 'weakness_of_one_body_side': 0, 'loss_of_smell': 0,
- 'bladder_discomfort': 0, 'foul_smell_of urine': 0, 'continuous_feel_of_urine': 0, 'passage_of_gases': 0, 'internal_itching': 0,
- 'toxic_look_(typhos)': 0, 'depression': 0, 'irritability': 0, 'muscle_pain': 0, 'altered_sensorium': 0,
- 'red_spots_over_body': 0, 'belly_pain': 0, 'abnormal_menstruation': 0, 'dischromic _patches': 0, 'watering_from_eyes': 0,
- 'increased_appetite': 0, 'polyuria': 0, 'family_history': 0, 'mucoid_sputum': 0, 'rusty_sputum': 0, 'lack_of_concentration': 0,
- 'visual_disturbances': 0, 'receiving_blood_transfusion': 0, 'receiving_unsterile_injections': 0, 'coma': 0,
- 'stomach_bleeding': 0, 'distention_of_abdomen': 0, 'history_of_alcohol_consumption': 0, 'fluid_overload.1': 0,
- 'blood_in_sputum': 0, 'prominent_veins_on_calf': 0, 'palpitations': 0, 'painful_walking': 0, 'pus_filled_pimples': 0,
- 'blackheads': 0, 'scurring': 0, 'skin_peeling': 0, 'silver_like_dusting': 0, 'small_dents_in_nails': 0, 'inflammatory_nails': 0,
- 'blister': 0, 'red_sore_around_nose': 0, 'yellow_crust_ooze': 0}
-
- for s in symptom_list:
- symptoms[s] = 1
-
-
- df_test = pd.DataFrame(columns=list(symptoms.keys()))
- df_test.loc[0] = np.array(list(symptoms.values()))
-
-
- clf = load(str("./saved_model/random_forest.joblib"))
- result = clf.predict(df_test)
-
-
- del df_test
-
- return f"{result[0]}"
-
-
-iface = gr.Interface(
- predict_disease_from_symptom,
- [
- gr.inputs.CheckboxGroup(['itching', 'skin_rash', 'nodal_skin_eruptions', 'continuous_sneezing', 'shivering', 'chills', 'joint_pain', 'stomach_pain', 'acidity', 'ulcers_on_tongue',
- 'muscle_wasting', 'vomiting', 'burning_micturition', 'spotting_ urination', 'fatigue', 'weight_gain', 'anxiety', 'cold_hands_and_feets', 'mood_swings', 'weight_loss',
- 'restlessness', 'lethargy', 'patches_in_throat', 'irregular_sugar_level', 'cough', 'high_fever', 'sunken_eyes', 'breathlessness', 'sweating', 'dehydration',
- 'indigestion', 'headache', 'yellowish_skin', 'dark_urine', 'nausea', 'loss_of_appetite', 'pain_behind_the_eyes', 'back_pain', 'constipation', 'abdominal_pain', 'diarrhoea', 'mild_fever',
- 'yellow_urine', 'yellowing_of_eyes', 'acute_liver_failure', 'fluid_overload', 'swelling_of_stomach', 'swelled_lymph_nodes', 'malaise', 'blurred_and_distorted_vision', 'phlegm', 'throat_irritation',
- 'redness_of_eyes', 'sinus_pressure', 'runny_nose', 'congestion', 'chest_pain', 'weakness_in_limbs', 'fast_heart_rate', 'pain_during_bowel_movements', 'pain_in_anal_region', 'bloody_stool',
- 'irritation_in_anus', 'neck_pain', 'dizziness', 'cramps', 'bruising', 'obesity', 'swollen_legs', 'swollen_blood_vessels', 'puffy_face_and_eyes', 'enlarged_thyroid', 'brittle_nails', 'swollen_extremeties',
- 'excessive_hunger', 'extra_marital_contacts', 'drying_and_tingling_lips', 'slurred_speech', 'knee_pain', 'hip_joint_pain', 'muscle_weakness', 'stiff_neck', 'swelling_joints', 'movement_stiffness',
- 'spinning_movements', 'loss_of_balance', 'unsteadiness', 'weakness_of_one_body_side', 'loss_of_smell', 'bladder_discomfort', 'foul_smell_of urine', 'continuous_feel_of_urine', 'passage_of_gases', 'internal_itching',
- 'toxic_look_(typhos)', 'depression', 'irritability', 'muscle_pain', 'altered_sensorium', 'red_spots_over_body', 'belly_pain', 'abnormal_menstruation', 'dischromic _patches', 'watering_from_eyes',
- 'increased_appetite', 'polyuria', 'family_history', 'mucoid_sputum', 'rusty_sputum', 'lack_of_concentration', 'visual_disturbances', 'receiving_blood_transfusion', 'receiving_unsterile_injections', 'coma',
- 'stomach_bleeding', 'distention_of_abdomen', 'history_of_alcohol_consumption', 'fluid_overload.1', 'blood_in_sputum', 'prominent_veins_on_calf', 'palpitations', 'painful_walking', 'pus_filled_pimples',
- 'blackheads', 'scurring', 'skin_peeling', 'silver_like_dusting', 'small_dents_in_nails', 'inflammatory_nails', 'blister', 'red_sore_around_nose', 'yellow_crust_ooze']),
- ],
- "text",
- description="Select a symptom from the list and click submit to get predicted Disease as the Output."
-)
-
-iface.launch(inline=False)
\ No newline at end of file
diff --git a/spaces/ML701G7/taim-gan/src/test_project/example.py b/spaces/ML701G7/taim-gan/src/test_project/example.py
deleted file mode 100644
index 073fbb6ff1f6d54a671927d7e61d93f6e0ba7417..0000000000000000000000000000000000000000
--- a/spaces/ML701G7/taim-gan/src/test_project/example.py
+++ /dev/null
@@ -1,18 +0,0 @@
-"""doing some stuff here"""
-
-
-class Foo:
- """sample text"""
-
- def __init__(self, first_var: int, second_var: int) -> None:
- """init the bar"""
- self.first = first_var
- self.second = second_var
-
- def get_bar(self) -> int:
- """return bar"""
- return self.first
-
- def get_foo(self) -> int:
- """return bar"""
- return self.second
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/eval.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/eval.py
deleted file mode 100644
index 2f1310a62b91392ba4aa205b21e916be894d3bdc..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/eval.py
+++ /dev/null
@@ -1,368 +0,0 @@
-import argparse
-import datetime
-import logging
-import inspect
-import math
-import os
-from typing import Dict, Optional, Tuple
-from omegaconf import OmegaConf
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-import numpy as np
-from PIL import Image
-
-import diffusers
-import transformers
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from diffusers import AutoencoderKL, DDPMScheduler, DDIMScheduler, PNDMScheduler, ControlNetModel, PriorTransformer, UnCLIPScheduler
-from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
-from diffusers.optimization import get_scheduler
-from diffusers.utils import check_min_version
-from diffusers.utils.import_utils import is_xformers_available
-from tqdm.auto import tqdm
-from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection, CLIPTextModelWithProjection
-
-from makeaprotagonist.models.unet import UNet3DConditionModel
-from makeaprotagonist.dataset.dataset import MakeAProtagonistDataset
-from makeaprotagonist.pipelines.pipeline_stable_unclip_controlavideo import MakeAProtagonistStableUnCLIPPipeline, MultiControlNetModel
-from makeaprotagonist.util import save_videos_grid, ddim_inversion_unclip, ddim_inversion_prior
-from einops import rearrange
-from makeaprotagonist.args_util import DictAction, config_merge_dict
-import ipdb
-import random
-from glob import glob
-import sys
-
-
-
-# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.15.0.dev0")
-
-logger = get_logger(__name__, log_level="INFO")
-
-
-def main(
- pretrained_model_path: str,
- controlnet_pretrained_model_path: str,
- output_dir: str,
- train_data: Dict,
- validation_data: Dict,
- validation_steps: int = 100,
- trainable_modules: Tuple[str] = (
- "attn1.to_q",
- "attn2.to_q",
- "attn_temp",
- ),
- trainable_params: Tuple[str] = (),
- train_batch_size: int = 1,
- max_train_steps: int = 500,
- learning_rate: float = 3e-5,
- scale_lr: bool = False,
- lr_scheduler: str = "constant",
- lr_warmup_steps: int = 0,
- adam_beta1: float = 0.9,
- adam_beta2: float = 0.999,
- adam_weight_decay: float = 1e-2,
- adam_epsilon: float = 1e-08,
- max_grad_norm: float = 1.0,
- gradient_accumulation_steps: int = 1,
- gradient_checkpointing: bool = True,
- checkpointing_steps: int = 500,
- resume_from_checkpoint: Optional[str] = None,
- mixed_precision: Optional[str] = "fp16",
- use_8bit_adam: bool = False,
- enable_xformers_memory_efficient_attention: bool = True,
- seed: Optional[int] = None,
- adapter_config=None, # the config for adapter
- use_temporal_conv=False, ## use temporal conv in resblocks
-):
- *_, config = inspect.getargvalues(inspect.currentframe())
-
- accelerator = Accelerator(
- gradient_accumulation_steps=gradient_accumulation_steps,
- mixed_precision=mixed_precision,
- )
-
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- transformers.utils.logging.set_verbosity_warning()
- diffusers.utils.logging.set_verbosity_info()
- else:
- transformers.utils.logging.set_verbosity_error()
- diffusers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if seed is not None:
- set_seed(seed)
-
- # Handle the output folder creation
- if accelerator.is_main_process:
- # now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
- # output_dir = os.path.join(output_dir, now)
- os.makedirs(output_dir, exist_ok=True)
- os.makedirs(f"{output_dir}/samples", exist_ok=True)
- os.makedirs(f"{output_dir}/inv_latents", exist_ok=True)
- OmegaConf.save(config, os.path.join(output_dir, 'config.yaml'))
-
- prior_model_id = "kakaobrain/karlo-v1-alpha"
- data_type = torch.float16
- prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type)
-
- prior_text_model_id = "openai/clip-vit-large-patch14"
- prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id)
- prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type)
- prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler")
- prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config)
-
-
- # image encoding components
- feature_extractor = CLIPImageProcessor.from_pretrained(pretrained_model_path, subfolder="feature_extractor")
- image_encoder = CLIPVisionModelWithProjection.from_pretrained(pretrained_model_path, subfolder="image_encoder")
- # image noising components
- image_normalizer = StableUnCLIPImageNormalizer.from_pretrained(pretrained_model_path, subfolder="image_normalizer")
- image_noising_scheduler = DDPMScheduler.from_pretrained(pretrained_model_path, subfolder="image_noising_scheduler")
- # regular denoising components
- tokenizer = CLIPTokenizer.from_pretrained(pretrained_model_path, subfolder="tokenizer")
- text_encoder = CLIPTextModel.from_pretrained(pretrained_model_path, subfolder="text_encoder")
- unet = UNet3DConditionModel.from_pretrained_2d(pretrained_model_path, subfolder="unet", use_temporal_conv=use_temporal_conv)
-
-
- # vae
- vae = AutoencoderKL.from_pretrained(pretrained_model_path, subfolder="vae")
- ## controlnet
- assert not isinstance(controlnet_pretrained_model_path, str)
- controlnet = MultiControlNetModel( [ControlNetModel.from_pretrained(_control_model_path) for _control_model_path in controlnet_pretrained_model_path] )
-
- # Freeze vae and text_encoder and adapter
- vae.requires_grad_(False)
- text_encoder.requires_grad_(False)
-
- ## freeze image embed
- image_encoder.requires_grad_(False)
-
- unet.requires_grad_(False)
- ## freeze controlnet
- controlnet.requires_grad_(False)
-
- ## freeze prior
- prior.requires_grad_(False)
- prior_text_model.requires_grad_(False)
-
-
- if enable_xformers_memory_efficient_attention:
- if is_xformers_available():
- unet.enable_xformers_memory_efficient_attention()
- controlnet.enable_xformers_memory_efficient_attention()
- else:
- raise ValueError("xformers is not available. Make sure it is installed correctly")
-
- if gradient_checkpointing:
- unet.enable_gradient_checkpointing()
-
- if scale_lr:
- learning_rate = (
- learning_rate * gradient_accumulation_steps * train_batch_size * accelerator.num_processes
- )
-
- # Get the training dataset
- train_dataset = MakeAProtagonistDataset(**train_data)
-
- # Preprocessing the dataset
- train_dataset.prompt_ids = tokenizer(
- train_dataset.prompt, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
- ).input_ids[0]
-
- train_dataset.preprocess_img_embedding(feature_extractor, image_encoder)
- # DataLoaders creation:
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset, batch_size=train_batch_size, num_workers=0,
- )
-
- prior_val_scheduler = DDIMScheduler.from_config(prior_scheduler.config) if validation_data.get("prior_val_scheduler", "") == "DDIM" else prior_scheduler
- # ipdb.set_trace()
- validation_pipeline = MakeAProtagonistStableUnCLIPPipeline(
- prior_tokenizer=prior_tokenizer,
- prior_text_encoder=prior_text_model,
- prior=prior,
- prior_scheduler=prior_val_scheduler,
- feature_extractor=feature_extractor,
- image_encoder=image_encoder,
- image_normalizer=image_normalizer,
- image_noising_scheduler=image_noising_scheduler,
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- controlnet=controlnet,
- scheduler=DDIMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler")
- )
-
-
- validation_pipeline.enable_vae_slicing()
- ddim_inv_scheduler = DDIMScheduler.from_pretrained(pretrained_model_path, subfolder='scheduler')
- ddim_inv_scheduler.set_timesteps(validation_data.num_inv_steps)
-
- ddim_inv_prior_scheduler = None
- if validation_data.get("use_prior_inv_latent", False):
- ddim_inv_prior_scheduler = DDIMScheduler.from_config(prior_scheduler.config)
- ddim_inv_prior_scheduler.set_timesteps(validation_data.prior_num_inv_steps)
-
- unet, train_dataloader = accelerator.prepare(
- unet, train_dataloader
- )
-
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- weight_dtype = torch.float32
- if accelerator.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif accelerator.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
- # Move text_encode and vae to gpu and cast to weight_dtype
- text_encoder.to(accelerator.device, dtype=weight_dtype)
- vae.to(accelerator.device, dtype=weight_dtype)
- image_encoder.to(accelerator.device, dtype=weight_dtype)
- ## note controlnet use the unet dtype
- controlnet.to(accelerator.device, dtype=weight_dtype)
- ## prior
- prior.to(accelerator.device, dtype=weight_dtype)
- prior_text_model.to(accelerator.device, dtype=weight_dtype)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initializes automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("text2video-fine-tune")
-
- global_step = 0
- # Potentially load in the weights and states from a previous save
- if resume_from_checkpoint:
- ## resume_from_checkpoint is the path to the checkpoint-300 dir
- accelerator.load_state(resume_from_checkpoint)
- path = os.path.basename(resume_from_checkpoint)
- global_step = int(path.split("-")[1])
-
-
- if not "noise_level" in validation_data:
- validation_data.noise_level = train_data.noise_level
- if not "noise_level_inv" in validation_data:
- validation_data.noise_level_inv = validation_data.noise_level
- # Checks if the accelerator has performed an optimization step behind the scenes
-
- if accelerator.is_main_process:
-
- batch = next(iter(train_dataloader))
-
- # ipdb.set_trace()
- pixel_values = batch["pixel_values"].to(weight_dtype)
- video_length = pixel_values.shape[1]
- pixel_values = rearrange(pixel_values, "b f c h w -> (b f) c h w")
- latents = vae.encode(pixel_values).latent_dist.sample()
- latents = rearrange(latents, "(b f) c h w -> b c f h w", f=video_length)
- latents = latents * vae.config.scaling_factor
-
-
- # ControlNet
- # ipdb.set_trace()
- conditions = [_condition.to(weight_dtype) for _, _condition in batch["conditions"].items()] # b f c h w
- masks = batch["masks"].to(weight_dtype) # b,f,1,h,w
- # ipdb.set_trace()
- if not validation_data.get("use_masks", False):
- masks = torch.ones_like(masks)
- # conditions = rearrange(conditions, "b f c h w -> (b f) c h w") ## here is rgb
- ## NOTE in this pretrained model, the config is also rgb
- ## https://huggingface.co/thibaud/controlnet-sd21-openpose-diffusers/blob/main/config.json
-
- # ipdb.set_trace()
- ddim_inv_latent = None
- if validation_data.use_inv_latent: #
- emb_dim = train_dataset.img_embeddings[0].size(0)
- key_frame_embed = torch.zeros((1, emb_dim)).to(device=latents.device, dtype=latents.dtype) ## this is dim 0
- ddim_inv_latent = ddim_inversion_unclip(
- validation_pipeline, ddim_inv_scheduler, video_latent=latents,
- num_inv_steps=validation_data.num_inv_steps, prompt="", image_embed=key_frame_embed, noise_level=validation_data.noise_level, seed=seed)[-1].to(weight_dtype)
-
- set_noise = validation_data.pop("noise_level")
- v_noise = set_noise
-
- if not validation_data.get("interpolate_embed_weight", False):
- validation_data.interpolate_embed_weight = 0
-
-
- samples = []
-
- generator = torch.Generator(device=accelerator.device)
- generator.manual_seed(seed)
-
- for idx, prompt in enumerate(validation_data.prompts):
-
- _ref_image = Image.open(validation_data.ref_images[idx])
- image_embed = None
- ## prior latents
- prior_embeds = None
- prior_denoised_embeds = None
- if validation_data.get("source_background", False):
- ## using source background and changing the protagonist
- prior_denoised_embeds = train_dataset.img_embeddings[0][None].to(device=latents.device, dtype=latents.dtype) # 1, 768 for UnCLIP-small
-
- if validation_data.get("source_protagonist", False):
- # using source protagonist and changing the background
- sample_indices = batch["sample_indices"][0]
- image_embed = [train_dataset.img_embeddings[idx] for idx in sample_indices]
- image_embed = torch.stack(image_embed, dim=0).to(device=latents.device, dtype=latents.dtype) # F, 768 for UnCLIP-small # F,C
- _ref_image = None
-
- sample = validation_pipeline(image=_ref_image, prompt=prompt, control_image=conditions, generator=generator, latents=ddim_inv_latent, image_embeds=image_embed, noise_level=v_noise, masks=masks, prior_latents=prior_embeds, prior_denoised_embeds=prior_denoised_embeds, **validation_data).videos
-
- save_videos_grid(sample, f"{output_dir}/samples/sample-{global_step}-seed{seed}/{idx}-{prompt}.gif")
- samples.append(sample)
-
- #
- samples = [sample.float() for sample in samples]
- samples = torch.concat(samples)
- save_path = f"{output_dir}/samples/sample-{global_step}-s{validation_data.start_step}-e{validation_data.end_step}-seed{seed}.gif" # noise level and noise level for inv
- save_videos_grid(samples, save_path, n_rows=len(samples))
- logger.info(f"Saved samples to {save_path}")
-
-
-
- accelerator.end_training()
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--config", type=str, default="./configs/tuneavideo.yaml")
- parser.add_argument(
- '--options',
- nargs='+',
- action=DictAction, ##NOTE cannot support multi-level config change
- help="--options is deprecated in favor of --cfg_options' and it will "
- 'not be supported in version v0.22.0. Override some settings in the '
- 'used config, the key-value pair in xxx=yyy format will be merged '
- 'into config file. If the value to be overwritten is a list, it '
- 'should be like key="[a,b]" or key=a,b It also allows nested '
- 'list/tuple values, e.g. key="[(a,b),(c,d)]" Note that the quotation '
- 'marks are necessary and that no white space is allowed.')
-
- args = parser.parse_args()
-
- ## read from cmd line
- # ipdb.set_trace()
- # Load the YAML configuration file
- config = OmegaConf.load(args.config)
- # Merge the command-line arguments with the configuration file
- if args.options is not None:
- # config = OmegaConf.merge(config, args.options)
- config_merge_dict(args.options, config)
-
- main(**config)
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/__init__.py
deleted file mode 100644
index 04b8b8618cd33efabdaec69328de2f5a8a58d2f9..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/predictors/__init__.py
+++ /dev/null
@@ -1,95 +0,0 @@
-from .base import BasePredictor
-from .brs import InputBRSPredictor, FeatureBRSPredictor, HRNetFeatureBRSPredictor
-from .brs_functors import InputOptimizer, ScaleBiasOptimizer
-from ..transforms import ZoomIn
-from ...model.is_hrnet_model import DistMapsHRNetModel
-
-
-def get_predictor(net, brs_mode, device,
- prob_thresh=0.49,
- with_flip=True,
- zoom_in_params=dict(),
- predictor_params=None,
- brs_opt_func_params=None,
- lbfgs_params=None):
- lbfgs_params_ = {
- 'm': 20,
- 'factr': 0,
- 'pgtol': 1e-8,
- 'maxfun': 20,
- }
-
- predictor_params_ = {
- 'optimize_after_n_clicks': 1
- }
-
- if zoom_in_params is not None:
- zoom_in = ZoomIn(**zoom_in_params)
- else:
- zoom_in = None
-
- if lbfgs_params is not None:
- lbfgs_params_.update(lbfgs_params)
- lbfgs_params_['maxiter'] = 2 * lbfgs_params_['maxfun']
-
- if brs_opt_func_params is None:
- brs_opt_func_params = dict()
-
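-    # Build the predictor that matches the requested brs_mode.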
- if brs_mode == 'NoBRS':
- if predictor_params is not None:
- predictor_params_.update(predictor_params)
- predictor = BasePredictor(net, device, zoom_in=zoom_in, with_flip=with_flip, **predictor_params_)
- elif brs_mode.startswith('f-BRS'):
- predictor_params_.update({
- 'net_clicks_limit': 8,
- })
- if predictor_params is not None:
- predictor_params_.update(predictor_params)
-
- insertion_mode = {
- 'f-BRS-A': 'after_c4',
- 'f-BRS-B': 'after_aspp',
- 'f-BRS-C': 'after_deeplab'
- }[brs_mode]
-
- opt_functor = ScaleBiasOptimizer(prob_thresh=prob_thresh,
- with_flip=with_flip,
- optimizer_params=lbfgs_params_,
- **brs_opt_func_params)
-
- if isinstance(net, DistMapsHRNetModel):
- FeaturePredictor = HRNetFeatureBRSPredictor
- insertion_mode = {'after_c4': 'A', 'after_aspp': 'A', 'after_deeplab': 'C'}[insertion_mode]
- else:
- FeaturePredictor = FeatureBRSPredictor
-
- predictor = FeaturePredictor(net, device,
- opt_functor=opt_functor,
- with_flip=with_flip,
- insertion_mode=insertion_mode,
- zoom_in=zoom_in,
- **predictor_params_)
- elif brs_mode == 'RGB-BRS' or brs_mode == 'DistMap-BRS':
- use_dmaps = brs_mode == 'DistMap-BRS'
-
- predictor_params_.update({
- 'net_clicks_limit': 5,
- })
- if predictor_params is not None:
- predictor_params_.update(predictor_params)
-
- opt_functor = InputOptimizer(prob_thresh=prob_thresh,
- with_flip=with_flip,
- optimizer_params=lbfgs_params_,
- **brs_opt_func_params)
-
- predictor = InputBRSPredictor(net, device,
- optimize_target='dmaps' if use_dmaps else 'rgb',
- opt_functor=opt_functor,
- with_flip=with_flip,
- zoom_in=zoom_in,
- **predictor_params_)
- else:
- raise NotImplementedError
-
- return predictor
diff --git a/spaces/Makiing/coolb-in-gtest/src/pages/api/kblob.ts b/spaces/Makiing/coolb-in-gtest/src/pages/api/kblob.ts
deleted file mode 100644
index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/pages/api/kblob.ts
+++ /dev/null
@@ -1,56 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import FormData from 'form-data'
-import { fetch } from '@/lib/isomorphic'
-import { KBlobRequest } from '@/lib/bots/bing/types'
-
-const API_DOMAIN = 'https://bing.vcanbb.top'
-
-export const config = {
- api: {
- bodyParser: {
- sizeLimit: '10mb' // Set desired value here
- }
- }
-}
-
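-// Proxies the image upload (knowledge request plus optional base64 image) to Bing's kblob endpoint as multipart form data.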
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest
-
- const formData = new FormData()
- formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
- if (imageBase64) {
- formData.append('imageBase64', imageBase64)
- }
-
- const response = await fetch(`${API_DOMAIN}/images/kblob`,
- {
- method: 'POST',
- body: formData.getBuffer(),
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referer": `${API_DOMAIN}/web/index.html`,
- "Referrer-Policy": "origin-when-cross-origin",
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- ...formData.getHeaders()
- }
- }
- ).then(res => res.text())
-
- res.writeHead(200, {
- 'Content-Type': 'application/json',
- })
-    res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: 'Please change your IP or proxy and try again' } }))
- } catch (e) {
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/MarcusSu1216/XingTong/inference/infer_tool.py b/spaces/MarcusSu1216/XingTong/inference/infer_tool.py
deleted file mode 100644
index def9246201c607f06a3e240feef7f46af9d9fef1..0000000000000000000000000000000000000000
--- a/spaces/MarcusSu1216/XingTong/inference/infer_tool.py
+++ /dev/null
@@ -1,355 +0,0 @@
-import hashlib
-import io
-import json
-import logging
-import os
-import time
-from pathlib import Path
-from inference import slicer
-
-import librosa
-import numpy as np
-# import onnxruntime
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-import cluster
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
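-# Load a temporary JSON cache; once the file exceeds 50 MB, prune entries older than 14 days.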
-def read_temp(file_name):
- if not os.path.exists(file_name):
- with open(file_name, "w") as f:
- f.write(json.dumps({"info": "temp_dict"}))
- return {}
- else:
- try:
- with open(file_name, "r") as f:
- data = f.read()
- data_dict = json.loads(data)
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
- f_name = file_name.replace("\\", "/").split("/")[-1]
- print(f"clean {f_name}")
- for wav_hash in list(data_dict.keys()):
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
- del data_dict[wav_hash]
- except Exception as e:
- print(e)
-            print(f"{file_name} error, rebuilding temp file")
- data_dict = {"info": "temp_dict"}
- return data_dict
-
-
-def write_temp(file_name, data):
- with open(file_name, "w") as f:
- f.write(json.dumps(data))
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
-        print('executing \'%s\' took %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-def format_wav(audio_path):
- if Path(audio_path).suffix == '.wav':
- return
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
-
-
-def get_end_file(dir_path, end):
- file_lists = []
- for root, dirs, files in os.walk(dir_path):
- files = [f for f in files if f[0] != '.']
- dirs[:] = [d for d in dirs if d[0] != '.']
- for f_file in files:
- if f_file.endswith(end):
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
- return file_lists
-
-
-def get_md5(content):
- return hashlib.new("md5", content).hexdigest()
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-def pad_array(arr, target_length):
- current_length = arr.shape[0]
- if current_length >= target_length:
- return arr
- else:
- pad_width = target_length - current_length
- pad_left = pad_width // 2
- pad_right = pad_width - pad_left
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
- return padded_arr
-
-def split_list_by_n(list_collection, n, pre=0):
- for i in range(0, len(list_collection), n):
- yield list_collection[i-pre if i-pre>=0 else i: i + n]
-
-
-class F0FilterException(Exception):
- pass
-
-class Svc(object):
- def __init__(self, net_g_path, config_path,
- device=None,
- cluster_model_path="logs/44k/kmeans_10000.pt",
- nsf_hifigan_enhance = False
- ):
- self.net_g_path = net_g_path
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.net_g_ms = None
- self.hps_ms = utils.get_hparams_from_file(config_path)
- self.target_sample = self.hps_ms.data.sampling_rate
- self.hop_size = self.hps_ms.data.hop_length
- self.spk2id = self.hps_ms.spk
- self.nsf_hifigan_enhance = nsf_hifigan_enhance
-        # load the HuBERT content encoder
- self.hubert_model = utils.get_hubert_model().to(self.dev)
- self.load_model()
- if os.path.exists(cluster_model_path):
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
- if self.nsf_hifigan_enhance:
- from modules.enhancer import Enhancer
- self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model',device=self.dev)
-
- def load_model(self):
-        # build the synthesizer from the model configuration
- self.net_g_ms = SynthesizerTrn(
- self.hps_ms.data.filter_length // 2 + 1,
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
- **self.hps_ms.model)
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
- if "half" in self.net_g_path and torch.cuda.is_available():
- _ = self.net_g_ms.half().eval().to(self.dev)
- else:
- _ = self.net_g_ms.eval().to(self.dev)
-
-
-
-    def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter, F0_mean_pooling):
-
- wav, sr = librosa.load(in_path, sr=self.target_sample)
-
- if F0_mean_pooling == True:
- f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev)
- if f0_filter and sum(f0) == 0:
-                raise F0FilterException("No voice detected")
- f0 = torch.FloatTensor(list(f0))
- uv = torch.FloatTensor(list(uv))
- if F0_mean_pooling == False:
- f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size)
- if f0_filter and sum(f0) == 0:
-                raise F0FilterException("No voice detected")
- f0, uv = utils.interpolate_f0(f0)
- f0 = torch.FloatTensor(f0)
- uv = torch.FloatTensor(uv)
-
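-        # Transpose the f0 contour by `tran` semitones and move f0/uv to the target device.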
- f0 = f0 * 2 ** (tran / 12)
- f0 = f0.unsqueeze(0).to(self.dev)
- uv = uv.unsqueeze(0).to(self.dev)
-
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
- wav16k = torch.from_numpy(wav16k).to(self.dev)
- c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k)
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
-
- if cluster_infer_ratio !=0:
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
-
- c = c.unsqueeze(0)
- return c, f0, uv
-
- def infer(self, speaker, tran, raw_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False,
- F0_mean_pooling=False,
- enhancer_adaptive_key = 0
- ):
-
- speaker_id = self.spk2id.__dict__.get(speaker)
- if not speaker_id and type(speaker) is int:
- if len(self.spk2id.__dict__) >= speaker:
- speaker_id = speaker
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
- c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling)
- if "half" in self.net_g_path and torch.cuda.is_available():
- c = c.half()
- with torch.no_grad():
- start = time.time()
- audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
- if self.nsf_hifigan_enhance:
- audio, _ = self.enhancer.enhance(
- audio[None,:],
- self.target_sample,
- f0[:,:,None],
- self.hps_ms.data.hop_length,
- adaptive_key = enhancer_adaptive_key)
- use_time = time.time() - start
- print("vits use time:{}".format(use_time))
- return audio, audio.shape[-1]
-
- def clear_empty(self):
-        # free cached GPU memory
- torch.cuda.empty_cache()
-
- def slice_inference(self,
- raw_audio_path,
- spk,
- tran,
- slice_db,
- cluster_infer_ratio,
- auto_predict_f0,
- noice_scale,
- pad_seconds=0.5,
- clip_seconds=0,
- lg_num=0,
- lgr_num =0.75,
- F0_mean_pooling = False,
- enhancer_adaptive_key = 0
- ):
- wav_path = raw_audio_path
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
- per_size = int(clip_seconds*audio_sr)
- lg_size = int(lg_num*audio_sr)
- lg_size_r = int(lg_size*lgr_num)
- lg_size_c_l = (lg_size-lg_size_r)//2
- lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
- lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0
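-        # lg is the linear crossfade ramp used to blend the overlapping region between consecutive chunks.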
-
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-            # pad
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- audio.extend(list(pad_array(_audio, length)))
- continue
- if per_size != 0:
- datas = split_list_by_n(data, per_size,lg_size)
- else:
- datas = [data]
- for k,dat in enumerate(datas):
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length
- if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
-                # pad
- pad_len = int(audio_sr * pad_seconds)
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
- raw_path = io.BytesIO()
- soundfile.write(raw_path, dat, audio_sr, format="wav")
- raw_path.seek(0)
- out_audio, out_sr = self.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- F0_mean_pooling = F0_mean_pooling,
- enhancer_adaptive_key = enhancer_adaptive_key
- )
- _audio = out_audio.cpu().numpy()
- pad_len = int(self.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- _audio = pad_array(_audio, per_length)
- if lg_size!=0 and k!=0:
- lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
- lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
- lg_pre = lg1*(1-lg)+lg2*lg
- audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
- audio.extend(lg_pre)
- _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
- audio.extend(list(_audio))
- return np.array(audio)
-
-class RealTimeVC:
- def __init__(self):
- self.last_chunk = None
- self.last_o = None
-        self.chunk_len = 16000  # chunk length
-        self.pre_len = 3840  # crossfade length, must be a multiple of 640
-
-    """Input and output are both 1-D numpy audio waveform arrays."""
-
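-    # process() keeps the tail of the previous output chunk and crossfades it into the new chunk to avoid clicks at chunk boundaries.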
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False):
-
- import maad
- audio, sr = torchaudio.load(input_wav_path)
- audio = audio.cpu().numpy()[0]
- temp_wav = io.BytesIO()
- if self.last_chunk is None:
- input_wav_path.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return audio[-self.chunk_len:]
- else:
- audio = np.concatenate([self.last_chunk, audio])
- soundfile.write(temp_wav, audio, sr, format="wav")
- temp_wav.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return ret[self.chunk_len:2 * self.chunk_len]
\ No newline at end of file
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py
deleted file mode 100644
index 93258242a90695cc94a7c6bd41562d6a75988771..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- backbone=dict(
- type='MobileNetV3',
- arch='large',
- out_indices=(1, 3, 16),
- norm_cfg=norm_cfg),
- decode_head=dict(
- type='LRASPPHead',
- in_channels=(16, 24, 960),
- in_index=(0, 1, 2),
- channels=128,
- input_transform='multiple_select',
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'),
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/model/HGPIFuNet.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/model/HGPIFuNet.py
deleted file mode 100644
index 4771715345afcf326b3b0e64717517801fe75a1c..0000000000000000000000000000000000000000
--- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/model/HGPIFuNet.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from .BasePIFuNet import BasePIFuNet
-from .SurfaceClassifier import SurfaceClassifier
-from .DepthNormalizer import DepthNormalizer
-from .HGFilters import *
-from ..net_util import init_net
-
-
-class HGPIFuNet(BasePIFuNet):
- '''
- HG PIFu network uses Hourglass stacks as the image filter.
- It does the following:
-        1. Compute image feature stacks and store them in self.im_feat_list
-            self.im_feat_list[-1] is the last stack (output stack)
-        2. Calculate calibration
-        3. If training, it indexes into every intermediate stack,
-            if testing, it indexes into the last stack only.
- 4. Classification.
- 5. During training, error is calculated on all stacks.
- '''
-
- def __init__(self,
- opt,
- projection_mode='orthogonal',
- error_term=nn.MSELoss(),
- ):
- super(HGPIFuNet, self).__init__(
- projection_mode=projection_mode,
- error_term=error_term)
-
- self.name = 'hgpifu'
-
- self.opt = opt
- self.num_views = self.opt.num_views
-
- self.image_filter = HGFilter(opt)
-
- self.surface_classifier = SurfaceClassifier(
- filter_channels=self.opt.mlp_dim,
- num_views=self.opt.num_views,
- no_residual=self.opt.no_residual,
- last_op=nn.Sigmoid())
-
- self.normalizer = DepthNormalizer(opt)
-
- # This is a list of [B x Feat_i x H x W] features
- self.im_feat_list = []
- self.tmpx = None
- self.normx = None
-
- self.intermediate_preds_list = []
-
- init_net(self)
-
- def filter(self, images):
- '''
- Filter the input images
- store all intermediate features.
- :param images: [B, C, H, W] input images
- '''
- self.im_feat_list, self.tmpx, self.normx = self.image_filter(images)
- # If it is not in training, only produce the last im_feat
- if not self.training:
- self.im_feat_list = [self.im_feat_list[-1]]
-
- def query(self, points, calibs, transforms=None, labels=None):
- '''
- Given 3D points, query the network predictions for each point.
- Image features should be pre-computed before this call.
- store all intermediate features.
- query() function may behave differently during training/testing.
- :param points: [B, 3, N] world space coordinates of points
- :param calibs: [B, 3, 4] calibration matrices for each image
- :param transforms: Optional [B, 2, 3] image space coordinate transforms
- :param labels: Optional [B, Res, N] gt labeling
- :return: [B, Res, N] predictions for each point
- '''
- if labels is not None:
- self.labels = labels
-
- xyz = self.projection(points, calibs, transforms)
- xy = xyz[:, :2, :]
- z = xyz[:, 2:3, :]
-
- in_img = (xy[:, 0] >= -1.0) & (xy[:, 0] <= 1.0) & (xy[:, 1] >= -1.0) & (xy[:, 1] <= 1.0)
-
- z_feat = self.normalizer(z, calibs=calibs)
-
- if self.opt.skip_hourglass:
- tmpx_local_feature = self.index(self.tmpx, xy)
-
- self.intermediate_preds_list = []
-
- for im_feat in self.im_feat_list:
- # [B, Feat_i + z, N]
- point_local_feat_list = [self.index(im_feat, xy), z_feat]
-
- if self.opt.skip_hourglass:
- point_local_feat_list.append(tmpx_local_feature)
-
- point_local_feat = torch.cat(point_local_feat_list, 1)
-
- # out of image plane is always set to 0
- pred = in_img[:,None].float() * self.surface_classifier(point_local_feat)
- self.intermediate_preds_list.append(pred)
-
- self.preds = self.intermediate_preds_list[-1]
-
- def get_im_feat(self):
- '''
-        Get the last image feature produced by the image filter
- :return: [B, C_feat, H, W] image feature after filtering
- '''
- return self.im_feat_list[-1]
-
- def get_error(self):
- '''
- Hourglass has its own intermediate supervision scheme
- '''
- error = 0
- for preds in self.intermediate_preds_list:
- error += self.error_term(preds, self.labels)
- error /= len(self.intermediate_preds_list)
-
- return error
-
- def forward(self, images, points, calibs, transforms=None, labels=None):
- # Get image feature
- self.filter(images)
-
- # Phase 2: point query
- self.query(points=points, calibs=calibs, transforms=transforms, labels=labels)
-
- # get the prediction
- res = self.get_preds()
-
- # get the error
- error = self.get_error()
-
- return res, error
\ No newline at end of file
diff --git a/spaces/MingGatsby/multi-query-sentiment/app.py b/spaces/MingGatsby/multi-query-sentiment/app.py
deleted file mode 100644
index a327fe6f15c6a5b70e73be0cca82b75924cec475..0000000000000000000000000000000000000000
--- a/spaces/MingGatsby/multi-query-sentiment/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from pathlib import Path
-
-from htmltools import HTMLDependency, tags
-from shiny import App, reactive, ui
-
-from query import query_output_server, query_output_ui
-
-button_style = {"style": "margin: 15px"}
-
-www_dir = Path(__file__).parent / "www"
-app_ui = ui.page_fluid(
- HTMLDependency(
- "bootstrap",
- version="9.99",
- source={"subdir": str(www_dir)},
- script={"src": "bootstrap.bundle.min.js"},
- stylesheet={"href": "theme.css"},
- ),
- ui.row(
- ui.column(
- 2,
- ui.row(
- button_style,
- ui.input_action_button("add_query", "Add Query"),
- ),
- ui.row(
- button_style,
- ui.input_action_button("remove_query", "Remove Query"),
- ),
- ),
- ui.column(
- 10,
- ui.tags.div(query_output_ui("initial_query"), id="module_container"),
- ),
- ),
-)
-
-
-def server(input, output, session):
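-    # Counter used to derive a unique module id for each dynamically added query panel.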
- mod_counter = reactive.Value(0)
-
- query_output_server("initial_query")
-
- @reactive.Effect
- @reactive.event(input.add_query)
- def _():
- counter = mod_counter.get() + 1
- mod_counter.set(counter)
- id = "query_" + str(counter)
- ui.insert_ui(
- selector="#module_container", where="afterBegin", ui=query_output_ui(id)
- )
- query_output_server(id)
-
- @reactive.Effect
- @reactive.event(input.remove_query)
- def _():
-        ui.remove_ui(selector="#module_container .row:first-child")
-
-
-app = App(app_ui, server)
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/__init__.py
deleted file mode 100644
index 914d0f6903cefec1236107346e59901ac9d64fd4..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/kie/extractors/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .sdmgr import SDMGR
-
-__all__ = ['SDMGR']
diff --git a/spaces/MrSalman/Image_captioning/app.py b/spaces/MrSalman/Image_captioning/app.py
deleted file mode 100644
index 39c09845f447b3e8027561ff5d050583d97e6b5c..0000000000000000000000000000000000000000
--- a/spaces/MrSalman/Image_captioning/app.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# impoprt packages
-import torch
-import requests
-from PIL import Image
-from transformers import BlipProcessor, BlipForConditionalGeneration, AutoTokenizer, pipeline
-import sentencepiece
-import gradio as gr
-
-# Image captioning model
-processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
-model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
-
-# Translate en to ar
-model_translater = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ar")
-
-# conditional image captioning (with prefix-)
-def image_captioning(image, prefix="a "):
- """ Return text (As str) to describe an image """
- # Process the image
- inputs = processor(image, prefix, return_tensors="pt")
-
- # Generate text to describe the image
- output = model.generate(**inputs)
-
- # Decode the output
- output = processor.decode(output[0], skip_special_tokens=True, max_length=80)
- return output
-
-def translate_text(text, to="ar"):
- """ Return translated text """
- translated_text = model_translater(str(text))
- return translated_text[0]['translation_text']
-
-def image_captioning_ar(image, prefix = "a "):
- if image:
- text = image_captioning(image, prefix=prefix)
- return text, translate_text(text)
-    return None, None
-
-input_image = gr.inputs.Image(type="pil", label = 'Upload your image')
-imageCaptioning_interface = gr.Interface(
- fn = image_captioning_ar,
- inputs=input_image,
- outputs=[gr.outputs.Textbox(label="Caption (en)"), gr.outputs.Textbox(label="Caption (ar)")],
- title = 'Image captioning',
-)
-imageCaptioning_interface.launch()
\ No newline at end of file
diff --git a/spaces/Mrchuw/text-to-image_6_by_6/css.css b/spaces/Mrchuw/text-to-image_6_by_6/css.css
deleted file mode 100644
index 45350b7c27b8177a67a10d66e3c5090df2cbdab5..0000000000000000000000000000000000000000
--- a/spaces/Mrchuw/text-to-image_6_by_6/css.css
+++ /dev/null
@@ -1,113 +0,0 @@
-.app.svelte-p7tiy3.svelte-p7tiy3{
- background:None;
-}
-.unpadded_box.large.svelte-1vhybi6{
- background:#6fbcffa8;
- min-height:100%;
-}
-span.svelte-1l2rj76{
-    color: white !important;
-}
-div.svelte-1fwqiwq .block{
- background:#4d8df1;
-}
-.lg.svelte-1h4gtph{
- background:#4d8df1;
- color:white;
- height:100px;
-}
-#restart{
- position: relative;
- font-family: "Poppins",sans-serif;
- text-align: center;
- border-radius: 8px;
- background: #0063f787;
- border-style: solid;
- border-width: 1px;
- border-color: #ffffff;
- width: 100%;
- height: 50%;
- max-height: 200px;
- padding: 0px 10px;
- transform: translate(-50%,0%);
- left: 50%;
-}
-#head{
- color:white;
- margin-top:15px;
- margin-bottom:5px;
-}
-#cont{
- color: white;
- margin-top: 5px;
- margin-bottom: 15px;
- font-size: 1.1rem;
-}
-
-.lds-ellipsis {
- display: inline-block;
- position: relative;
- width: 80px;
- height: 80px;
-
-}
-.lds-ellipsis div {
- position: absolute;
- z-index:199999;
-
- top: 33px;
- width: 13px;
- height: 13px;
- border-radius: 50%;
- background: blue;
- animation-timing-function: cubic-bezier(0, 1, 1, 0);
-}
-.lds-ellipsis div:nth-child(1) {
- left: 8px;
- animation: lds-ellipsis1 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(2) {
- left: 8px;
- animation: lds-ellipsis2 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(3) {
- left: 32px;
- animation: lds-ellipsis2 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(4) {
- left: 56px;
- animation: lds-ellipsis3 0.6s infinite;
-}
-@keyframes lds-ellipsis1 {
- 0% {
- transform: scale(0);
- }
- 100% {
- transform: scale(1);
- }
-}
-@keyframes lds-ellipsis3 {
-  0% {
-    transform: scale(1);
-  }
-  100% {
-    transform: scale(0);
-  }
-}
-@keyframes lds-ellipsis2 {
- 0% {
- transform: translate(0, 0);
- }
- 100% {
- transform: translate(24px, 0);
- }
-}
-
diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers_test.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers_test.py
deleted file mode 100644
index fd41e2d824c014084129707631d45de334ec741b..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/sequence_layers_test.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright 2017 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Tests for sequence_layers."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import numpy as np
-import tensorflow as tf
-from tensorflow.contrib import slim
-
-import model
-import sequence_layers
-
-
-def fake_net(batch_size, num_features, feature_size):
- return tf.convert_to_tensor(
- np.random.uniform(size=(batch_size, num_features, feature_size)),
- dtype=tf.float32)
-
-
-def fake_labels(batch_size, seq_length, num_char_classes):
- labels_np = tf.convert_to_tensor(
- np.random.randint(
- low=0, high=num_char_classes, size=(batch_size, seq_length)))
- return slim.one_hot_encoding(labels_np, num_classes=num_char_classes)
-
-
-def create_layer(layer_class, batch_size, seq_length, num_char_classes):
- model_params = model.ModelParams(
- num_char_classes=num_char_classes,
- seq_length=seq_length,
- num_views=1,
- null_code=num_char_classes)
- net = fake_net(
- batch_size=batch_size, num_features=seq_length * 5, feature_size=6)
- labels_one_hot = fake_labels(batch_size, seq_length, num_char_classes)
- layer_params = sequence_layers.SequenceLayerParams(
- num_lstm_units=10, weight_decay=0.00004, lstm_state_clip_value=10.0)
- return layer_class(net, labels_one_hot, model_params, layer_params)
-
-
-class SequenceLayersTest(tf.test.TestCase):
- def test_net_slice_char_logits_with_correct_shape(self):
- batch_size = 2
- seq_length = 4
- num_char_classes = 3
-
- layer = create_layer(sequence_layers.NetSlice, batch_size, seq_length,
- num_char_classes)
- char_logits = layer.create_logits()
-
- self.assertEqual(
- tf.TensorShape([batch_size, seq_length, num_char_classes]),
- char_logits.get_shape())
-
- def test_net_slice_with_autoregression_char_logits_with_correct_shape(self):
- batch_size = 2
- seq_length = 4
- num_char_classes = 3
-
- layer = create_layer(sequence_layers.NetSliceWithAutoregression,
- batch_size, seq_length, num_char_classes)
- char_logits = layer.create_logits()
-
- self.assertEqual(
- tf.TensorShape([batch_size, seq_length, num_char_classes]),
- char_logits.get_shape())
-
- def test_attention_char_logits_with_correct_shape(self):
- batch_size = 2
- seq_length = 4
- num_char_classes = 3
-
- layer = create_layer(sequence_layers.Attention, batch_size, seq_length,
- num_char_classes)
- char_logits = layer.create_logits()
-
- self.assertEqual(
- tf.TensorShape([batch_size, seq_length, num_char_classes]),
- char_logits.get_shape())
-
- def test_attention_with_autoregression_char_logits_with_correct_shape(self):
- batch_size = 2
- seq_length = 4
- num_char_classes = 3
-
- layer = create_layer(sequence_layers.AttentionWithAutoregression,
- batch_size, seq_length, num_char_classes)
- char_logits = layer.create_logits()
-
- self.assertEqual(
- tf.TensorShape([batch_size, seq_length, num_char_classes]),
- char_logits.get_shape())
-
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/NN520/AI/src/lib/bots/bing/tts.ts b/spaces/NN520/AI/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
- if (!synth || text?.trim()?.length < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
- private async doSpeek() {
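-  // Speak the portion of currentText that has not been spoken yet, cutting at the last sentence boundary.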
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
- await Promise.all([sleep(1000), this.doSpeek()])
- }
- this.speaking = false
- }
-}
diff --git a/spaces/Ntabukiraniro/Recipe/utils/ims2file.py b/spaces/Ntabukiraniro/Recipe/utils/ims2file.py
deleted file mode 100644
index 13007007fd936b4a02b500bb480a4dae84e6785e..0000000000000000000000000000000000000000
--- a/spaces/Ntabukiraniro/Recipe/utils/ims2file.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import pickle
-from tqdm import tqdm
-import os
-import numpy as np
-from PIL import Image
-import argparse
-import lmdb
-from torchvision import transforms
-
-
-MAX_SIZE = 1e12
-
-
-def load_and_resize(root, path, imscale):
-
- transf_list = []
- transf_list.append(transforms.Resize(imscale))
- transf_list.append(transforms.CenterCrop(imscale))
- transform = transforms.Compose(transf_list)
-
- img = Image.open(os.path.join(root, path[0], path[1], path[2], path[3], path)).convert('RGB')
- img = transform(img)
-
- return img
-
-
-def main(args):
-
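-    # Build one LMDB database per split containing resized, center-cropped images, and record each image's position index.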
- parts = {}
- datasets = {}
- imname2pos = {'train': {}, 'val': {}, 'test': {}}
- for split in ['train', 'val', 'test']:
- datasets[split] = pickle.load(open(os.path.join(args.save_dir, args.suff + 'recipe1m_' + split + '.pkl'), 'rb'))
-
- parts[split] = lmdb.open(os.path.join(args.save_dir, 'lmdb_'+split), map_size=int(MAX_SIZE))
- with parts[split].begin() as txn:
- present_entries = [key for key, _ in txn.cursor()]
- j = 0
- for i, entry in tqdm(enumerate(datasets[split])):
- impaths = entry['images'][0:5]
-
- for n, p in enumerate(impaths):
- if n == args.maxnumims:
- break
- if p.encode() not in present_entries:
- im = load_and_resize(os.path.join(args.root, 'images', split), p, args.imscale)
- im = np.array(im).astype(np.uint8)
- with parts[split].begin(write=True) as txn:
- txn.put(p.encode(), im)
- imname2pos[split][p] = j
- j += 1
- pickle.dump(imname2pos, open(os.path.join(args.save_dir, 'imname2pos.pkl'), 'wb'))
-
-
-def test(args):
-
- imname2pos = pickle.load(open(os.path.join(args.save_dir, 'imname2pos.pkl'), 'rb'))
- paths = imname2pos['val']
-
- for k, v in paths.items():
- path = k
- break
- image_file = lmdb.open(os.path.join(args.save_dir, 'lmdb_' + 'val'), max_readers=1, readonly=True,
- lock=False, readahead=False, meminit=False)
- with image_file.begin(write=False) as txn:
- image = txn.get(path.encode())
-        image = np.frombuffer(image, dtype=np.uint8)
- image = np.reshape(image, (args.imscale, args.imscale, 3))
- image = Image.fromarray(image.astype('uint8'), 'RGB')
- print (np.shape(image))
-
-
-if __name__ == "__main__":
-
- parser = argparse.ArgumentParser()
- parser.add_argument('--root', type=str, default='path/to/recipe1m',
- help='path to the recipe1m dataset')
- parser.add_argument('--save_dir', type=str, default='../data',
- help='path where the lmdbs will be saved')
- parser.add_argument('--imscale', type=int, default=256,
- help='size of images (will be rescaled and center cropped)')
- parser.add_argument('--maxnumims', type=int, default=5,
- help='maximum number of images to allow for each sample')
- parser.add_argument('--suff', type=str, default='',
- help='id of the vocabulary to use')
- parser.add_argument('--test_only', dest='test_only', action='store_true')
- parser.set_defaults(test_only=False)
- args = parser.parse_args()
-
- if not args.test_only:
- main(args)
- test(args)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/bart/summarize.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/bart/summarize.py
deleted file mode 100644
index 04435f80e39c2d9d894696dae7cba5b381e13da9..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/bart/summarize.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq.models.bart import BARTModel
-import argparse
-
-XSUM_KWARGS = dict(beam=6, lenpen=1.0, max_len_b=60, min_len=10, no_repeat_ngram_size=3)
-CNN_KWARGS = dict(beam=4, lenpen=2.0, max_len_b=140, min_len=55, no_repeat_ngram_size=3)
-
-
-@torch.no_grad()
-def generate(bart, infile, outfile="bart_hypo.txt", bsz=32, n_obs=None, **eval_kwargs):
- count = 1
-
- # if n_obs is not None: bsz = min(bsz, n_obs)
-
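-    # Read source lines, batch them into groups of bsz, and write one generated summary per input line.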
- with open(infile) as source, open(outfile, "w") as fout:
- sline = source.readline().strip()
- slines = [sline]
- for sline in source:
- if n_obs is not None and count > n_obs:
- break
- if count % bsz == 0:
- hypotheses_batch = bart.sample(slines, **eval_kwargs)
- for hypothesis in hypotheses_batch:
- fout.write(hypothesis + "\n")
- fout.flush()
- slines = []
-
- slines.append(sline.strip())
- count += 1
-
- if slines != []:
- hypotheses_batch = bart.sample(slines, **eval_kwargs)
- for hypothesis in hypotheses_batch:
- fout.write(hypothesis + "\n")
- fout.flush()
-
-
-def main():
- """
- Usage::
-
- python examples/bart/summarize.py \
- --model-dir $HOME/bart.large.cnn \
- --model-file model.pt \
- --src $HOME/data-bin/cnn_dm/test.source
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--model-dir",
- required=True,
- type=str,
- default="bart.large.cnn/",
- help="path containing model file and src_dict.txt",
- )
- parser.add_argument(
- "--model-file",
- default="checkpoint_best.pt",
- help="where in model_dir are weights saved",
- )
- parser.add_argument(
- "--src", default="test.source", help="text to summarize", type=str
- )
- parser.add_argument(
- "--out", default="test.hypo", help="where to save summaries", type=str
- )
-    parser.add_argument("--bsz", default=32, help="batch size", type=int)
- parser.add_argument(
- "--n", default=None, help="how many examples to summarize", type=int
- )
- parser.add_argument(
- "--xsum-kwargs",
- action="store_true",
- default=False,
- help="if true use XSUM_KWARGS else CNN_KWARGS",
- )
- args = parser.parse_args()
- eval_kwargs = XSUM_KWARGS if args.xsum_kwargs else CNN_KWARGS
- if args.model_dir == "pytorch/fairseq":
- bart = torch.hub.load("pytorch/fairseq", args.model_file)
- else:
- bart = BARTModel.from_pretrained(
- args.model_dir,
- checkpoint_file=args.model_file,
- data_name_or_path=args.model_dir,
- )
- bart = bart.eval()
- if torch.cuda.is_available():
- bart = bart.cuda().half()
- generate(
- bart, args.src, bsz=args.bsz, n_obs=args.n, outfile=args.out, **eval_kwargs
- )
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py
deleted file mode 100644
index 516f2cc469af9b417126dea1988698adac41d8ab..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py
+++ /dev/null
@@ -1,233 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-from pathlib import Path
-import shutil
-from tempfile import NamedTemporaryFile
-from collections import Counter, defaultdict
-
-import pandas as pd
-import torchaudio
-from tqdm import tqdm
-
-from fairseq.data.audio.audio_utils import convert_waveform
-from examples.speech_to_text.data_utils import (
- create_zip,
- gen_config_yaml,
- gen_vocab,
- get_zip_manifest,
- load_tsv_to_dicts,
- save_df_to_tsv
-)
-from examples.speech_synthesis.data_utils import (
- extract_logmel_spectrogram, extract_pitch, extract_energy, get_global_cmvn,
- ipa_phonemize, get_mfa_alignment, get_unit_alignment
-)
-
-
-log = logging.getLogger(__name__)
-
-
-def process(args):
- assert "train" in args.splits
- out_root = Path(args.output_root).absolute()
- out_root.mkdir(exist_ok=True)
-
- print("Fetching data...")
- audio_manifest_root = Path(args.audio_manifest_root).absolute()
- samples = []
- for s in args.splits:
- for e in load_tsv_to_dicts(audio_manifest_root / f"{s}.audio.tsv"):
- e["split"] = s
- samples.append(e)
- sample_ids = [s["id"] for s in samples]
-
- # Get alignment info
- id_to_alignment = None
- if args.textgrid_zip is not None:
- assert args.id_to_units_tsv is None
- id_to_alignment = get_mfa_alignment(
- args.textgrid_zip, sample_ids, args.sample_rate, args.hop_length
- )
- elif args.id_to_units_tsv is not None:
- # assume identical hop length on the unit sequence
- id_to_alignment = get_unit_alignment(args.id_to_units_tsv, sample_ids)
-
- # Extract features and pack features into ZIP
- feature_name = "logmelspec80"
- zip_path = out_root / f"{feature_name}.zip"
- pitch_zip_path = out_root / "pitch.zip"
- energy_zip_path = out_root / "energy.zip"
- gcmvn_npz_path = out_root / "gcmvn_stats.npz"
- if zip_path.exists() and gcmvn_npz_path.exists():
- print(f"{zip_path} and {gcmvn_npz_path} exist.")
- else:
- feature_root = out_root / feature_name
- feature_root.mkdir(exist_ok=True)
- pitch_root = out_root / "pitch"
- energy_root = out_root / "energy"
- if args.add_fastspeech_targets:
- pitch_root.mkdir(exist_ok=True)
- energy_root.mkdir(exist_ok=True)
- print("Extracting Mel spectrogram features...")
- for sample in tqdm(samples):
- waveform, sample_rate = torchaudio.load(sample["audio"])
- waveform, sample_rate = convert_waveform(
- waveform, sample_rate, normalize_volume=args.normalize_volume,
- to_sample_rate=args.sample_rate
- )
- sample_id = sample["id"]
- target_length = None
- if id_to_alignment is not None:
- a = id_to_alignment[sample_id]
- target_length = sum(a.frame_durations)
- if a.start_sec is not None and a.end_sec is not None:
- start_frame = int(a.start_sec * sample_rate)
- end_frame = int(a.end_sec * sample_rate)
- waveform = waveform[:, start_frame: end_frame]
- extract_logmel_spectrogram(
- waveform, sample_rate, feature_root / f"{sample_id}.npy",
- win_length=args.win_length, hop_length=args.hop_length,
- n_fft=args.n_fft, n_mels=args.n_mels, f_min=args.f_min,
- f_max=args.f_max, target_length=target_length
- )
- if args.add_fastspeech_targets:
- assert id_to_alignment is not None
- extract_pitch(
- waveform, sample_rate, pitch_root / f"{sample_id}.npy",
- hop_length=args.hop_length, log_scale=True,
- phoneme_durations=id_to_alignment[sample_id].frame_durations
- )
- extract_energy(
- waveform, energy_root / f"{sample_id}.npy",
- hop_length=args.hop_length, n_fft=args.n_fft,
- log_scale=True,
- phoneme_durations=id_to_alignment[sample_id].frame_durations
- )
- print("ZIPing features...")
- create_zip(feature_root, zip_path)
- get_global_cmvn(feature_root, gcmvn_npz_path)
- shutil.rmtree(feature_root)
- if args.add_fastspeech_targets:
- create_zip(pitch_root, pitch_zip_path)
- shutil.rmtree(pitch_root)
- create_zip(energy_root, energy_zip_path)
- shutil.rmtree(energy_root)
-
- print("Fetching ZIP manifest...")
- audio_paths, audio_lengths = get_zip_manifest(zip_path)
- pitch_paths, pitch_lengths, energy_paths, energy_lengths = [None] * 4
- if args.add_fastspeech_targets:
- pitch_paths, pitch_lengths = get_zip_manifest(pitch_zip_path)
- energy_paths, energy_lengths = get_zip_manifest(energy_zip_path)
- # Generate TSV manifest
- print("Generating manifest...")
- manifest_by_split = {split: defaultdict(list) for split in args.splits}
- for sample in tqdm(samples):
- sample_id, split = sample["id"], sample["split"]
- normalized_utt = sample["tgt_text"]
- if id_to_alignment is not None:
- normalized_utt = " ".join(id_to_alignment[sample_id].tokens)
- elif args.ipa_vocab:
- normalized_utt = ipa_phonemize(
- normalized_utt, lang=args.lang, use_g2p=args.use_g2p
- )
- manifest_by_split[split]["id"].append(sample_id)
- manifest_by_split[split]["audio"].append(audio_paths[sample_id])
- manifest_by_split[split]["n_frames"].append(audio_lengths[sample_id])
- manifest_by_split[split]["tgt_text"].append(normalized_utt)
- manifest_by_split[split]["speaker"].append(sample["speaker"])
- manifest_by_split[split]["src_text"].append(sample["src_text"])
- if args.add_fastspeech_targets:
- assert id_to_alignment is not None
- duration = " ".join(
- str(d) for d in id_to_alignment[sample_id].frame_durations
- )
- manifest_by_split[split]["duration"].append(duration)
- manifest_by_split[split]["pitch"].append(pitch_paths[sample_id])
- manifest_by_split[split]["energy"].append(energy_paths[sample_id])
- for split in args.splits:
- save_df_to_tsv(
- pd.DataFrame.from_dict(manifest_by_split[split]),
- out_root / f"{split}.tsv"
- )
- # Generate vocab
- vocab_name, spm_filename = None, None
- if id_to_alignment is not None or args.ipa_vocab:
- vocab = Counter()
- for t in manifest_by_split["train"]["tgt_text"]:
- vocab.update(t.split(" "))
- vocab_name = "vocab.txt"
- with open(out_root / vocab_name, "w") as f:
- for s, c in vocab.most_common():
- f.write(f"{s} {c}\n")
- else:
- spm_filename_prefix = "spm_char"
- spm_filename = f"{spm_filename_prefix}.model"
- with NamedTemporaryFile(mode="w") as f:
- for t in manifest_by_split["train"]["tgt_text"]:
- f.write(t + "\n")
- f.flush() # needed to ensure gen_vocab sees dumped text
- gen_vocab(Path(f.name), out_root / spm_filename_prefix, "char")
- # Generate speaker list
- speakers = sorted({sample["speaker"] for sample in samples})
- speakers_path = out_root / "speakers.txt"
- with open(speakers_path, "w") as f:
- for speaker in speakers:
- f.write(f"{speaker}\n")
- # Generate config YAML
- win_len_t = args.win_length / args.sample_rate
- hop_len_t = args.hop_length / args.sample_rate
- extra = {
- "sample_rate": args.sample_rate,
- "features": {
- "type": "spectrogram+melscale+log",
- "eps": 1e-2, "n_mels": args.n_mels, "n_fft": args.n_fft,
- "window_fn": "hann", "win_length": args.win_length,
- "hop_length": args.hop_length, "sample_rate": args.sample_rate,
- "win_len_t": win_len_t, "hop_len_t": hop_len_t,
- "f_min": args.f_min, "f_max": args.f_max,
- "n_stft": args.n_fft // 2 + 1
- }
- }
- if len(speakers) > 1:
- extra["speaker_set_filename"] = "speakers.txt"
- gen_config_yaml(
- out_root, spm_filename=spm_filename, vocab_name=vocab_name,
- audio_root=out_root.as_posix(), input_channels=None,
- input_feat_per_channel=None, specaugment_policy=None,
- cmvn_type="global", gcmvn_path=gcmvn_npz_path, extra=extra
- )
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("--audio-manifest-root", "-m", required=True, type=str)
- parser.add_argument("--output-root", "-o", required=True, type=str)
- parser.add_argument("--splits", "-s", type=str, nargs="+",
- default=["train", "dev", "test"])
- parser.add_argument("--ipa-vocab", action="store_true")
- parser.add_argument("--use-g2p", action="store_true")
- parser.add_argument("--lang", type=str, default="en-us")
- parser.add_argument("--win-length", type=int, default=1024)
- parser.add_argument("--hop-length", type=int, default=256)
- parser.add_argument("--n-fft", type=int, default=1024)
- parser.add_argument("--n-mels", type=int, default=80)
- parser.add_argument("--f-min", type=int, default=20)
- parser.add_argument("--f-max", type=int, default=8000)
- parser.add_argument("--sample-rate", type=int, default=22050)
- parser.add_argument("--normalize-volume", "-n", action="store_true")
- parser.add_argument("--textgrid-zip", type=str, default=None)
- parser.add_argument("--id-to-units-tsv", type=str, default=None)
- parser.add_argument("--add-fastspeech-targets", action="store_true")
- args = parser.parse_args()
-
- process(args)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/pca.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/pca.py
deleted file mode 100644
index 948cf5319fd86ba1bccff65270b2881048faf9b1..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/pca.py
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-import os.path as osp
-import numpy as np
-
-import faiss
-
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
- description="compute a pca matrix given an array of numpy features"
- )
- # fmt: off
- parser.add_argument('data', help='numpy file containing features')
- parser.add_argument('--output', help='where to save the pca matrix', required=True)
- parser.add_argument('--dim', type=int, help='dim for pca reduction', required=True)
- parser.add_argument('--eigen-power', type=float, default=0, help='eigen power, -0.5 for whitening')
-
- return parser
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- print("Reading features")
- x = np.load(args.data, mmap_mode="r")
-
- print("Computing PCA")
- pca = faiss.PCAMatrix(x.shape[-1], args.dim, args.eigen_power)
- pca.train(x)
- b = faiss.vector_to_array(pca.b)
- A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in)
-
- os.makedirs(args.output, exist_ok=True)
-
- prefix = str(args.dim)
- if args.eigen_power != 0:
- prefix += f"_{args.eigen_power}"
-
- np.save(osp.join(args.output, f"{prefix}_pca_A"), A.T)
- np.save(osp.join(args.output, f"{prefix}_pca_b"), b)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_masked_lm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_masked_lm.py
deleted file mode 100644
index 12b9c5d0f55993bf8750564882a351fc3f8055f0..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_masked_lm.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from dataclasses import dataclass, field
-from typing import Optional
-
-import torch
-from omegaconf import II
-
-from .dummy_dataset import DummyDataset
-from fairseq.data import Dictionary
-from fairseq.dataclass import FairseqDataclass
-from fairseq.tasks import FairseqTask, register_task
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class DummyMaskedLMConfig(FairseqDataclass):
- dict_size: int = 49996
- dataset_size: int = 100000
- tokens_per_sample: int = field(
- default=512,
- metadata={
- "help": "max number of total tokens over all"
- " segments per sample for BERT dataset"
- },
- )
- batch_size: Optional[int] = II("dataset.batch_size")
- max_tokens: Optional[int] = II("dataset.max_tokens")
- max_target_positions: int = II("task.tokens_per_sample")
-
-
-@register_task("dummy_masked_lm", dataclass=DummyMaskedLMConfig)
-class DummyMaskedLMTask(FairseqTask):
- def __init__(self, cfg: DummyMaskedLMConfig):
- super().__init__(cfg)
-
- self.dictionary = Dictionary()
- for i in range(cfg.dict_size):
- self.dictionary.add_symbol("word{}".format(i))
- logger.info("dictionary: {} types".format(len(self.dictionary)))
- # add mask token
- self.mask_idx = self.dictionary.add_symbol("")
- self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8
-
- mask_idx = 0
- pad_idx = 1
- seq = torch.arange(cfg.tokens_per_sample) + pad_idx + 1
- mask = torch.arange(2, cfg.tokens_per_sample, 7) # ~15%
- src = seq.clone()
- src[mask] = mask_idx
- tgt = torch.full_like(seq, pad_idx)
- tgt[mask] = seq[mask]
-
- self.dummy_src = src
- self.dummy_tgt = tgt
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- if self.cfg.batch_size is not None:
- bsz = self.cfg.batch_size
- else:
- bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample)
- self.datasets[split] = DummyDataset(
- {
- "id": 1,
- "net_input": {
- "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]),
- "src_lengths": torch.full(
- (bsz,), self.cfg.tokens_per_sample, dtype=torch.long
- ),
- },
- "target": torch.stack([self.dummy_tgt for _ in range(bsz)]),
- "nsentences": bsz,
- "ntokens": bsz * self.cfg.tokens_per_sample,
- },
- num_items=self.cfg.dataset_size,
- item_size=self.cfg.tokens_per_sample,
- )
-
- @property
- def source_dictionary(self):
- return self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
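The constructor above builds one fixed masked-LM example and reuses it for every item: roughly every 7th position is masked, and only those positions receive a non-pad target. A standalone sketch of that pattern (the 512-token length is just the config default):

```python
# Standalone sketch of the dummy masking pattern built in __init__ above.
import torch

tokens_per_sample = 512                                 # cfg.tokens_per_sample default
mask_idx, pad_idx = 0, 1
seq = torch.arange(tokens_per_sample) + pad_idx + 1     # fake token ids 2..513
mask = torch.arange(2, tokens_per_sample, 7)            # every 7th position
src = seq.clone()
src[mask] = mask_idx                                     # masked input
tgt = torch.full_like(seq, pad_idx)
tgt[mask] = seq[mask]                                    # targets only at masked slots
print(len(mask) / tokens_per_sample)                     # ~0.14, i.e. roughly 15%
```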
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/layer_norm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/layer_norm.py
deleted file mode 100644
index 234609d9e213a650e0032aaa0ca0462a818bfead..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/layer_norm.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-try:
- from apex.normalization import FusedLayerNorm as _FusedLayerNorm
-
- has_fused_layernorm = True
-
- class FusedLayerNorm(_FusedLayerNorm):
- @torch.jit.unused
- def forward(self, x):
- if not x.is_cuda:
- return super().forward(x)
- else:
- with torch.cuda.device(x.device):
- return super().forward(x)
-
-
-except ImportError:
- has_fused_layernorm = False
-
-
-def LayerNorm(normalized_shape, eps=1e-5, elementwise_affine=True, export=False):
- if torch.jit.is_scripting():
- export = True
- if not export and torch.cuda.is_available() and has_fused_layernorm:
- return FusedLayerNorm(normalized_shape, eps, elementwise_affine)
- return torch.nn.LayerNorm(normalized_shape, eps, elementwise_affine)
-
-
-class Fp32LayerNorm(nn.LayerNorm):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def forward(self, input):
- output = F.layer_norm(
- input.float(),
- self.normalized_shape,
- self.weight.float() if self.weight is not None else None,
- self.bias.float() if self.bias is not None else None,
- self.eps,
- )
- return output.type_as(input)
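The factory above transparently swaps in apex's fused kernel when it is available and the tensor lives on a GPU, and otherwise falls back to `torch.nn.LayerNorm`; `Fp32LayerNorm` additionally runs the normalization in fp32 and casts the result back. A small usage sketch, assuming the exports under `fairseq.modules`:

```python
# Usage sketch (assumes fairseq is installed and exports these names from fairseq.modules).
import torch
from fairseq.modules import LayerNorm, Fp32LayerNorm

x = torch.randn(2, 10, 512)

norm = LayerNorm(512)              # FusedLayerNorm if apex + CUDA are available, else nn.LayerNorm
print(norm(x).shape)               # torch.Size([2, 10, 512])

# Useful with fp16 activations: normalization is computed in fp32, output cast back to fp16.
fp32_norm = Fp32LayerNorm(512)
print(fp32_norm(x.half()).dtype)   # torch.float16
```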
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/eval_lm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/eval_lm.py
deleted file mode 100644
index ab6e77029ef738291efd190b1cfe2435dd403dea..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/eval_lm.py
+++ /dev/null
@@ -1,347 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Evaluate the perplexity of a trained language model.
-"""
-
-import logging
-import math
-import os
-import sys
-from argparse import Namespace
-from typing import Iterable, List, Optional
-
-import torch
-import fairseq
-from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.logging import progress_bar
-from fairseq.logging.meters import StopwatchMeter
-from fairseq.sequence_scorer import SequenceScorer
-from omegaconf import DictConfig
-
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("fairseq_cli.eval_lm")
-
-
-def eval_lm(
- models: List[fairseq.models.FairseqModel],
- source_dictionary: fairseq.data.Dictionary,
- batch_iterator: Iterable,
- post_process: Optional[str] = None,
- output_word_probs: bool = False,
- output_word_stats: bool = False,
- target_dictionary: Optional[fairseq.data.Dictionary] = None,
- softmax_batch: int = 0,
- remove_bos_token: bool = False,
- device: Optional[torch.device] = None,
-):
- """
- Args:
- models (List[~fairseq.models.FairseqModel]): list of models to
- evaluate. Models are essentially `nn.Module` instances, but
- must be compatible with fairseq's `SequenceScorer`.
- source_dictionary (~fairseq.data.Dictionary): dictionary for
- applying any relevant post-processing or outputting word
- probs/stats.
- batch_iterator (Iterable): yields batches of data
- post_process (Optional[str]): post-process text by removing BPE,
- letter segmentation, etc. Valid options can be found in
- fairseq.data.utils.post_process, although not all options
- are implemented here.
- output_word_probs (Optional[bool]): output words and their
- predicted log probabilities
- output_word_stats (Optional[bool]): output word statistics such
- as word count and average probability
- target_dictionary (Optional[~fairseq.data.Dictionary]): output
- dictionary (defaults to *source_dictionary*)
- softmax_batch (Optional[int]): if BxT is more than this, will
- batch the softmax over vocab to this amount of tokens, in
- order to fit into GPU memory
- remove_bos_token (Optional[bool]): if True, confirm that the
- first token is the beginning-of-sentence symbol (according
- to the relevant dictionary) and remove it from the output
- device (Optional[torch.device]): device to use for evaluation
- (defaults to device of first model parameter)
- """
- if target_dictionary is None:
- target_dictionary = source_dictionary
- if device is None:
- device = next(models[0].parameters()).device
-
- gen_timer = StopwatchMeter()
- scorer = SequenceScorer(target_dictionary, softmax_batch)
-
- score_sum = 0.0
- count = 0
-
- if post_process is not None:
- if post_process in {"subword_nmt", "@@ "}:
- bpe_cont = post_process.rstrip()
- bpe_toks = {
- i
- for i in range(len(source_dictionary))
- if source_dictionary[i].endswith(bpe_cont)
- }
- else:
- raise NotImplementedError(
- "--post-process={post_process} is not implemented"
- )
- bpe_len = len(bpe_cont)
- else:
- bpe_toks = None
- bpe_len = 0
-
- word_stats = dict()
-
- for sample in batch_iterator:
- if "net_input" not in sample:
- continue
-
- sample = utils.move_to_cuda(sample, device=device)
-
- gen_timer.start()
- hypos = scorer.generate(models, sample)
- gen_timer.stop(sample["ntokens"])
-
- for i, hypos_i in enumerate(hypos):
- hypo = hypos_i[0]
- sample_id = sample["id"][i]
-
- tokens = hypo["tokens"]
- tgt_len = tokens.numel()
- pos_scores = hypo["positional_scores"].float()
-
- if remove_bos_token:
- assert hypo["tokens"][0].item() == target_dictionary.bos()
- tokens = tokens[1:]
- pos_scores = pos_scores[1:]
-
- skipped_toks = 0
- if bpe_toks is not None:
- for i in range(tgt_len - 1):
- if tokens[i].item() in bpe_toks:
- skipped_toks += 1
- pos_scores[i + 1] += pos_scores[i]
- pos_scores[i] = 0
-
- inf_scores = pos_scores.eq(float("inf")) | pos_scores.eq(float("-inf"))
- if inf_scores.any():
- logger.info(
- "skipping tokens with inf scores:",
- target_dictionary.string(tokens[inf_scores.nonzero()]),
- )
- pos_scores = pos_scores[(~inf_scores).nonzero()]
- score_sum += pos_scores.sum().cpu()
- count += pos_scores.numel() - skipped_toks
-
- if output_word_probs or output_word_stats:
- w = ""
- word_prob = []
- is_bpe = False
- for i in range(len(tokens)):
- w_ind = tokens[i].item()
- w += source_dictionary[w_ind]
- if bpe_toks is not None and w_ind in bpe_toks:
- w = w[:-bpe_len]
- is_bpe = True
- else:
- word_prob.append((w, pos_scores[i].item()))
-
- next_prob = None
- ind = i + 1
- while ind < len(tokens):
- if pos_scores[ind].item() != 0:
- next_prob = pos_scores[ind]
- break
- ind += 1
-
- word_stats.setdefault(w, WordStat(w, is_bpe)).add(
- pos_scores[i].item(), next_prob
- )
- is_bpe = False
- w = ""
- if output_word_probs:
- logger.info(
- str(int(sample_id))
- + " "
- + (
- "\t".join(
- "{} [{:2f}]".format(x[0], x[1]) for x in word_prob
- )
- )
- )
-
- avg_nll_loss = (
- -score_sum / count / math.log(2) if count > 0 else 0
- ) # convert to base 2
- logger.info(
- "Evaluated {:,} tokens in {:.1f}s ({:.2f} tokens/s)".format(
- gen_timer.n, gen_timer.sum, 1.0 / gen_timer.avg if gen_timer.avg > 0 else 0
- )
- )
-
- if output_word_stats:
- for ws in sorted(word_stats.values(), key=lambda x: x.count, reverse=True):
- logger.info(ws)
-
- return {
- "loss": avg_nll_loss,
- "perplexity": 2 ** avg_nll_loss,
- }
-
-
-class WordStat(object):
- def __init__(self, word, is_bpe):
- self.word = word
- self.is_bpe = is_bpe
- self.log_prob = 0
- self.next_word_prob = 0
- self.count = 0
- self.missing_next_words = 0
-
- def add(self, log_prob, next_word_prob):
- """increments counters for the sum of log probs of current word and next
- word (given context ending at current word). Since the next word might be at the end of the example,
- or it might not be counted because it is not an ending subword unit,
- also keeps track of how many of those we have seen"""
- if next_word_prob is not None:
- self.next_word_prob += next_word_prob
- else:
- self.missing_next_words += 1
- self.log_prob += log_prob
- self.count += 1
-
- def __str__(self):
- return "{}\t{}\t{}\t{}\t{}\t{}".format(
- self.word,
- self.count,
- self.log_prob,
- self.is_bpe,
- self.next_word_prob,
- self.count - self.missing_next_words,
- )
-
-
-def main(cfg: DictConfig, **unused_kwargs):
- if isinstance(cfg, Namespace):
- cfg = convert_namespace_to_omegaconf(cfg)
-
- utils.import_user_module(cfg.common)
-
- logger.info(cfg)
-
- if cfg.eval_lm.context_window > 0:
- # reduce tokens per sample by the required context window size
- cfg.task.tokens_per_sample -= cfg.eval_lm.context_window
-
- # Initialize the task using the current *cfg*
- task = tasks.setup_task(cfg.task)
-
- # Load ensemble
- logger.info("loading model(s) from {}".format(cfg.common_eval.path))
- models, model_args, task = checkpoint_utils.load_model_ensemble_and_task(
- [cfg.common_eval.path],
- arg_overrides=eval(cfg.common_eval.model_overrides),
- suffix=cfg.checkpoint.checkpoint_suffix,
- strict=(cfg.checkpoint.checkpoint_shard_count == 1),
- num_shards=cfg.checkpoint.checkpoint_shard_count,
- task=task,
- )
-
- use_fp16 = cfg.common.fp16
- use_cuda = torch.cuda.is_available() and not cfg.common.cpu
- if use_cuda:
- torch.cuda.set_device(cfg.distributed_training.device_id)
-
- # Optimize ensemble for generation and set the source and dest dicts on the model
- # (required by scorer)
- for model in models:
- if use_fp16:
- model.half()
- if use_cuda and not cfg.distributed_training.pipeline_model_parallel:
- model.cuda()
- model.prepare_for_inference_(cfg)
-
- assert len(models) > 0
-
- logger.info(
- "num. model params: {:,}".format(sum(p.numel() for p in models[0].parameters()))
- )
-
- # Load dataset splits
- task.load_dataset(cfg.dataset.gen_subset)
- dataset = task.dataset(cfg.dataset.gen_subset)
- logger.info(
- "{} {} {:,} examples".format(
- cfg.task.data, cfg.dataset.gen_subset, len(dataset)
- )
- )
-
- itr = task.eval_lm_dataloader(
- dataset=dataset,
- max_tokens=cfg.dataset.max_tokens or 36000,
- batch_size=cfg.dataset.batch_size,
- max_positions=utils.resolve_max_positions(
- *[model.max_positions() for model in models]
- ),
- num_shards=max(
- cfg.dataset.num_shards,
- cfg.distributed_training.distributed_world_size,
- ),
- shard_id=max(
- cfg.dataset.shard_id,
- cfg.distributed_training.distributed_rank,
- ),
- num_workers=cfg.dataset.num_workers,
- data_buffer_size=cfg.dataset.data_buffer_size,
- context_window=cfg.eval_lm.context_window,
- )
-
- itr = progress_bar.progress_bar(
- itr,
- log_format=cfg.common.log_format,
- log_interval=cfg.common.log_interval,
- default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"),
- )
-
- results = eval_lm(
- models=models,
- source_dictionary=task.source_dictionary,
- batch_iterator=itr,
- post_process=cfg.common_eval.post_process,
- output_word_probs=cfg.eval_lm.output_word_probs,
- output_word_stats=cfg.eval_lm.output_word_stats,
- target_dictionary=task.target_dictionary,
- softmax_batch=cfg.eval_lm.softmax_batch,
- remove_bos_token=getattr(cfg.task, "add_bos_token", False),
- )
-
- logger.info(
- "Loss (base 2): {:.4f}, Perplexity: {:.2f}".format(
- results["loss"], results["perplexity"]
- )
- )
-
- return results
-
-
-def cli_main():
- parser = options.get_eval_lm_parser()
- args = options.parse_args_and_arch(parser)
-
- distributed_utils.call_main(convert_namespace_to_omegaconf(args), main)
-
-
-if __name__ == "__main__":
- cli_main()
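`eval_lm` above accumulates natural-log token scores, converts them to an average base-2 loss, then exponentiates into perplexity. A toy numeric check of that conversion (the numbers are made up):

```python
# Toy numbers illustrating the loss/perplexity conversion in eval_lm above.
import math

score_sum = -3465.7   # hypothetical sum of natural-log token scores
count = 1000          # hypothetical number of scored tokens

avg_nll_base2 = -score_sum / count / math.log(2)   # nats per token -> bits per token
perplexity = 2 ** avg_nll_base2
print(f"Loss (base 2): {avg_nll_base2:.4f}, Perplexity: {perplexity:.2f}")  # ~5.00, ~32
```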
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/layerdrop/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/layerdrop/README.md
deleted file mode 100644
index 4d48ee9615e1458e1e889635dc9938e427a7f64a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/layerdrop/README.md
+++ /dev/null
@@ -1,154 +0,0 @@
-# Reducing Transformer Depth on Demand with Structured Dropout (Fan et al., 2019)
-This page contains information for how to train models with LayerDrop, based on this [paper](https://arxiv.org/abs/1909.11556).
-
-## Citation:
-If you found this technique useful, please cite our paper:
-```bibtex
-@article{fan2019reducing,
- title={Reducing Transformer Depth on Demand with Structured Dropout},
- author={Fan, Angela and Grave, Edouard and Joulin, Armand},
- journal={arXiv preprint arXiv:1909.11556},
- year={2019}
-}
-```
-
-## Pre-trained models
-
-Model | Description | Download
----|---|---
-`layerdrop_wmt_en_de_12_6` | Transformer + LayerDrop 0.2 trained on WMT16 en-de with 12 encoder and 6 decoder layers | [layerdrop_wmt_en_de_12_6.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/layerdrop_wmt_en_de_12_6.tar.gz)
-`roberta_layerdrop.base` | RoBERTa Base + LayerDrop 0.2 | [roberta_layerdrop.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.base.qnli.tar.gz)
-`roberta_layerdrop.large` | RoBERTa Large + LayerDrop 0.2 | [roberta_layerdrop.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.tar.gz)
-`roberta_layerdrop.large.mnli` | `roberta_layerdrop.large` finetuned on [MNLI](http://www.nyu.edu/projects/bowman/multinli) | [roberta_layerdrop.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.mnli.tar.gz)
-`roberta_layerdrop.large.qnli` | `roberta_layerdrop.large` finetuned on [QNLI](https://arxiv.org/abs/1804.07461) | [roberta_layerdrop.large.qnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.qnli.tar.gz)
-
-
-Evaluate performance of these pre-trained models:
-```bash
-# Example for Machine Translation
-fairseq-generate /path/to/bped/wmt/data --path nmt_checkpoint.pt \
- --beam 8 --lenpen 0.4 \
- --batch-size 64 \
- --remove-bpe \
- --gen-subset test > wmt16_gen.txt
-bash scripts/compound_split_bleu.sh wmt16_gen.txt
-# prints BLEU4 = 30.17
-```
-
-```python
-# Example for RoBERTa + LayerDrop finetuned on MNLI:
-from fairseq.models.roberta import RobertaModel
-
-roberta_layerdrop = RobertaModel.from_pretrained(
- '/path/to/MNLI/model',
- checkpoint_file='mnli_checkpoint.pt',
- data_name_or_path='/path/to/MNLI/data/MNLI-bin'
-)
-label_map = {0: 'contradiction', 2: 'neutral', 1: 'entailment'}
-ncorrect, nsamples = 0, 0
-roberta_layerdrop.cuda()
-roberta_layerdrop.eval()
-with open('/path/to/MNLI/data/dev_matched.tsv') as fin:
- fin.readline()
- for index, line in enumerate(fin):
- tokens = line.strip().split('\t')
- sent1, sent2, target = tokens[8], tokens[9], tokens[-1]
- tokens = roberta_layerdrop.encode(sent1, sent2)
- prediction = roberta_layerdrop.predict('sentence_classification_head', tokens).argmax().item()
- prediction_label = label_map[prediction]
- ncorrect += int(prediction_label == target)
- nsamples += 1
-print('| Accuracy: ', float(ncorrect)/float(nsamples))
-# prints | Accuracy: 0.9026999490575649
-
-
-# Example for RoBERTa + LayerDrop finetuned on QNLI:
-roberta = RobertaModel.from_pretrained(
- '/path/to/QNLI/model',
- checkpoint_file='qnli_checkpoint.pt',
- data_name_or_path='/path/to/QNLI/data/QNLI-bin'
-)
-
-label_fn = lambda label: roberta.task.label_dictionary.string(
- [label + roberta.task.target_dictionary.nspecial]
-)
-ncorrect, nsamples = 0, 0
-roberta.cuda()
-roberta.eval()
-with open('/path/to/QNLI/data/dev.tsv') as fin:
- fin.readline()
- for index, line in enumerate(fin):
- tokens = line.strip().split('\t')
- sent1, sent2, target = tokens[1], tokens[2], tokens[3]
- tokens = roberta.encode(sent1, sent2)
- prediction = roberta.predict('sentence_classification_head', tokens).argmax().item()
- prediction_label = label_fn(prediction)
- ncorrect += int(prediction_label == target)
- nsamples += 1
-print('| Accuracy: ', float(ncorrect)/float(nsamples))
-# prints | Accuracy: 0.9480139117700896
-```
-
-
-## Example usage
-
-To train a model with LayerDrop, add the following flags. We recommend 0.2, a value that worked well in our experiments. For decoder-only Language Models, only the decoder flag is needed; for encoder-only models such as RoBERTa, only the encoder flag is needed. The encoder and decoder LayerDrop values can be set differently.
-```
---encoder-layerdrop 0.2 --decoder-layerdrop 0.2
-```
-
-To prune a model that has been trained with LayerDrop, add the following flags, each followed by a comma-separated list of the layers you would like to keep.
-```
---encoder-layers-to-keep 0,2,4,6,8,10,12,14 --decoder-layers-to-keep 0,2,4,6,8,10,12,14
-```
-Setting these flags should print a message such as:
-```
-| Pruning model to specified layer configuration
-```
-You should also see a smaller number of parameters in the model. For example, the 16-layer Transformer Language Model prints:
-```
-num. model params: 246933504
-```
-while a model pruned to 8 Layers prints:
-```
-num. model params: 146163712
-```
-
-If you would like to pick up training with a model that has been pruned, simply adding these flags is sufficient. If you would like to use a script that only does evaluation (no training), you may need to pass an override command. A specific example would be for language modeling:
-```bash
-fairseq-eval-lm /path/to/wikitext-103 \
- --path /path/to/model/checkpoint.pt \
- --model-overrides "{'decoder_layers_to_keep':'0,2,4,6,8,10,12,14'}"
-```
-This model override command overrides the training parameters and updates the model arguments so that the pruned model is run instead of the full model.
-
-## Reproduce Paper Results
-
-Looking to reproduce the results in the paper?
-
-1. For Translation on WMT16 en-de, we followed this setting [here](https://github.com/pytorch/fairseq/blob/main/examples/scaling_nmt/README.md)
-2. To train RoBERTa, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/roberta)
-3. To train Language Models on Wikitext-103, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model)
-
-
-## Tips
-
-1. If you would like to train large models with better performance, LayerDrop should be set to a smaller value such as 0.1 or 0.2. Too much LayerDrop will mean the model has too much regularization, so it may not reach the best performance. Since LayerDrop adds regularization, you may achieve the best performance by slightly reducing the amount of standard dropout (for example, reduce by 0.1).
-
-2. If you would like to train large models to be pruned and made smaller, LayerDrop should be set to a larger value such as 0.5 if you want to prune very aggressively (such as removing half the network or more). If you would like to prune fewer layers away, LayerDrop can be set to a smaller value such as 0.2. Our experiments were conducted with low values of LayerDrop (such as 0.1 and 0.2), for reference.
-
-3. When pruning layers at inference time, it is best to spread out the layers remaining so they are evenly spaced throughout the network. For example, if you want to remove 50% of the network, keeping every other layer is good.
-
-
-## FAQ
-
-1. How did the sharing layers experiment work? In an appendix (https://openreview.net/pdf?id=SylO2yStDr) we added an experiment on Wikitext-103 language modeling that combined LayerDrop with Weight Sharing. We shared chunks of 2 layers such that every other layer had shared weights. For example, if our network has layers 1 through 6, then layer 1 and 2 are shared, layer 3 and 4 are shared, and layer 5 and 6 are shared.
-
-2. LayerDrop hasn't been helping in my setting? During training time, LayerDrop can help regularize your network. This is most important if your network is already overfitting - if your network is underfitting, it is possible LayerDrop is adding too much regularization. We recommend using smaller values (such as 0.1 or 0.2) and also decreasing the quantity of standard dropout (for example, reduce by 0.1).
-
-3. Can you train a model without LayerDrop and finetune with LayerDrop (e.g. for BERT)? In our experiments, we did not see great performance. Models such as RoBERTa have trained for a long time in the pre-training setting, so only finetuning with LayerDrop for a few epochs on a downstream task such as MNLI does not achieve the robustness required for successful pruning.
-
-
-## Having an issue or have a question?
-
-Please open an issue in this repository with the details of your question. Thanks!
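Tip 3 in the README above recommends keeping evenly spaced layers when pruning. A small, hypothetical helper for building the comma-separated value passed to `--encoder-layers-to-keep` / `--decoder-layers-to-keep`:

```python
# Hypothetical helper for Tip 3: pick evenly spaced layer indices to keep.
def layers_to_keep(total_layers: int, keep: int) -> str:
    step = total_layers / keep
    return ",".join(str(int(i * step)) for i in range(keep))

print(layers_to_keep(16, 8))   # 0,2,4,6,8,10,12,14  (matches the example flags above)
```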
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/strip_token_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/strip_token_dataset.py
deleted file mode 100644
index cae39ba4d2f8106398eccd7eb0cf5c2194ec0db5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/strip_token_dataset.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import BaseWrapperDataset
-
-
-class StripTokenDataset(BaseWrapperDataset):
- def __init__(self, dataset, id_to_strip):
- super().__init__(dataset)
- self.id_to_strip = id_to_strip
-
- def __getitem__(self, index):
- item = self.dataset[index]
- while len(item) > 0 and item[-1] == self.id_to_strip:
- item = item[:-1]
- while len(item) > 0 and item[0] == self.id_to_strip:
- item = item[1:]
- return item
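`StripTokenDataset` simply trims a single token id from both ends of each wrapped item. A toy illustration, assuming `fairseq.data.StripTokenDataset` is importable; real fairseq pipelines wrap indexed datasets rather than Python lists:

```python
# Toy illustration of StripTokenDataset; ToyDataset is a stand-in for an indexed dataset.
import torch
from fairseq.data import StripTokenDataset

class ToyDataset:
    def __init__(self, items):
        self.items = items
    def __getitem__(self, index):
        return self.items[index]
    def __len__(self):
        return len(self.items)

eos = 2
ds = StripTokenDataset(ToyDataset([torch.tensor([2, 5, 6, 7, 2, 2])]), id_to_strip=eos)
print(ds[0])   # tensor([5, 6, 7]) -- leading/trailing eos removed
```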
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_bw.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_bw.py
deleted file mode 100644
index b0bc913651bd76667e25c214acb70f2bca19e185..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_bw.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-from contextlib import redirect_stdout
-
-from fairseq import options
-from fairseq_cli import generate
-
-from examples.noisychannel import rerank_options, rerank_utils
-
-
-def score_bw(args):
- if args.backwards1:
- scorer1_src = args.target_lang
- scorer1_tgt = args.source_lang
- else:
- scorer1_src = args.source_lang
- scorer1_tgt = args.target_lang
-
- if args.score_model2 is not None:
- if args.backwards2:
- scorer2_src = args.target_lang
- scorer2_tgt = args.source_lang
- else:
- scorer2_src = args.source_lang
- scorer2_tgt = args.target_lang
-
- rerank1_is_gen = (
- args.gen_model == args.score_model1 and args.source_prefix_frac is None
- )
- rerank2_is_gen = (
- args.gen_model == args.score_model2 and args.source_prefix_frac is None
- )
-
- (
- pre_gen,
- left_to_right_preprocessed_dir,
- right_to_left_preprocessed_dir,
- backwards_preprocessed_dir,
- lm_preprocessed_dir,
- ) = rerank_utils.get_directories(
- args.data_dir_name,
- args.num_rescore,
- args.gen_subset,
- args.gen_model_name,
- args.shard_id,
- args.num_shards,
- args.sampling,
- args.prefix_len,
- args.target_prefix_frac,
- args.source_prefix_frac,
- )
-
- score1_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model1_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards1,
- )
-
- if args.score_model2 is not None:
- score2_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model2_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards2,
- )
-
- if args.right_to_left1:
- rerank_data1 = right_to_left_preprocessed_dir
- elif args.backwards1:
- rerank_data1 = backwards_preprocessed_dir
- else:
- rerank_data1 = left_to_right_preprocessed_dir
-
- gen_param = ["--batch-size", str(128), "--score-reference", "--gen-subset", "train"]
- if not rerank1_is_gen and not os.path.isfile(score1_file):
- print("STEP 4: score the translations for model 1")
-
- model_param1 = [
- "--path",
- args.score_model1,
- "--source-lang",
- scorer1_src,
- "--target-lang",
- scorer1_tgt,
- ]
- gen_model1_param = [rerank_data1] + gen_param + model_param1
-
- gen_parser = options.get_generation_parser()
- input_args = options.parse_args_and_arch(gen_parser, gen_model1_param)
-
- with open(score1_file, "w") as f:
- with redirect_stdout(f):
- generate.main(input_args)
-
- if (
- args.score_model2 is not None
- and not os.path.isfile(score2_file)
- and not rerank2_is_gen
- ):
- print("STEP 4: score the translations for model 2")
-
- if args.right_to_left2:
- rerank_data2 = right_to_left_preprocessed_dir
- elif args.backwards2:
- rerank_data2 = backwards_preprocessed_dir
- else:
- rerank_data2 = left_to_right_preprocessed_dir
-
- model_param2 = [
- "--path",
- args.score_model2,
- "--source-lang",
- scorer2_src,
- "--target-lang",
- scorer2_tgt,
- ]
- gen_model2_param = [rerank_data2] + gen_param + model_param2
-
- gen_parser = options.get_generation_parser()
- input_args = options.parse_args_and_arch(gen_parser, gen_model2_param)
-
- with open(score2_file, "w") as f:
- with redirect_stdout(f):
- generate.main(input_args)
-
-
-def cli_main():
- parser = rerank_options.get_reranking_parser()
- args = options.parse_args_and_arch(parser)
- score_bw(args)
-
-
-if __name__ == "__main__":
- cli_main()
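The scoring steps above run `generate.main` with `--score-reference` and capture its stdout into the rescore file via `redirect_stdout`. A minimal sketch of that capture pattern with a stand-in generator (the output path is hypothetical):

```python
# Minimal sketch of the stdout-capture pattern used for the score files above.
from contextlib import redirect_stdout

def fake_generate():
    # stand-in for fairseq_cli.generate.main(input_args)
    print("H-0\t-1.2345\thello world")

score_file = "/tmp/score_model1.txt"   # hypothetical path
with open(score_file, "w") as f:
    with redirect_stdout(f):
        fake_generate()
```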
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/legacy/block_pair_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/legacy/block_pair_dataset.py
deleted file mode 100644
index ba069b46052286c531b4f9706d96788732cd2ad2..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/legacy/block_pair_dataset.py
+++ /dev/null
@@ -1,311 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import numpy as np
-import torch
-from fairseq.data import FairseqDataset
-
-
-class BlockPairDataset(FairseqDataset):
- """Break a Dataset of tokens into sentence pair blocks for next sentence
- prediction as well as masked language model.
-
- The high-level logic is:
- 1. break the input tensor into tensor blocks
- 2. pair the blocks with 50% next sentence and 50% random sentence
- 3. return paired blocks as well as related segment labels
-
- Args:
- dataset (~torch.utils.data.Dataset): dataset to break into blocks
- sizes: array of sentence lengths
- dictionary: dictionary for the task
- block_size: maximum block size
- break_mode: mode for breaking the corpus into block pairs. Currently we support
- 2 modes
- doc: respect document boundaries; each part of the pair should belong to one document
- none: don't respect any boundary and cut tokens evenly
- short_seq_prob: probability for generating shorter block pairs
- doc_break_size: Size for empty line separating documents. Typically 1 if
- the sentences have eos, 0 otherwise.
- """
-
- def __init__(
- self,
- dataset,
- dictionary,
- sizes,
- block_size,
- break_mode="doc",
- short_seq_prob=0.1,
- doc_break_size=1,
- ):
- super().__init__()
- self.dataset = dataset
- self.pad = dictionary.pad()
- self.eos = dictionary.eos()
- self.cls = dictionary.cls()
- self.mask = dictionary.mask()
- self.sep = dictionary.sep()
- self.break_mode = break_mode
- self.dictionary = dictionary
- self.short_seq_prob = short_seq_prob
- self.block_indices = []
-
- assert len(dataset) == len(sizes)
-
- if break_mode == "doc":
- cur_doc = []
- for sent_id, sz in enumerate(sizes):
- assert doc_break_size == 0 or sz != 0, (
- "when doc_break_size is non-zero, we expect documents to be"
- "separated by a blank line with a single eos."
- )
- # empty line as document separator
- if sz == doc_break_size:
- if len(cur_doc) == 0:
- continue
- self.block_indices.append(cur_doc)
- cur_doc = []
- else:
- cur_doc.append(sent_id)
- max_num_tokens = block_size - 3 # Account for [CLS], [SEP], [SEP]
- self.sent_pairs = []
- self.sizes = []
- for doc_id, doc in enumerate(self.block_indices):
- self._generate_sentence_pair(doc, doc_id, max_num_tokens, sizes)
- elif break_mode is None or break_mode == "none":
- # each block should have half of the block size since we are constructing block pair
- sent_length = (block_size - 3) // 2
- total_len = sum(dataset.sizes)
- length = math.ceil(total_len / sent_length)
-
- def block_at(i):
- start = i * sent_length
- end = min(start + sent_length, total_len)
- return (start, end)
-
- sent_indices = np.array([block_at(i) for i in range(length)])
- sent_sizes = np.array([e - s for s, e in sent_indices])
- dataset_index = self._sent_to_dataset_index(sent_sizes)
-
- # pair sentences
- self._pair_sentences(dataset_index)
- else:
- raise ValueError("Invalid break_mode: " + break_mode)
-
- def _pair_sentences(self, dataset_index):
- """
- Given a list of evenly cut blocks/sentences, pair these sentences with 50%
- consecutive sentences and 50% random sentences.
- This is used for the "none" break mode.
- """
- # pair sentences
- for sent_id, sent in enumerate(dataset_index):
- next_sent_label = (
- 1 if np.random.rand() > 0.5 and sent_id != len(dataset_index) - 1 else 0
- )
- if next_sent_label:
- next_sent = dataset_index[sent_id + 1]
- else:
- next_sent = dataset_index[
- self._skip_sampling(len(dataset_index), [sent_id, sent_id + 1])
- ]
- self.sent_pairs.append((sent, next_sent, next_sent_label))
-
- # The current blocks don't include the special tokens but the
- # sizes already account for this
- self.sizes.append(3 + sent[3] + next_sent[3])
-
- def _sent_to_dataset_index(self, sent_sizes):
- """
- Build index mapping block indices to the underlying dataset indices
- """
- dataset_index = []
- ds_idx, ds_remaining = -1, 0
- for to_consume in sent_sizes:
- sent_size = to_consume
- if ds_remaining == 0:
- ds_idx += 1
- ds_remaining = sent_sizes[ds_idx]
- start_ds_idx = ds_idx
- start_offset = sent_sizes[ds_idx] - ds_remaining
- while to_consume > ds_remaining:
- to_consume -= ds_remaining
- ds_idx += 1
- ds_remaining = sent_sizes[ds_idx]
- ds_remaining -= to_consume
- dataset_index.append(
- (
- start_ds_idx, # starting index in dataset
- start_offset, # starting offset within starting index
- ds_idx, # ending index in dataset
- sent_size, # sentence length
- )
- )
- assert ds_remaining == 0
- assert ds_idx == len(self.dataset) - 1
- return dataset_index
-
- def _generate_sentence_pair(self, doc, doc_id, max_num_tokens, sizes):
- """
- Go through a single document and generate sentence pairs from it
- """
- current_chunk = []
- current_length = 0
- curr = 0
- # To provide more randomness, we decrease target seq length for parts of
- # samples (10% by default). Note that max_num_tokens is the hard threshold
- # for batching and will never be changed.
- target_seq_length = max_num_tokens
- if np.random.random() < self.short_seq_prob:
- target_seq_length = np.random.randint(2, max_num_tokens)
- # loop through all sentences in document
- while curr < len(doc):
- sent_id = doc[curr]
- current_chunk.append(sent_id)
- current_length = sum(sizes[current_chunk])
- # split chunk and generate pair when exceed target_seq_length or
- # finish the loop
- if curr == len(doc) - 1 or current_length >= target_seq_length:
- # split the chunk into 2 parts
- a_end = 1
- if len(current_chunk) > 2:
- a_end = np.random.randint(1, len(current_chunk) - 1)
- sent_a = current_chunk[:a_end]
- len_a = sum(sizes[sent_a])
- # generate next sentence label, note that if there is only 1 sentence
- # in current chunk, label is always 0
- next_sent_label = (
- 1 if np.random.rand() > 0.5 and len(current_chunk) != 1 else 0
- )
- if not next_sent_label:
- # if next sentence label is 0, sample sent_b from a random doc
- target_b_length = target_seq_length - len_a
- rand_doc_id = self._skip_sampling(len(self.block_indices), [doc_id])
- random_doc = self.block_indices[rand_doc_id]
- random_start = np.random.randint(0, len(random_doc))
- sent_b = []
- len_b = 0
- for j in range(random_start, len(random_doc)):
- sent_b.append(random_doc[j])
- len_b = sum(sizes[sent_b])
- if len_b >= target_b_length:
- break
- # put the unused tail of the chunk back so it is processed in later iterations
- num_unused_segments = len(current_chunk) - a_end
- curr -= num_unused_segments
- else:
- # if next sentence label is 1, use the second part of chunk as sent_B
- sent_b = current_chunk[a_end:]
- len_b = sum(sizes[sent_b])
- # currently sent_a and sent_B may be longer than max_num_tokens,
- # truncate them and return block idx and offsets for them
- sent_a, sent_b = self._truncate_sentences(
- sent_a, sent_b, max_num_tokens
- )
- self.sent_pairs.append((sent_a, sent_b, next_sent_label))
- self.sizes.append(3 + sent_a[3] + sent_b[3])
- current_chunk = []
- curr += 1
-
- def _skip_sampling(self, total, skip_ids):
- """
- Generate a random integer which is not in skip_ids. Sample range is [0, total)
- TODO: ids in skip_ids should be consecutive, we can extend it to more generic version later
- """
- rand_id = np.random.randint(total - len(skip_ids))
- return rand_id if rand_id < min(skip_ids) else rand_id + len(skip_ids)
-
- def _truncate_sentences(self, sent_a, sent_b, max_num_tokens):
- """
- Truncate a pair of sentences to limit the total length to max_num_tokens
- Logic:
- 1. Truncate the longer sentence
- 2. Tokens to be truncated could be at the beginning or the end of the sentence
- Returns:
- Truncated sentences represented by dataset idx
- """
- len_a, len_b = sum(self.dataset.sizes[sent_a]), sum(self.dataset.sizes[sent_b])
- front_cut_a = front_cut_b = end_cut_a = end_cut_b = 0
-
- while True:
- total_length = (
- len_a + len_b - front_cut_a - front_cut_b - end_cut_a - end_cut_b
- )
- if total_length <= max_num_tokens:
- break
-
- if len_a - front_cut_a - end_cut_a > len_b - front_cut_b - end_cut_b:
- if np.random.rand() < 0.5:
- front_cut_a += 1
- else:
- end_cut_a += 1
- else:
- if np.random.rand() < 0.5:
- front_cut_b += 1
- else:
- end_cut_b += 1
-
- # calculate ds indices as well as offsets and return
- truncated_sent_a = self._cut_sentence(sent_a, front_cut_a, end_cut_a)
- truncated_sent_b = self._cut_sentence(sent_b, front_cut_b, end_cut_b)
- return truncated_sent_a, truncated_sent_b
-
- def _cut_sentence(self, sent, front_cut, end_cut):
- """
- Cut a sentence based on the numbers of tokens to be cut from beginning and end
- Represent the sentence as dataset idx and return
- """
- start_ds_idx, end_ds_idx, offset = sent[0], sent[-1], 0
- target_len = sum(self.dataset.sizes[sent]) - front_cut - end_cut
- while front_cut > 0:
- if self.dataset.sizes[start_ds_idx] > front_cut:
- offset += front_cut
- break
- else:
- front_cut -= self.dataset.sizes[start_ds_idx]
- start_ds_idx += 1
- while end_cut > 0:
- if self.dataset.sizes[end_ds_idx] > end_cut:
- break
- else:
- end_cut -= self.dataset.sizes[end_ds_idx]
- end_ds_idx -= 1
- return start_ds_idx, offset, end_ds_idx, target_len
-
- def _fetch_block(self, start_ds_idx, offset, end_ds_idx, length):
- """
- Fetch a block of tokens based on its dataset idx
- """
- buffer = torch.cat(
- [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)]
- )
- s, e = offset, offset + length
- return buffer[s:e]
-
- def __getitem__(self, index):
- block1, block2, next_sent_label = self.sent_pairs[index]
- block1 = self._fetch_block(*block1)
- block2 = self._fetch_block(*block2)
- return block1, block2, next_sent_label
-
- def __len__(self):
- return len(self.sizes)
-
- @property
- def supports_prefetch(self):
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- prefetch_idx = set()
- for index in indices:
- for block1, block2, _ in [self.sent_pairs[index]]:
- for ds_idx in range(block1[0], block1[2] + 1):
- prefetch_idx.add(ds_idx)
- for ds_idx in range(block2[0], block2[2] + 1):
- prefetch_idx.add(ds_idx)
- self.dataset.prefetch(prefetch_idx)
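`_skip_sampling` above draws a random index outside a small set of ids by shrinking the sampling range and shifting any draw that lands at or past the skipped block; as the TODO in the code notes, this only works when `skip_ids` are consecutive. A standalone sketch:

```python
# Standalone sketch of the _skip_sampling trick (skip_ids must be consecutive).
import numpy as np

def skip_sampling(total, skip_ids):
    rand_id = np.random.randint(total - len(skip_ids))
    return rand_id if rand_id < min(skip_ids) else rand_id + len(skip_ids)

np.random.seed(0)
print(sorted({skip_sampling(10, [3, 4]) for _ in range(200)}))  # 3 and 4 never appear
```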
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/numel_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/numel_dataset.py
deleted file mode 100644
index ac86dfd2f1d89055de909656d61d6aca85523f00..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/numel_dataset.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import BaseWrapperDataset
-
-
-class NumelDataset(BaseWrapperDataset):
- def __init__(self, dataset, reduce=False):
- super().__init__(dataset)
- self.reduce = reduce
-
- def __getitem__(self, index):
- item = self.dataset[index]
- if torch.is_tensor(item):
- return torch.numel(item)
- else:
- return np.size(item)
-
- def __len__(self):
- return len(self.dataset)
-
- def collater(self, samples):
- if self.reduce:
- return sum(samples)
- else:
- return torch.tensor(samples)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/roberta/model.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/roberta/model.py
deleted file mode 100644
index 77a80ef72057219110b34678a38705549910edd3..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/roberta/model.py
+++ /dev/null
@@ -1,225 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-RoBERTa: A Robustly Optimized BERT Pretraining Approach.
-"""
-
-import logging
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.model_parallel.models.transformer import ModelParallelTransformerEncoder
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.roberta import (
- roberta_base_architecture,
- roberta_prenorm_architecture,
- RobertaEncoder,
- RobertaModel,
-)
-from fairseq.modules import LayerNorm
-
-
-try:
- from fairseq.model_parallel.megatron.mpu import (
- copy_to_model_parallel_region,
- gather_from_model_parallel_region,
- ColumnParallelLinear,
- VocabParallelEmbedding,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("model_parallel_roberta")
-class ModelParallelRobertaModel(RobertaModel):
- def __init__(self, args, encoder):
- super().__init__(args, encoder)
-
- self.classification_heads = nn.ModuleDict()
-
- @staticmethod
- def add_args(parser):
- RobertaModel.add_args(parser)
- parser.add_argument(
- "--no-final-layer-norm",
- action="store_true",
- help=(
- "don't add final layernorm (only applicable when "
- "--encoder-normalize-before=True"
- ),
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present
- base_architecture(args)
-
- task.source_dictionary.pad_to_multiple_(args.model_parallel_size * 8)
- task.target_dictionary.pad_to_multiple_(args.model_parallel_size * 8)
-
- if not hasattr(args, "max_positions"):
- args.max_positions = args.tokens_per_sample
-
- if getattr(args, "untie_weights_roberta", False):
- raise NotImplementedError(
- "--untie-weights-roberta is not supported in model parallel mode"
- )
-
- encoder = ModelParallelRobertaEncoder(args, task.source_dictionary)
- return cls(args, encoder)
-
- def forward(
- self,
- src_tokens,
- features_only=False,
- return_all_hiddens=False,
- classification_head_name=None,
- **kwargs
- ):
- if classification_head_name is not None:
- features_only = True
-
- x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs)
-
- if classification_head_name is not None:
- x = self.classification_heads[classification_head_name](x)
- return x, extra
-
- def register_classification_head(
- self, name, num_classes=None, inner_dim=None, **kwargs
- ):
- """Register a classification head."""
- if name in self.classification_heads:
- prev_num_classes = self.classification_heads[name].out_proj.out_features
- prev_inner_dim = self.classification_heads[name].dense.out_features
- if num_classes != prev_num_classes or inner_dim != prev_inner_dim:
- logger.warning(
- 're-registering head "{}" with num_classes {} (prev: {}) '
- "and inner_dim {} (prev: {})".format(
- name, num_classes, prev_num_classes, inner_dim, prev_inner_dim
- )
- )
- self.classification_heads[name] = ModelParallelRobertaClassificationHead(
- self.args.encoder_embed_dim,
- inner_dim or self.args.encoder_embed_dim,
- num_classes,
- self.args.pooler_activation_fn,
- self.args.pooler_dropout,
- )
-
-
-class ModelParallelRobertaLMHead(nn.Module):
- """Head for masked language modeling."""
-
- def __init__(self, embed_dim, output_dim, activation_fn, weight=None):
- super().__init__()
- self.dense = ColumnParallelLinear(embed_dim, embed_dim, gather_output=True)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.layer_norm = LayerNorm(embed_dim)
-
- if weight is None:
- weight = nn.Linear(embed_dim, output_dim, bias=False).weight
- self.weight = weight
- self.bias = nn.Parameter(torch.zeros(output_dim))
-
- def forward(self, features, masked_tokens=None, **kwargs):
- # Only project the unmasked tokens while training,
- # saves both memory and computation
- if masked_tokens is not None:
- features = features[masked_tokens, :]
-
- x = self.dense(features)
- x = self.activation_fn(x)
- x = self.layer_norm(x)
-
- x = copy_to_model_parallel_region(x)
- # project back to size of vocabulary with bias
- x = F.linear(x, self.weight)
- x = gather_from_model_parallel_region(x).contiguous()
- x = x + self.bias
- return x
-
-
-class ModelParallelRobertaClassificationHead(nn.Module):
- """Head for sentence-level classification tasks."""
-
- def __init__(
- self, input_dim, inner_dim, num_classes, activation_fn, pooler_dropout
- ):
- super().__init__()
- self.dense = ColumnParallelLinear(input_dim, inner_dim, gather_output=True)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.dropout = nn.Dropout(p=pooler_dropout)
- self.out_proj = nn.Linear(inner_dim, num_classes)
-
- def forward(self, features, **kwargs):
- x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
- x = self.dropout(x)
- x = self.dense(x)
- x = self.activation_fn(x)
- x = self.dropout(x)
- x = self.out_proj(x)
- return x
-
-
-class ModelParallelRobertaEncoder(RobertaEncoder):
- """RoBERTa encoder."""
-
- def __init__(self, args, dictionary):
- super().__init__(args, dictionary)
- assert not self.args.untie_weights_roberta
-
- def build_embedding(self, vocab_size, embedding_dim, padding_idx):
- return VocabParallelEmbedding(vocab_size, embedding_dim, padding_idx)
-
- def build_encoder(self, args, dictionary, embed_tokens):
- return ModelParallelTransformerEncoder(args, dictionary, embed_tokens)
-
- def build_lm_head(self, embed_dim, output_dim, activation_fn, weight):
- return ModelParallelRobertaLMHead(embed_dim, output_dim, activation_fn, weight)
-
-
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta")
-def base_architecture(args):
- args.no_final_layer_norm = getattr(args, "no_final_layer_norm", False)
- # model parallel RoBERTa defaults to "Pre-LN" formulation
- roberta_prenorm_architecture(args)
-
-
-# earlier versions of model parallel RoBERTa removed the final layer norm
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_v1")
-def model_parallel_roberta_v1_architecture(args):
- args.no_final_layer_norm = getattr(args, "no_final_layer_norm", True)
- base_architecture(args)
-
-
-@register_model_architecture(
- "model_parallel_roberta", "model_parallel_roberta_postnorm"
-)
-def model_parallel_roberta_postnorm_architecture(args):
- # the original BERT/RoBERTa uses the "Post-LN" formulation
- roberta_base_architecture(args)
-
-
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_base")
-def model_parallel_roberta_base_architecture(args):
- base_architecture(args)
-
-
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_large")
-def model_parallel_roberta_large_architecture(args):
- args.encoder_layers = getattr(args, "encoder_layers", 24)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- base_architecture(args)
diff --git a/spaces/ORI-Muchim/BlueArchiveTTS/mel_processing.py b/spaces/ORI-Muchim/BlueArchiveTTS/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/BlueArchiveTTS/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
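A hedged usage sketch of `mel_spectrogram_torch` above with commonly used 22.05 kHz parameters; the values are assumptions for illustration, not the configuration shipped with this Space, and the import assumes the file above is on the Python path as `mel_processing`:

```python
# Hedged usage sketch; parameter values and import path are assumed, not taken from this repo's config.
import torch
from mel_processing import mel_spectrogram_torch

wav = torch.randn(1, 22050).clamp(-1.0, 1.0)   # 1 second of fake mono audio in [-1, 1]
mel = mel_spectrogram_torch(
    wav, n_fft=1024, num_mels=80, sampling_rate=22050,
    hop_size=256, win_size=1024, fmin=0.0, fmax=None, center=False,
)
print(mel.shape)   # (1, 80, n_frames), roughly 22050 / 256 frames
```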
diff --git a/spaces/OlaWod/FreeVC/speaker_encoder/model.py b/spaces/OlaWod/FreeVC/speaker_encoder/model.py
deleted file mode 100644
index c022b663ee5c344c52041026bc88dc02734afa33..0000000000000000000000000000000000000000
--- a/spaces/OlaWod/FreeVC/speaker_encoder/model.py
+++ /dev/null
@@ -1,135 +0,0 @@
-from speaker_encoder.params_model import *
-from speaker_encoder.params_data import *
-from scipy.interpolate import interp1d
-from sklearn.metrics import roc_curve
-from torch.nn.utils import clip_grad_norm_
-from scipy.optimize import brentq
-from torch import nn
-import numpy as np
-import torch
-
-
-class SpeakerEncoder(nn.Module):
- def __init__(self, device, loss_device):
- super().__init__()
- self.loss_device = loss_device
-
- # Network definition
- self.lstm = nn.LSTM(input_size=mel_n_channels, # 40
- hidden_size=model_hidden_size, # 256
- num_layers=model_num_layers, # 3
- batch_first=True).to(device)
- self.linear = nn.Linear(in_features=model_hidden_size,
- out_features=model_embedding_size).to(device)
- self.relu = torch.nn.ReLU().to(device)
-
- # Cosine similarity scaling (with fixed initial parameter values)
- self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device)
- self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device)
-
- # Loss
- self.loss_fn = nn.CrossEntropyLoss().to(loss_device)
-
- def do_gradient_ops(self):
- # Gradient scale
- self.similarity_weight.grad *= 0.01
- self.similarity_bias.grad *= 0.01
-
- # Gradient clipping
- clip_grad_norm_(self.parameters(), 3, norm_type=2)
-
- def forward(self, utterances, hidden_init=None):
- """
- Computes the embeddings of a batch of utterance spectrograms.
-
- :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape
- (batch_size, n_frames, n_channels)
- :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers,
- batch_size, hidden_size). Will default to a tensor of zeros if None.
- :return: the embeddings as a tensor of shape (batch_size, embedding_size)
- """
- # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state
- # and the final cell state.
- out, (hidden, cell) = self.lstm(utterances, hidden_init)
-
- # We take only the hidden state of the last layer
- embeds_raw = self.relu(self.linear(hidden[-1]))
-
- # L2-normalize it
- embeds = embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True)
-
- return embeds
-
- def similarity_matrix(self, embeds):
- """
- Computes the similarity matrix according to section 2.1 of GE2E.
-
- :param embeds: the embeddings as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, embedding_size)
- :return: the similarity matrix as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, speakers_per_batch)
- """
- speakers_per_batch, utterances_per_speaker = embeds.shape[:2]
-
- # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation
- centroids_incl = torch.mean(embeds, dim=1, keepdim=True)
- centroids_incl = centroids_incl.clone() / torch.norm(centroids_incl, dim=2, keepdim=True)
-
- # Exclusive centroids (1 per utterance)
- centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds)
- centroids_excl /= (utterances_per_speaker - 1)
- centroids_excl = centroids_excl.clone() / torch.norm(centroids_excl, dim=2, keepdim=True)
-
- # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot
- # product of these vectors (which is just an element-wise multiplication reduced by a sum).
- # We vectorize the computation for efficiency.
- sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker,
- speakers_per_batch).to(self.loss_device)
- mask_matrix = 1 - np.eye(speakers_per_batch, dtype=int)
- for j in range(speakers_per_batch):
- mask = np.where(mask_matrix[j])[0]
- sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2)
- sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1)
-
- ## Even more vectorized version (slower maybe because of transpose)
- # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker
- # ).to(self.loss_device)
- # eye = np.eye(speakers_per_batch, dtype=np.int)
- # mask = np.where(1 - eye)
- # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2)
- # mask = np.where(eye)
- # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2)
- # sim_matrix2 = sim_matrix2.transpose(1, 2)
-
- sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias
- return sim_matrix
-
- def loss(self, embeds):
- """
- Computes the softmax loss according to section 2.1 of GE2E.
-
- :param embeds: the embeddings as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, embedding_size)
- :return: the loss and the EER for this batch of embeddings.
- """
- speakers_per_batch, utterances_per_speaker = embeds.shape[:2]
-
- # Loss
- sim_matrix = self.similarity_matrix(embeds)
- sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker,
- speakers_per_batch))
- ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker)
- target = torch.from_numpy(ground_truth).long().to(self.loss_device)
- loss = self.loss_fn(sim_matrix, target)
-
- # EER (not backpropagated)
- with torch.no_grad():
- inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=int)[0]
- labels = np.array([inv_argmax(i) for i in ground_truth])
- preds = sim_matrix.detach().cpu().numpy()
-
- # Snippet from https://yangcha.github.io/EER-ROC/
- fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten())
- eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.)
-
- return loss, eer
\ No newline at end of file
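For reference, the similarity computed by `similarity_matrix` above follows the GE2E definition: each utterance embedding is compared against every speaker centroid, using the leave-one-out (exclusive) centroid for the utterance's own speaker, then scaled by the learned weight and bias. In the paper's notation (a sketch matching the code above):

```latex
% GE2E scaled-cosine similarity (Wan et al., 2018), as implemented above.
S_{ji,k} =
\begin{cases}
  w \cdot \cos\!\big(\mathbf{e}_{ji},\, \mathbf{c}_j^{(-i)}\big) + b, & \text{if } k = j \quad \text{(exclusive centroid)} \\
  w \cdot \cos\!\big(\mathbf{e}_{ji},\, \mathbf{c}_k\big) + b,        & \text{otherwise (inclusive centroid)}
\end{cases}
```

Here \(\mathbf{c}_k\) averages speaker \(k\)'s normalized embeddings, \(\mathbf{c}_j^{(-i)}\) excludes utterance \(i\), and \(w\), \(b\) correspond to `similarity_weight` and `similarity_bias` in the module.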
diff --git a/spaces/Omnibus/MusicGen/audiocraft/models/builders.py b/spaces/Omnibus/MusicGen/audiocraft/models/builders.py
deleted file mode 100644
index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/MusicGen/audiocraft/models/builders.py
+++ /dev/null
@@ -1,218 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-All the functions to build the relevant models and modules
-from the Hydra config.
-"""
-
-import typing as tp
-import warnings
-
-import audiocraft
-import omegaconf
-import torch
-
-from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa
-from .lm import LMModel
-from ..modules.codebooks_patterns import (
- CodebooksPatternProvider,
- DelayedPatternProvider,
- ParallelPatternProvider,
- UnrolledPatternProvider,
- VALLEPattern,
- MusicLMPattern,
-)
-from ..modules.conditioners import (
- BaseConditioner,
- ConditioningProvider,
- LUTConditioner,
- T5Conditioner,
- ConditionFuser,
- ChromaStemConditioner,
-)
-from .. import quantization as qt
-from ..utils.utils import dict_from_config
-
-
-def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer:
- klass = {
- 'no_quant': qt.DummyQuantizer,
- 'rvq': qt.ResidualVectorQuantizer
- }[quantizer]
- kwargs = dict_from_config(getattr(cfg, quantizer))
- if quantizer != 'no_quant':
- kwargs['dimension'] = dimension
- return klass(**kwargs)
-
-
-def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig):
- if encoder_name == 'seanet':
- kwargs = dict_from_config(getattr(cfg, 'seanet'))
- encoder_override_kwargs = kwargs.pop('encoder')
- decoder_override_kwargs = kwargs.pop('decoder')
- encoder_kwargs = {**kwargs, **encoder_override_kwargs}
- decoder_kwargs = {**kwargs, **decoder_override_kwargs}
- encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs)
- return encoder, decoder
- else:
- raise KeyError(f'Unexpected compression model {cfg.compression_model}')
-
-
-def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel:
- """Instantiate a compression model.
- """
- if cfg.compression_model == 'encodec':
- kwargs = dict_from_config(getattr(cfg, 'encodec'))
- encoder_name = kwargs.pop('autoencoder')
- quantizer_name = kwargs.pop('quantizer')
- encoder, decoder = get_encodec_autoencoder(encoder_name, cfg)
- quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension)
- frame_rate = kwargs['sample_rate'] // encoder.hop_length
- renormalize = kwargs.pop('renormalize', None)
- renorm = kwargs.pop('renorm')
- if renormalize is None:
- renormalize = renorm is not None
- warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.")
- return EncodecModel(encoder, decoder, quantizer,
- frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device)
- else:
- raise KeyError(f'Unexpected compression model {cfg.compression_model}')
-
-
-def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel:
- """Instantiate a transformer LM.
- """
- if cfg.lm_model == 'transformer_lm':
- kwargs = dict_from_config(getattr(cfg, 'transformer_lm'))
- n_q = kwargs['n_q']
- q_modeling = kwargs.pop('q_modeling', None)
- codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern')
- attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout'))
- cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance'))
- cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"]
- fuser = get_condition_fuser(cfg)
- condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device)
-        if len(fuser.fuse2cond['cross']) > 0:  # enforce cross-attention programmatically
- kwargs['cross_attention'] = True
- if codebooks_pattern_cfg.modeling is None:
- assert q_modeling is not None, \
- 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling'
- codebooks_pattern_cfg = omegaconf.OmegaConf.create(
- {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}}
- )
- pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg)
- return LMModel(
- pattern_provider=pattern_provider,
- condition_provider=condition_provider,
- fuser=fuser,
- cfg_dropout=cfg_prob,
- cfg_coef=cfg_coef,
- attribute_dropout=attribute_dropout,
- dtype=getattr(torch, cfg.dtype),
- device=cfg.device,
- **kwargs
- ).to(cfg.device)
- else:
- raise KeyError(f'Unexpected LM model {cfg.lm_model}')
-
-
-def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider:
- """Instantiate a conditioning model.
- """
- device = cfg.device
- duration = cfg.dataset.segment_duration
- cfg = getattr(cfg, "conditioners")
- cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg
- conditioners: tp.Dict[str, BaseConditioner] = {}
- with omegaconf.open_dict(cfg):
- condition_provider_args = cfg.pop('args', {})
- for cond, cond_cfg in cfg.items():
- model_type = cond_cfg["model"]
- model_args = cond_cfg[model_type]
- if model_type == "t5":
- conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args)
- elif model_type == "lut":
- conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args)
- elif model_type == "chroma_stem":
- model_args.pop('cache_path', None)
- conditioners[str(cond)] = ChromaStemConditioner(
- output_dim=output_dim,
- duration=duration,
- device=device,
- **model_args
- )
- else:
- raise ValueError(f"unrecognized conditioning model: {model_type}")
- conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args)
- return conditioner
-
-
-def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser:
- """Instantiate a condition fuser object.
- """
- fuser_cfg = getattr(cfg, "fuser")
- fuser_methods = ["sum", "cross", "prepend", "input_interpolate"]
- fuse2cond = {k: fuser_cfg[k] for k in fuser_methods}
- kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods}
- fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs)
- return fuser
-
-
-def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider:
- """Instantiate a codebooks pattern provider object.
- """
- pattern_providers = {
- 'parallel': ParallelPatternProvider,
- 'delay': DelayedPatternProvider,
- 'unroll': UnrolledPatternProvider,
- 'valle': VALLEPattern,
- 'musiclm': MusicLMPattern,
- }
- name = cfg.modeling
- kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {}
- klass = pattern_providers[name]
- return klass(n_q, **kwargs)
-
-
-def get_debug_compression_model(device='cpu'):
- """Instantiate a debug compression model to be used for unit tests.
- """
- seanet_kwargs = {
- 'n_filters': 4,
- 'n_residual_layers': 1,
- 'dimension': 32,
- 'ratios': [10, 8, 16] # 25 Hz at 32kHz
- }
- encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs)
- quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4)
- init_x = torch.randn(8, 32, 128)
- quantizer(init_x, 1) # initialize kmeans etc.
- compression_model = EncodecModel(
- encoder, decoder, quantizer,
- frame_rate=25, sample_rate=32000, channels=1).to(device)
- return compression_model.eval()
-
-
-def get_debug_lm_model(device='cpu'):
- """Instantiate a debug LM to be used for unit tests.
- """
- pattern = DelayedPatternProvider(n_q=4)
- dim = 16
- providers = {
- 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"),
- }
- condition_provider = ConditioningProvider(providers)
- fuser = ConditionFuser(
- {'cross': ['description'], 'prepend': [],
- 'sum': [], 'input_interpolate': []})
- lm = LMModel(
- pattern, condition_provider, fuser,
- n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2,
- cross_attention=True, causal=True)
- return lm.to(device).eval()
diff --git a/spaces/Omnibus/TTS-voice-clone/app.py b/spaces/Omnibus/TTS-voice-clone/app.py
deleted file mode 100644
index 80f1e600efb272967f408417030b4da7c18ae517..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/TTS-voice-clone/app.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import gradio as gr
-
-'''
-from TTS.api import TTS
-from bark import SAMPLE_RATE, generate_audio, preload_models
-from scipy.io.wavfile import write as write_wav
-#from IPython.display import Audio
-
-# download and load all models
-#preload_models()
-
-def bark_try():
- # generate audio from text
- text_prompt = """
- Hello, my name is Suno. And, uh — and I like pizza. [laughs]
- But I also have other interests such as playing tic tac toe.
- """
- audio_array = generate_audio(text_prompt)
-
- # save audio to disk
- write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)
-
- # play text in notebook
- #Audio(audio_array, rate=SAMPLE_RATE)
- return ("bark_generation.wav")
-def try1():
- #model_name1 = TTS.list_models()
- #print (f"model1 Name: {model_name1}")
- #model_name = model_name1[0]
- #print (f"model2 Name: {model_name}")
- # Init TTS
- tts = TTS("tts_models/multilingual/multi-dataset/bark", gpu=False)
- # Run TTS
- # Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
- # Text to speech with a numpy output
- #wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
- # Text to speech to a file
- tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
- out = "output.wav"
- return out
-
-#def try2():
- #tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=False)
- #tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
- #tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr", file_path="output.wav")
- #tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt", file_path="output.wav")
- #out = "output.wav"
- #return out
-'''
-
-model = gr.Interface.load("models/suno/bark")
-def bark_try_2():
- out = model("this is some text")
- return out
-
-with gr.Blocks() as app:
- out1 = gr.Audio()
- btn1 = gr.Button()
- btn2 = gr.Button()
-
- btn1.click(bark_try_2,None,out1)
- #btn2.click(try1,None,out1)
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/string-fun.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/string-fun.go
deleted file mode 100644
index aff9766bd38d2963746fea47f850e0c0201dad87..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/string-fun.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/candle-llama2/worker.js b/spaces/PeepDaSlan9/candle-llama2/worker.js
deleted file mode 100644
index a81a4853a3a9a89dfa3e2df3826507e019ba31a0..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/candle-llama2/worker.js
+++ /dev/null
@@ -1,477 +0,0 @@
-let wasm_bindgen;
-(function() {
- const __exports = {};
- let script_src;
- if (typeof document !== 'undefined' && document.currentScript !== null) {
- script_src = new URL(document.currentScript.src, location.href).toString();
- }
- let wasm = undefined;
-
- const heap = new Array(128).fill(undefined);
-
- heap.push(undefined, null, true, false);
-
-function getObject(idx) { return heap[idx]; }
-
-let heap_next = heap.length;
-
-function dropObject(idx) {
- if (idx < 132) return;
- heap[idx] = heap_next;
- heap_next = idx;
-}
-
-function takeObject(idx) {
- const ret = getObject(idx);
- dropObject(idx);
- return ret;
-}
-
-function addHeapObject(obj) {
- if (heap_next === heap.length) heap.push(heap.length + 1);
- const idx = heap_next;
- heap_next = heap[idx];
-
- heap[idx] = obj;
- return idx;
-}
-
-const cachedTextDecoder = (typeof TextDecoder !== 'undefined' ? new TextDecoder('utf-8', { ignoreBOM: true, fatal: true }) : { decode: () => { throw Error('TextDecoder not available') } } );
-
-if (typeof TextDecoder !== 'undefined') { cachedTextDecoder.decode(); };
-
-let cachedUint8Memory0 = null;
-
-function getUint8Memory0() {
- if (cachedUint8Memory0 === null || cachedUint8Memory0.byteLength === 0) {
- cachedUint8Memory0 = new Uint8Array(wasm.memory.buffer);
- }
- return cachedUint8Memory0;
-}
-
-function getStringFromWasm0(ptr, len) {
- ptr = ptr >>> 0;
- return cachedTextDecoder.decode(getUint8Memory0().subarray(ptr, ptr + len));
-}
-
-function debugString(val) {
- // primitive types
- const type = typeof val;
- if (type == 'number' || type == 'boolean' || val == null) {
- return `${val}`;
- }
- if (type == 'string') {
- return `"${val}"`;
- }
- if (type == 'symbol') {
- const description = val.description;
- if (description == null) {
- return 'Symbol';
- } else {
- return `Symbol(${description})`;
- }
- }
- if (type == 'function') {
- const name = val.name;
- if (typeof name == 'string' && name.length > 0) {
- return `Function(${name})`;
- } else {
- return 'Function';
- }
- }
- // objects
- if (Array.isArray(val)) {
- const length = val.length;
- let debug = '[';
- if (length > 0) {
- debug += debugString(val[0]);
- }
- for(let i = 1; i < length; i++) {
- debug += ', ' + debugString(val[i]);
- }
- debug += ']';
- return debug;
- }
- // Test for built-in
- const builtInMatches = /\[object ([^\]]+)\]/.exec(toString.call(val));
- let className;
- if (builtInMatches.length > 1) {
- className = builtInMatches[1];
- } else {
- // Failed to match the standard '[object ClassName]'
- return toString.call(val);
- }
- if (className == 'Object') {
- // we're a user defined class or Object
- // JSON.stringify avoids problems with cycles, and is generally much
- // easier than looping through ownProperties of `val`.
- try {
- return 'Object(' + JSON.stringify(val) + ')';
- } catch (_) {
- return 'Object';
- }
- }
- // errors
- if (val instanceof Error) {
- return `${val.name}: ${val.message}\n${val.stack}`;
- }
- // TODO we could test for more things here, like `Set`s and `Map`s.
- return className;
-}
-
-let WASM_VECTOR_LEN = 0;
-
-const cachedTextEncoder = (typeof TextEncoder !== 'undefined' ? new TextEncoder('utf-8') : { encode: () => { throw Error('TextEncoder not available') } } );
-
-const encodeString = (typeof cachedTextEncoder.encodeInto === 'function'
- ? function (arg, view) {
- return cachedTextEncoder.encodeInto(arg, view);
-}
- : function (arg, view) {
- const buf = cachedTextEncoder.encode(arg);
- view.set(buf);
- return {
- read: arg.length,
- written: buf.length
- };
-});
-
-function passStringToWasm0(arg, malloc, realloc) {
-
- if (realloc === undefined) {
- const buf = cachedTextEncoder.encode(arg);
- const ptr = malloc(buf.length, 1) >>> 0;
- getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf);
- WASM_VECTOR_LEN = buf.length;
- return ptr;
- }
-
- let len = arg.length;
- let ptr = malloc(len, 1) >>> 0;
-
- const mem = getUint8Memory0();
-
- let offset = 0;
-
- for (; offset < len; offset++) {
- const code = arg.charCodeAt(offset);
- if (code > 0x7F) break;
- mem[ptr + offset] = code;
- }
-
- if (offset !== len) {
- if (offset !== 0) {
- arg = arg.slice(offset);
- }
- ptr = realloc(ptr, len, len = offset + arg.length * 3, 1) >>> 0;
- const view = getUint8Memory0().subarray(ptr + offset, ptr + len);
- const ret = encodeString(arg, view);
-
- offset += ret.written;
- }
-
- WASM_VECTOR_LEN = offset;
- return ptr;
-}
-
-let cachedInt32Memory0 = null;
-
-function getInt32Memory0() {
- if (cachedInt32Memory0 === null || cachedInt32Memory0.byteLength === 0) {
- cachedInt32Memory0 = new Int32Array(wasm.memory.buffer);
- }
- return cachedInt32Memory0;
-}
-
-function makeClosure(arg0, arg1, dtor, f) {
- const state = { a: arg0, b: arg1, cnt: 1, dtor };
- const real = (...args) => {
- // First up with a closure we increment the internal reference
- // count. This ensures that the Rust closure environment won't
- // be deallocated while we're invoking it.
- state.cnt++;
- try {
- return f(state.a, state.b, ...args);
- } finally {
- if (--state.cnt === 0) {
- wasm.__wbindgen_export_2.get(state.dtor)(state.a, state.b);
- state.a = 0;
-
- }
- }
- };
- real.original = state;
-
- return real;
-}
-function __wbg_adapter_22(arg0, arg1, arg2) {
- wasm._dyn_core__ops__function__Fn__A____Output___R_as_wasm_bindgen__closure__WasmClosure___describe__invoke__h394c66cd6bc0689a(arg0, arg1, addHeapObject(arg2));
-}
-
-function handleError(f, args) {
- try {
- return f.apply(this, args);
- } catch (e) {
- wasm.__wbindgen_exn_store(addHeapObject(e));
- }
-}
-
-async function __wbg_load(module, imports) {
- if (typeof Response === 'function' && module instanceof Response) {
- if (typeof WebAssembly.instantiateStreaming === 'function') {
- try {
- return await WebAssembly.instantiateStreaming(module, imports);
-
- } catch (e) {
- if (module.headers.get('Content-Type') != 'application/wasm') {
- console.warn("`WebAssembly.instantiateStreaming` failed because your server does not serve wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. Original error:\n", e);
-
- } else {
- throw e;
- }
- }
- }
-
- const bytes = await module.arrayBuffer();
- return await WebAssembly.instantiate(bytes, imports);
-
- } else {
- const instance = await WebAssembly.instantiate(module, imports);
-
- if (instance instanceof WebAssembly.Instance) {
- return { instance, module };
-
- } else {
- return instance;
- }
- }
-}
-
-function __wbg_get_imports() {
- const imports = {};
- imports.wbg = {};
- imports.wbg.__wbindgen_object_drop_ref = function(arg0) {
- takeObject(arg0);
- };
- imports.wbg.__wbindgen_object_clone_ref = function(arg0) {
- const ret = getObject(arg0);
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_log_3af90b48c052f90b = function(arg0, arg1) {
- console.log(getStringFromWasm0(arg0, arg1));
- };
- imports.wbg.__wbindgen_string_new = function(arg0, arg1) {
- const ret = getStringFromWasm0(arg0, arg1);
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_getRandomValues_37fa2ca9e4e07fab = function() { return handleError(function (arg0, arg1) {
- getObject(arg0).getRandomValues(getObject(arg1));
- }, arguments) };
- imports.wbg.__wbg_randomFillSync_dc1e9a60c158336d = function() { return handleError(function (arg0, arg1) {
- getObject(arg0).randomFillSync(takeObject(arg1));
- }, arguments) };
- imports.wbg.__wbg_crypto_c48a774b022d20ac = function(arg0) {
- const ret = getObject(arg0).crypto;
- return addHeapObject(ret);
- };
- imports.wbg.__wbindgen_is_object = function(arg0) {
- const val = getObject(arg0);
- const ret = typeof(val) === 'object' && val !== null;
- return ret;
- };
- imports.wbg.__wbg_process_298734cf255a885d = function(arg0) {
- const ret = getObject(arg0).process;
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_versions_e2e78e134e3e5d01 = function(arg0) {
- const ret = getObject(arg0).versions;
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_node_1cd7a5d853dbea79 = function(arg0) {
- const ret = getObject(arg0).node;
- return addHeapObject(ret);
- };
- imports.wbg.__wbindgen_is_string = function(arg0) {
- const ret = typeof(getObject(arg0)) === 'string';
- return ret;
- };
- imports.wbg.__wbg_msCrypto_bcb970640f50a1e8 = function(arg0) {
- const ret = getObject(arg0).msCrypto;
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_require_8f08ceecec0f4fee = function() { return handleError(function () {
- const ret = module.require;
- return addHeapObject(ret);
- }, arguments) };
- imports.wbg.__wbindgen_is_function = function(arg0) {
- const ret = typeof(getObject(arg0)) === 'function';
- return ret;
- };
- imports.wbg.__wbg_new_abda76e883ba8a5f = function() {
- const ret = new Error();
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_stack_658279fe44541cf6 = function(arg0, arg1) {
- const ret = getObject(arg1).stack;
- const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
- const len1 = WASM_VECTOR_LEN;
- getInt32Memory0()[arg0 / 4 + 1] = len1;
- getInt32Memory0()[arg0 / 4 + 0] = ptr1;
- };
- imports.wbg.__wbg_error_f851667af71bcfc6 = function(arg0, arg1) {
- let deferred0_0;
- let deferred0_1;
- try {
- deferred0_0 = arg0;
- deferred0_1 = arg1;
- console.error(getStringFromWasm0(arg0, arg1));
- } finally {
- wasm.__wbindgen_free(deferred0_0, deferred0_1, 1);
- }
- };
- imports.wbg.__wbg_setonmessage_731266b6f3ab0860 = function(arg0, arg1) {
- getObject(arg0).onmessage = getObject(arg1);
- };
- imports.wbg.__wbg_close_889c0c4e86f1403e = function(arg0) {
- getObject(arg0).close();
- };
- imports.wbg.__wbg_postMessage_2f0b8369b84c3c1e = function() { return handleError(function (arg0, arg1) {
- getObject(arg0).postMessage(getObject(arg1));
- }, arguments) };
- imports.wbg.__wbg_data_ab99ae4a2e1e8bc9 = function(arg0) {
- const ret = getObject(arg0).data;
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_newnoargs_581967eacc0e2604 = function(arg0, arg1) {
- const ret = new Function(getStringFromWasm0(arg0, arg1));
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_call_cb65541d95d71282 = function() { return handleError(function (arg0, arg1) {
- const ret = getObject(arg0).call(getObject(arg1));
- return addHeapObject(ret);
- }, arguments) };
- imports.wbg.__wbg_self_1ff1d729e9aae938 = function() { return handleError(function () {
- const ret = self.self;
- return addHeapObject(ret);
- }, arguments) };
- imports.wbg.__wbg_window_5f4faef6c12b79ec = function() { return handleError(function () {
- const ret = window.window;
- return addHeapObject(ret);
- }, arguments) };
- imports.wbg.__wbg_globalThis_1d39714405582d3c = function() { return handleError(function () {
- const ret = globalThis.globalThis;
- return addHeapObject(ret);
- }, arguments) };
- imports.wbg.__wbg_global_651f05c6a0944d1c = function() { return handleError(function () {
- const ret = global.global;
- return addHeapObject(ret);
- }, arguments) };
- imports.wbg.__wbindgen_is_undefined = function(arg0) {
- const ret = getObject(arg0) === undefined;
- return ret;
- };
- imports.wbg.__wbg_call_01734de55d61e11d = function() { return handleError(function (arg0, arg1, arg2) {
- const ret = getObject(arg0).call(getObject(arg1), getObject(arg2));
- return addHeapObject(ret);
- }, arguments) };
- imports.wbg.__wbg_buffer_085ec1f694018c4f = function(arg0) {
- const ret = getObject(arg0).buffer;
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_newwithbyteoffsetandlength_6da8e527659b86aa = function(arg0, arg1, arg2) {
- const ret = new Uint8Array(getObject(arg0), arg1 >>> 0, arg2 >>> 0);
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_new_8125e318e6245eed = function(arg0) {
- const ret = new Uint8Array(getObject(arg0));
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_set_5cf90238115182c3 = function(arg0, arg1, arg2) {
- getObject(arg0).set(getObject(arg1), arg2 >>> 0);
- };
- imports.wbg.__wbg_length_72e2208bbc0efc61 = function(arg0) {
- const ret = getObject(arg0).length;
- return ret;
- };
- imports.wbg.__wbg_newwithlength_e5d69174d6984cd7 = function(arg0) {
- const ret = new Uint8Array(arg0 >>> 0);
- return addHeapObject(ret);
- };
- imports.wbg.__wbg_subarray_13db269f57aa838d = function(arg0, arg1, arg2) {
- const ret = getObject(arg0).subarray(arg1 >>> 0, arg2 >>> 0);
- return addHeapObject(ret);
- };
- imports.wbg.__wbindgen_debug_string = function(arg0, arg1) {
- const ret = debugString(getObject(arg1));
- const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
- const len1 = WASM_VECTOR_LEN;
- getInt32Memory0()[arg0 / 4 + 1] = len1;
- getInt32Memory0()[arg0 / 4 + 0] = ptr1;
- };
- imports.wbg.__wbindgen_throw = function(arg0, arg1) {
- throw new Error(getStringFromWasm0(arg0, arg1));
- };
- imports.wbg.__wbindgen_memory = function() {
- const ret = wasm.memory;
- return addHeapObject(ret);
- };
- imports.wbg.__wbindgen_closure_wrapper91 = function(arg0, arg1, arg2) {
- const ret = makeClosure(arg0, arg1, 30, __wbg_adapter_22);
- return addHeapObject(ret);
- };
-
- return imports;
-}
-
-function __wbg_init_memory(imports, maybe_memory) {
-
-}
-
-function __wbg_finalize_init(instance, module) {
- wasm = instance.exports;
- __wbg_init.__wbindgen_wasm_module = module;
- cachedInt32Memory0 = null;
- cachedUint8Memory0 = null;
-
- wasm.__wbindgen_start();
- return wasm;
-}
-
-function initSync(module) {
- if (wasm !== undefined) return wasm;
-
- const imports = __wbg_get_imports();
-
- __wbg_init_memory(imports);
-
- if (!(module instanceof WebAssembly.Module)) {
- module = new WebAssembly.Module(module);
- }
-
- const instance = new WebAssembly.Instance(module, imports);
-
- return __wbg_finalize_init(instance, module);
-}
-
-async function __wbg_init(input) {
- if (wasm !== undefined) return wasm;
-
-    if (typeof input === 'undefined' && typeof script_src !== 'undefined') {
- input = script_src.replace(/\.js$/, '_bg.wasm');
- }
- const imports = __wbg_get_imports();
-
- if (typeof input === 'string' || (typeof Request === 'function' && input instanceof Request) || (typeof URL === 'function' && input instanceof URL)) {
- input = fetch(input);
- }
-
- __wbg_init_memory(imports);
-
- const { instance, module } = await __wbg_load(await input, imports);
-
- return __wbg_finalize_init(instance, module);
-}
-
-wasm_bindgen = Object.assign(__wbg_init, { initSync }, __exports);
-
-})();
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py
deleted file mode 100644
index 42c0790c98616bb69621deed55547fc04c7392ef..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import get_class_weight, weight_reduce_loss
-
-
-def cross_entropy(pred,
- label,
- weight=None,
- class_weight=None,
- reduction='mean',
- avg_factor=None,
- ignore_index=-100):
- """The wrapper function for :func:`F.cross_entropy`"""
- # class_weight is a manual rescaling weight given to each class.
-    # If given, it has to be a Tensor of size C.
-    # Compute the element-wise losses first; the reduction is applied afterwards.
- loss = F.cross_entropy(
- pred,
- label,
- weight=class_weight,
- reduction='none',
- ignore_index=ignore_index)
-
- # apply weights and do the reduction
- if weight is not None:
- weight = weight.float()
- loss = weight_reduce_loss(
- loss, weight=weight, reduction=reduction, avg_factor=avg_factor)
-
- return loss
-
-
-def _expand_onehot_labels(labels, label_weights, target_shape, ignore_index):
- """Expand onehot labels to match the size of prediction."""
- bin_labels = labels.new_zeros(target_shape)
- valid_mask = (labels >= 0) & (labels != ignore_index)
- inds = torch.nonzero(valid_mask, as_tuple=True)
-
- if inds[0].numel() > 0:
- if labels.dim() == 3:
- bin_labels[inds[0], labels[valid_mask], inds[1], inds[2]] = 1
- else:
- bin_labels[inds[0], labels[valid_mask]] = 1
-
- valid_mask = valid_mask.unsqueeze(1).expand(target_shape).float()
- if label_weights is None:
- bin_label_weights = valid_mask
- else:
- bin_label_weights = label_weights.unsqueeze(1).expand(target_shape)
- bin_label_weights *= valid_mask
-
- return bin_labels, bin_label_weights
-
-
-def binary_cross_entropy(pred,
- label,
- weight=None,
- reduction='mean',
- avg_factor=None,
- class_weight=None,
- ignore_index=255):
- """Calculate the binary CrossEntropy loss.
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, 1).
- label (torch.Tensor): The learning label of the prediction.
- weight (torch.Tensor, optional): Sample-wise loss weight.
- reduction (str, optional): The method used to reduce the loss.
- Options are "none", "mean" and "sum".
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- class_weight (list[float], optional): The weight for each class.
- ignore_index (int | None): The label index to be ignored. Default: 255
-
- Returns:
- torch.Tensor: The calculated loss
- """
- if pred.dim() != label.dim():
- assert (pred.dim() == 2 and label.dim() == 1) or (
- pred.dim() == 4 and label.dim() == 3), \
- 'Only pred shape [N, C], label shape [N] or pred shape [N, C, ' \
- 'H, W], label shape [N, H, W] are supported'
- label, weight = _expand_onehot_labels(label, weight, pred.shape,
- ignore_index)
-
- # weighted element-wise losses
- if weight is not None:
- weight = weight.float()
- loss = F.binary_cross_entropy_with_logits(
- pred, label.float(), pos_weight=class_weight, reduction='none')
- # do the reduction for the weighted loss
- loss = weight_reduce_loss(
- loss, weight, reduction=reduction, avg_factor=avg_factor)
-
- return loss
-
-
-def mask_cross_entropy(pred,
- target,
- label,
- reduction='mean',
- avg_factor=None,
- class_weight=None,
- ignore_index=None):
- """Calculate the CrossEntropy loss for masks.
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, C), C is the number
- of classes.
- target (torch.Tensor): The learning label of the prediction.
-        label (torch.Tensor): ``label`` indicates the class label of the mask's
-            corresponding object. It is used to select the mask of the class
-            that the object belongs to when the mask prediction is not
-            class-agnostic.
- reduction (str, optional): The method used to reduce the loss.
- Options are "none", "mean" and "sum".
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- class_weight (list[float], optional): The weight for each class.
- ignore_index (None): Placeholder, to be consistent with other loss.
- Default: None.
-
- Returns:
- torch.Tensor: The calculated loss
- """
- assert ignore_index is None, 'BCE loss does not support ignore_index'
- # TODO: handle these two reserved arguments
- assert reduction == 'mean' and avg_factor is None
- num_rois = pred.size()[0]
- inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device)
- pred_slice = pred[inds, label].squeeze(1)
- return F.binary_cross_entropy_with_logits(
- pred_slice, target, weight=class_weight, reduction='mean')[None]
-
-
-@LOSSES.register_module()
-class CrossEntropyLoss(nn.Module):
- """CrossEntropyLoss.
-
- Args:
- use_sigmoid (bool, optional): Whether the prediction uses sigmoid
-            or softmax. Defaults to False.
- use_mask (bool, optional): Whether to use mask cross entropy loss.
- Defaults to False.
-        reduction (str, optional): The method used to reduce the loss.
-            Options are "none", "mean" and "sum". Defaults to 'mean'.
- class_weight (list[float] | str, optional): Weight of each class. If in
- str format, read them from a file. Defaults to None.
- loss_weight (float, optional): Weight of the loss. Defaults to 1.0.
- """
-
- def __init__(self,
- use_sigmoid=False,
- use_mask=False,
- reduction='mean',
- class_weight=None,
- loss_weight=1.0):
- super(CrossEntropyLoss, self).__init__()
- assert (use_sigmoid is False) or (use_mask is False)
- self.use_sigmoid = use_sigmoid
- self.use_mask = use_mask
- self.reduction = reduction
- self.loss_weight = loss_weight
- self.class_weight = get_class_weight(class_weight)
-
- if self.use_sigmoid:
- self.cls_criterion = binary_cross_entropy
- elif self.use_mask:
- self.cls_criterion = mask_cross_entropy
- else:
- self.cls_criterion = cross_entropy
-
- def forward(self,
- cls_score,
- label,
- weight=None,
- avg_factor=None,
- reduction_override=None,
- **kwargs):
- """Forward function."""
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.class_weight is not None:
- class_weight = cls_score.new_tensor(self.class_weight)
- else:
- class_weight = None
- loss_cls = self.loss_weight * self.cls_criterion(
- cls_score,
- label,
- weight,
- class_weight=class_weight,
- reduction=reduction,
- avg_factor=avg_factor,
- **kwargs)
- return loss_cls
diff --git a/spaces/Pranjal2041/SemSup-XC/semsup.py b/spaces/Pranjal2041/SemSup-XC/semsup.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/ENCODEC.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/ENCODEC.md
deleted file mode 100644
index efc2bcc7ec50190b907c887b920b70fd799c6953..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/ENCODEC.md
+++ /dev/null
@@ -1,179 +0,0 @@
-# EnCodec: High Fidelity Neural Audio Compression
-
-AudioCraft provides the training code for EnCodec, a state-of-the-art deep learning
-based audio codec supporting both mono and stereo audio, presented in the
-[High Fidelity Neural Audio Compression][arxiv] paper.
-Check out our [sample page][encodec_samples].
-
-## Original EnCodec models
-
-The EnCodec models presented in High Fidelity Neural Audio Compression can be accessed
-and used with the [EnCodec repository](https://github.com/facebookresearch/encodec).
-
-**Note**: We do not guarantee compatibility between the AudioCraft and EnCodec codebases
-and released checkpoints at this stage.
-
-
-## Installation
-
-Please follow the AudioCraft installation instructions from the [README](../README.md).
-
-
-## Training
-
-The [CompressionSolver](../audiocraft/solvers/compression.py) implements the audio reconstruction
-task to train an EnCodec model. Specifically, it trains an encoder-decoder with a quantization
-bottleneck - a SEANet encoder-decoder with Residual Vector Quantization bottleneck for EnCodec -
-using a combination of objective and perceptual losses in the forms of discriminators.
-
-The default configuration matches a causal EnCodec training at a single bandwidth.
-
-### Example configuration and grids
-
-We provide sample configuration and grids for training EnCodec models.
-
-The compression configurations are defined in
-[config/solver/compression](../config/solver/compression).
-
-The example grids are available at
-[audiocraft/grids/compression](../audiocraft/grids/compression).
-
-```shell
-# base causal encodec on monophonic audio sampled at 24 khz
-dora grid compression.encodec_base_24khz
-# encodec model used for MusicGen on monophonic audio sampled at 32 khz
-dora grid compression.encodec_musicgen_32khz
-```
-
-### Training and valid stages
-
-The model is trained using a combination of objective and perceptual losses.
-More specifically, EnCodec is trained with the MS-STFT discriminator along with
-objective losses through the use of a loss balancer to effectively weight
-the different losses in an intuitive manner.
-
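-As a rough illustration of the balancing idea (the weights and names below are purely
-illustrative and are not AudioCraft's actual balancer implementation, which also rescales
-gradients per loss), balancing boils down to combining several loss terms into one scalar:
-
-```python
-import torch
-
-# Hypothetical relative weights for a few loss terms.
-weights = {"l1": 1.0, "msspec": 2.0, "adv": 4.0, "feat": 4.0}
-
-def combine_losses(losses: dict) -> torch.Tensor:
-    """Weight and sum individual loss terms into a single scalar for backprop."""
-    return sum(weights[name] * value for name, value in losses.items())
-```
-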
-### Evaluation stage
-
-Evaluation metrics for audio generation:
-* SI-SNR: Scale-Invariant Signal-to-Noise Ratio.
-* ViSQOL: Virtual Speech Quality Objective Listener.
-
-Note: Path to the ViSQOL binary (compiled with bazel) needs to be provided in
-order to run the ViSQOL metric on the reference and degraded signals.
-The metric is disabled by default.
-Please refer to the [metrics documentation](../METRICS.md) to learn more.
-
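-For reference, SI-SNR can be sketched in a few lines of PyTorch (a minimal mono,
-1D-signal version for illustration only; AudioCraft's actual metric implementation may differ):
-
-```python
-import torch
-
-def si_snr(estimate: torch.Tensor, reference: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
-    """Scale-invariant SNR in dB between two 1D signals (higher is better)."""
-    estimate = estimate - estimate.mean()
-    reference = reference - reference.mean()
-    # Project the estimate onto the reference to get the scaled target component.
-    target = (torch.dot(estimate, reference) / (reference.pow(2).sum() + eps)) * reference
-    noise = estimate - target
-    return 10 * torch.log10(target.pow(2).sum() / (noise.pow(2).sum() + eps))
-```
-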
-### Generation stage
-
-The generation stage consists of generating the reconstructed audio from samples
-with the current model. The number of samples generated and the batch size used are
-controlled by the `dataset.generate` configuration. The output path and audio formats
-are defined in the generate stage configuration.
-
-```shell
-# generate samples every 5 epochs
-dora run solver=compression/encodec_base_24khz generate.every=5
-# run with a different dset
-dora run solver=compression/encodec_base_24khz generate.path=
-# limit the number of samples or use a different batch size
-dora grid solver=compression/encodec_base_24khz dataset.generate.num_samples=10 dataset.generate.batch_size=4
-```
-
-### Playing with the model
-
-Once you have a model trained, it is possible to get the entire solver, or just
-the trained model with the following functions:
-
-```python
-from audiocraft.solvers import CompressionSolver
-
-# If you trained a custom model with signature SIG.
-model = CompressionSolver.model_from_checkpoint('//sig/SIG')
-# If you want to get one of the pretrained models with the `//pretrained/` prefix.
-model = CompressionSolver.model_from_checkpoint('//pretrained/facebook/encodec_32khz')
-# Or load from a custom checkpoint path
-model = CompressionSolver.model_from_checkpoint('/my_checkpoints/foo/bar/checkpoint.th')
-
-
-# If you only want to use a pretrained model, you can also directly get it
-# from the CompressionModel base model class.
-from audiocraft.models import CompressionModel
-
-# Here do not put the `//pretrained/` prefix!
-model = CompressionModel.get_pretrained('facebook/encodec_32khz')
-model = CompressionModel.get_pretrained('dac_44khz')
-
-# Finally, you can also retrieve the full Solver object, with its dataloader etc.
-from audiocraft import train
-from pathlib import Path
-import logging
-import os
-import sys
-
-# uncomment the following line if you want some detailed logs when loading a Solver.
-logging.basicConfig(stream=sys.stderr, level=logging.INFO)
-# You must always run the following function from the root directory.
-os.chdir(Path(train.__file__).parent.parent)
-
-
-# You can also get the full solver (only for your own experiments).
-# You can provide some overrides to the parameters to make things more convenient.
-solver = train.get_solver_from_sig('SIG', {'device': 'cpu', 'dataset': {'batch_size': 8}})
-solver.model
-solver.dataloaders
-```
-
-### Importing / Exporting models
-
-At the moment we do not have a definitive workflow for exporting EnCodec models, for
-instance to Hugging Face (HF). We are working on supporting automatic conversion between
-AudioCraft and Hugging Face implementations.
-
-We still have some support for fine-tuning an EnCodec model coming from HF in AudioCraft,
-using for instance `continue_from=//pretrained/facebook/encodec_32k`.
-
-An AudioCraft checkpoint can be exported in a more compact format (excluding the optimizer etc.)
-using `audiocraft.utils.export.export_encodec`. For instance, you could run
-
-```python
-from audiocraft.utils import export
-from audiocraft import train
-xp = train.main.get_xp_from_sig('SIG')
-export.export_encodec(
- xp.folder / 'checkpoint.th',
- '/checkpoints/my_audio_lm/compression_state_dict.bin')
-
-
-from audiocraft.models import CompressionModel
-model = CompressionModel.get_pretrained('/checkpoints/my_audio_lm/compression_state_dict.bin')
-
-from audiocraft.solvers import CompressionSolver
-# The two are strictly equivalent, but this function also supports loading from models that have not been exported yet.
-model = CompressionSolver.model_from_checkpoint('//pretrained//checkpoints/my_audio_lm/compression_state_dict.bin')
-```
-
-We will then see how to use this model as a tokenizer for MusicGen/Audio gen in the
-[MusicGen documentation](./MUSICGEN.md).
-
-### Learn more
-
-Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md).
-
-
-## Citation
-```
-@article{defossez2022highfi,
- title={High Fidelity Neural Audio Compression},
- author={Défossez, Alexandre and Copet, Jade and Synnaeve, Gabriel and Adi, Yossi},
- journal={arXiv preprint arXiv:2210.13438},
- year={2022}
-}
-```
-
-
-## License
-
-See license information in the [README](../README.md).
-
-[arxiv]: https://arxiv.org/abs/2210.13438
-[encodec_samples]: https://ai.honu.io/papers/encodec/samples.html
diff --git a/spaces/QinBingFeng/dalle-mini/html2canvas.js b/spaces/QinBingFeng/dalle-mini/html2canvas.js
deleted file mode 100644
index 96e2dc5707b1a584ff7b3b583aea7c6c18d4ea76..0000000000000000000000000000000000000000
--- a/spaces/QinBingFeng/dalle-mini/html2canvas.js
+++ /dev/null
@@ -1,7756 +0,0 @@
-/*!
- * html2canvas 1.4.1
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
-(function (global, factory) {
- typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() :
- typeof define === 'function' && define.amd ? define(factory) :
- (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory());
-}(this, (function () { 'use strict';
-
- /*! *****************************************************************************
- Copyright (c) Microsoft Corporation.
-
- Permission to use, copy, modify, and/or distribute this software for any
- purpose with or without fee is hereby granted.
-
- THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
- REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
- AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
- INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
- LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
- OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
- PERFORMANCE OF THIS SOFTWARE.
- ***************************************************************************** */
- /* global Reflect, Promise */
-
- var extendStatics = function(d, b) {
- extendStatics = Object.setPrototypeOf ||
- ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||
- function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };
- return extendStatics(d, b);
- };
-
- function __extends(d, b) {
- if (typeof b !== "function" && b !== null)
- throw new TypeError("Class extends value " + String(b) + " is not a constructor or null");
- extendStatics(d, b);
- function __() { this.constructor = d; }
- d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());
- }
-
- var __assign = function() {
- __assign = Object.assign || function __assign(t) {
- for (var s, i = 1, n = arguments.length; i < n; i++) {
- s = arguments[i];
- for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];
- }
- return t;
- };
- return __assign.apply(this, arguments);
- };
-
- function __awaiter(thisArg, _arguments, P, generator) {
- function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }
- return new (P || (P = Promise))(function (resolve, reject) {
- function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
- function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
- function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }
- step((generator = generator.apply(thisArg, _arguments || [])).next());
- });
- }
-
- function __generator(thisArg, body) {
- var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;
- return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g;
- function verb(n) { return function (v) { return step([n, v]); }; }
- function step(op) {
- if (f) throw new TypeError("Generator is already executing.");
- while (_) try {
- if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;
- if (y = 0, t) op = [op[0] & 2, t.value];
- switch (op[0]) {
- case 0: case 1: t = op; break;
- case 4: _.label++; return { value: op[1], done: false };
- case 5: _.label++; y = op[1]; op = [0]; continue;
- case 7: op = _.ops.pop(); _.trys.pop(); continue;
- default:
- if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }
- if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }
- if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }
- if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }
- if (t[2]) _.ops.pop();
- _.trys.pop(); continue;
- }
- op = body.call(thisArg, _);
- } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }
- if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };
- }
- }
-
- function __spreadArray(to, from, pack) {
- if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {
- if (ar || !(i in from)) {
- if (!ar) ar = Array.prototype.slice.call(from, 0, i);
- ar[i] = from[i];
- }
- }
- return to.concat(ar || from);
- }
-
- var Bounds = /** @class */ (function () {
- function Bounds(left, top, width, height) {
- this.left = left;
- this.top = top;
- this.width = width;
- this.height = height;
- }
- Bounds.prototype.add = function (x, y, w, h) {
- return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h);
- };
- Bounds.fromClientRect = function (context, clientRect) {
- return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height);
- };
- Bounds.fromDOMRectList = function (context, domRectList) {
- var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; });
- return domRect
- ? new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height)
- : Bounds.EMPTY;
- };
- Bounds.EMPTY = new Bounds(0, 0, 0, 0);
- return Bounds;
- }());
- var parseBounds = function (context, node) {
- return Bounds.fromClientRect(context, node.getBoundingClientRect());
- };
- var parseDocumentSize = function (document) {
- var body = document.body;
- var documentElement = document.documentElement;
- if (!body || !documentElement) {
- throw new Error("Unable to get document size");
- }
- var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth));
- var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight));
- return new Bounds(0, 0, width, height);
- };
-
- /*
- * css-line-break 2.1.0
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var toCodePoints$1 = function (str) {
- var codePoints = [];
- var i = 0;
- var length = str.length;
- while (i < length) {
- var value = str.charCodeAt(i++);
- if (value >= 0xd800 && value <= 0xdbff && i < length) {
- var extra = str.charCodeAt(i++);
- if ((extra & 0xfc00) === 0xdc00) {
- codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000);
- }
- else {
- codePoints.push(value);
- i--;
- }
- }
- else {
- codePoints.push(value);
- }
- }
- return codePoints;
- };
- var fromCodePoint$1 = function () {
- var codePoints = [];
- for (var _i = 0; _i < arguments.length; _i++) {
- codePoints[_i] = arguments[_i];
- }
- if (String.fromCodePoint) {
- return String.fromCodePoint.apply(String, codePoints);
- }
- var length = codePoints.length;
- if (!length) {
- return '';
- }
- var codeUnits = [];
- var index = -1;
- var result = '';
- while (++index < length) {
- var codePoint = codePoints[index];
- if (codePoint <= 0xffff) {
- codeUnits.push(codePoint);
- }
- else {
- codePoint -= 0x10000;
- codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00);
- }
- if (index + 1 === length || codeUnits.length > 0x4000) {
- result += String.fromCharCode.apply(String, codeUnits);
- codeUnits.length = 0;
- }
- }
- return result;
- };
- var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$2 = 0; i$2 < chars$2.length; i$2++) {
- lookup$2[chars$2.charCodeAt(i$2)] = i$2;
- }
-
- /*
- * utrie 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) {
- lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1;
- }
- var decode$1 = function (base64) {
- var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4;
- if (base64[base64.length - 1] === '=') {
- bufferLength--;
- if (base64[base64.length - 2] === '=') {
- bufferLength--;
- }
- }
- var buffer = typeof ArrayBuffer !== 'undefined' &&
- typeof Uint8Array !== 'undefined' &&
- typeof Uint8Array.prototype.slice !== 'undefined'
- ? new ArrayBuffer(bufferLength)
- : new Array(bufferLength);
- var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer);
- for (i = 0; i < len; i += 4) {
- encoded1 = lookup$1$1[base64.charCodeAt(i)];
- encoded2 = lookup$1$1[base64.charCodeAt(i + 1)];
- encoded3 = lookup$1$1[base64.charCodeAt(i + 2)];
- encoded4 = lookup$1$1[base64.charCodeAt(i + 3)];
- bytes[p++] = (encoded1 << 2) | (encoded2 >> 4);
- bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2);
- bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63);
- }
- return buffer;
- };
- var polyUint16Array$1 = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 2) {
- bytes.push((buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
- var polyUint32Array$1 = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 4) {
- bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
-
- /** Shift size for getting the index-2 table offset. */
- var UTRIE2_SHIFT_2$1 = 5;
- /** Shift size for getting the index-1 table offset. */
- var UTRIE2_SHIFT_1$1 = 6 + 5;
- /**
- * Shift size for shifting left the index array values.
- * Increases possible data size with 16-bit index values at the cost
- * of compactability.
- * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY.
- */
- var UTRIE2_INDEX_SHIFT$1 = 2;
- /**
- * Difference between the two shift sizes,
- * for getting an index-1 offset from an index-2 offset. 6=11-5
- */
- var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1;
- /**
- * The part of the index-2 table for U+D800..U+DBFF stores values for
- * lead surrogate code _units_ not code _points_.
- * Values for lead surrogate code _points_ are indexed with this portion of the table.
- * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.)
- */
- var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1;
- /** Number of entries in a data block. 32=0x20 */
- var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1;
- /** Mask for getting the lower bits for the in-data-block offset. */
- var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1;
- var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1;
- /** Count the lengths of both BMP pieces. 2080=0x820 */
- var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1;
- /**
- * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820.
- * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2.
- */
- var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1;
- var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */
- /**
- * The index-1 table, only used for supplementary code points, at offset 2112=0x840.
- * Variable length, for code points up to highStart, where the last single-value range starts.
- * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1.
- * (For 0x100000 supplementary code points U+10000..U+10ffff.)
- *
- * The part of the index-2 table for supplementary code points starts
- * after this index-1 table.
- *
- * Both the index-1 table and the following part of the index-2 table
- * are omitted completely if there is only BMP data.
- */
- var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1;
- /**
- * Number of index-1 entries for the BMP. 32=0x20
- * This part of the index-1 table is omitted from the serialized form.
- */
- var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1;
- /** Number of entries in an index-2 block. 64=0x40 */
- var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1;
- /** Mask for getting the lower bits for the in-index-2-block offset. */
- var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1;
- var slice16$1 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint16Array(Array.prototype.slice.call(view, start, end));
- };
- var slice32$1 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint32Array(Array.prototype.slice.call(view, start, end));
- };
- var createTrieFromBase64$1 = function (base64, _byteLength) {
- var buffer = decode$1(base64);
- var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer);
- var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer);
- var headerLength = 24;
- var index = slice16$1(view16, headerLength / 2, view32[4] / 2);
- var data = view32[5] === 2
- ? slice16$1(view16, (headerLength + view32[4]) / 2)
- : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4));
- return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data);
- };
- var Trie$1 = /** @class */ (function () {
- function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) {
- this.initialValue = initialValue;
- this.errorValue = errorValue;
- this.highStart = highStart;
- this.highValueIndex = highValueIndex;
- this.index = index;
- this.data = data;
- }
- /**
- * Get the value for a code point as stored in the Trie.
- *
- * @param codePoint the code point
- * @return the value
- */
- Trie.prototype.get = function (codePoint) {
- var ix;
- if (codePoint >= 0) {
- if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) {
- // Ordinary BMP code point, excluding leading surrogates.
- // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index.
- // 16 bit data is stored in the index array itself.
- ix = this.index[codePoint >> UTRIE2_SHIFT_2$1];
- ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1);
- return this.data[ix];
- }
- if (codePoint <= 0xffff) {
- // Lead Surrogate Code Point. A Separate index section is stored for
- // lead surrogate code units and code points.
- // The main index has the code unit data.
- // For this function, we need the code point data.
- // Note: this expression could be refactored for slightly improved efficiency, but
- // surrogate code points will be so rare in practice that it's not worth it.
- ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)];
- ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1);
- return this.data[ix];
- }
- if (codePoint < this.highStart) {
- // Supplemental code point, use two-level lookup.
- ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1);
- ix = this.index[ix];
- ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1;
- ix = this.index[ix];
- ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1);
- return this.data[ix];
- }
- if (codePoint <= 0x10ffff) {
- return this.data[this.highValueIndex];
- }
- }
- // Fall through. The code point is outside of the legal range of 0..0x10ffff.
- return this.errorValue;
- };
- return Trie;
- }());
-
- /*
- * base64-arraybuffer 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$3 = 0; i$3 < chars$3.length; i$3++) {
- lookup$3[chars$3.charCodeAt(i$3)] = i$3;
- }
-
- var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHL
AcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQow
ADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsByw
HLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4AHgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASw
BQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLA
EsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcAFwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsA
KwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgA
XABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAeAB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB
4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AH
gBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBX
AFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAA0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACs
AKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKw
ArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEA
CsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAFAAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0A
HQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA==';
-
- var LETTER_NUMBER_MODIFIER = 50;
- // Non-tailorable Line Breaking Classes
- var BK = 1; // Cause a line break (after)
- var CR$1 = 2; // Cause a line break (after), except between CR and LF
- var LF$1 = 3; // Cause a line break (after)
- var CM = 4; // Prohibit a line break between the character and the preceding character
- var NL = 5; // Cause a line break (after)
- var WJ = 7; // Prohibit line breaks before and after
- var ZW = 8; // Provide a break opportunity
- var GL = 9; // Prohibit line breaks before and after
- var SP = 10; // Enable indirect line breaks
- var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences
- // Break Opportunities
- var B2 = 12; // Provide a line break opportunity before and after the character
- var BA = 13; // Generally provide a line break opportunity after the character
- var BB = 14; // Generally provide a line break opportunity before the character
- var HY = 15; // Provide a line break opportunity after the character, except in numeric context
- var CB = 16; // Provide a line break opportunity contingent on additional information
- // Characters Prohibiting Certain Breaks
- var CL = 17; // Prohibit line breaks before
- var CP = 18; // Prohibit line breaks before
- var EX = 19; // Prohibit line breaks before
- var IN = 20; // Allow only indirect line breaks between pairs
- var NS = 21; // Allow only indirect line breaks before
- var OP = 22; // Prohibit line breaks after
- var QU = 23; // Act like they are both opening and closing
- // Numeric Context
- var IS = 24; // Prevent breaks after any and before numeric
- var NU = 25; // Form numeric expressions for line breaking purposes
- var PO = 26; // Do not break following a numeric expression
- var PR = 27; // Do not break in front of a numeric expression
-    var SY = 28; // Prevent a break before, and allow a break after
- // Other Characters
-    var AI = 29; // Act like AL when the resolved EAW is N; otherwise, act as ID
- var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters
- var CJ = 31; // Treat as NS or ID for strict or normal breaking.
- var EB = 32; // Do not break from following Emoji Modifier
- var EM = 33; // Do not break from preceding Emoji Base
- var H2 = 34; // Form Korean syllable blocks
- var H3 = 35; // Form Korean syllable blocks
- var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic
-    var ID = 37; // Break before or after, except in some numeric context
- var JL = 38; // Form Korean syllable blocks
- var JV = 39; // Form Korean syllable blocks
- var JT = 40; // Form Korean syllable blocks
-    var RI$1 = 41; // Keep pairs together. For pairs, break before and after other classes
- var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis
- var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions
- var ea_OP = [0x2329, 0xff08];
- var BREAK_MANDATORY = '!';
- var BREAK_NOT_ALLOWED$1 = '×';
- var BREAK_ALLOWED$1 = '÷';
- var UnicodeTrie$1 = createTrieFromBase64$1(base64$1);
- var ALPHABETICS = [AL, HL];
- var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL];
- var SPACE$1 = [SP, ZW];
- var PREFIX_POSTFIX = [PR, PO];
- var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1);
- var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3];
- var HYPHEN = [HY, BA];
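// [Editor's note] Minimal sketch, not part of the original source: how the constants above are
// consumed. UnicodeTrie$1.get(codePoint) yields a UAX #14 line-breaking class; Letter/Number code
// points are stored offset by LETTER_NUMBER_MODIFIER so callers can also recover that category.
var exampleRaw = UnicodeTrie$1.get(0x41); // U+0041 LATIN CAPITAL LETTER A
var exampleIsLetterOrNumber = exampleRaw > LETTER_NUMBER_MODIFIER;
var exampleClass = exampleIsLetterOrNumber ? exampleRaw - LETTER_NUMBER_MODIFIER : exampleRaw; // expected AL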
- var codePointsToCharacterClasses = function (codePoints, lineBreak) {
- if (lineBreak === void 0) { lineBreak = 'strict'; }
- var types = [];
- var indices = [];
- var categories = [];
- codePoints.forEach(function (codePoint, index) {
- var classType = UnicodeTrie$1.get(codePoint);
- if (classType > LETTER_NUMBER_MODIFIER) {
- categories.push(true);
- classType -= LETTER_NUMBER_MODIFIER;
- }
- else {
- categories.push(false);
- }
- if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) {
- // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0
- if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) {
- indices.push(index);
- return types.push(CB);
- }
- }
- if (classType === CM || classType === ZWJ$1) {
- // LB10 Treat any remaining combining mark or ZWJ as AL.
- if (index === 0) {
- indices.push(index);
- return types.push(AL);
- }
- // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of
- // the base character in all of the following rules. Treat ZWJ as if it were CM.
- var prev = types[index - 1];
- if (LINE_BREAKS.indexOf(prev) === -1) {
- indices.push(indices[index - 1]);
- return types.push(prev);
- }
- indices.push(index);
- return types.push(AL);
- }
- indices.push(index);
- if (classType === CJ) {
- return types.push(lineBreak === 'strict' ? NS : ID);
- }
- if (classType === SA) {
- return types.push(AL);
- }
- if (classType === AI) {
- return types.push(AL);
- }
- // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL
- // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised
- // to take into account the actual line breaking properties for these characters.
- if (classType === XX) {
- if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) {
- return types.push(ID);
- }
- else {
- return types.push(AL);
- }
- }
- types.push(classType);
- });
- return [indices, types, categories];
- };
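// [Editor's note] Illustrative call, not part of the original source: classify the code points of
// a short string under the default 'strict' CSS line-break mode. The helper returns the original
// indices, the resolved UAX #14 classes, and the Letter/Number category flags as parallel arrays.
var exampleCodePoints = Array.from('a-b').map(function (c) { return c.codePointAt(0); });
var exampleClassification = codePointsToCharacterClasses(exampleCodePoints, 'strict');
// exampleClassification[1] is expected to hold [AL, HY, AL] for 'a', '-', 'b'.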
- var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) {
- var current = classTypes[currentIndex];
- if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) {
- var i = currentIndex;
- while (i <= classTypes.length) {
- i++;
- var next = classTypes[i];
- if (next === b) {
- return true;
- }
- if (next !== SP) {
- break;
- }
- }
- }
- if (current === SP) {
- var i = currentIndex;
- while (i > 0) {
- i--;
- var prev = classTypes[i];
- if (Array.isArray(a) ? a.indexOf(prev) !== -1 : a === prev) {
- var n = currentIndex;
- while (n <= classTypes.length) {
- n++;
- var next = classTypes[n];
- if (next === b) {
- return true;
- }
- if (next !== SP) {
- break;
- }
- }
- }
- if (prev !== SP) {
- break;
- }
- }
- }
- return false;
- };
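// [Editor's note] Hypothetical illustration, not part of the original source, of the "A SP* B"
// pattern this helper checks (used by LB15-LB17 below): with classes [QU, SP, SP, OP], the break
// candidate before the OP is suppressed because QU and OP are adjacent once spaces are ignored.
var exampleClasses = [QU, SP, SP, OP];
var exampleAdjacent = isAdjacentWithSpaceIgnored(QU, OP, 2, exampleClasses); // true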
- var previousNonSpaceClassType = function (currentIndex, classTypes) {
- var i = currentIndex;
- while (i >= 0) {
- var type = classTypes[i];
- if (type === SP) {
- i--;
- }
- else {
- return type;
- }
- }
- return 0;
- };
- var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) {
- if (indicies[index] === 0) {
- return BREAK_NOT_ALLOWED$1;
- }
- var currentIndex = index - 1;
- if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) {
- return BREAK_NOT_ALLOWED$1;
- }
- var beforeIndex = currentIndex - 1;
- var afterIndex = currentIndex + 1;
- var current = classTypes[currentIndex];
- // LB4 Always break after hard line breaks.
- // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks.
- var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0;
- var next = classTypes[afterIndex];
- if (current === CR$1 && next === LF$1) {
- return BREAK_NOT_ALLOWED$1;
- }
- if (HARD_LINE_BREAKS.indexOf(current) !== -1) {
- return BREAK_MANDATORY;
- }
- // LB6 Do not break before hard line breaks.
- if (HARD_LINE_BREAKS.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB7 Do not break before spaces or zero width space.
- if (SPACE$1.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB8 Break before any character following a zero-width space, even if one or more spaces intervene.
- if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) {
- return BREAK_ALLOWED$1;
- }
- // LB8a Do not break after a zero width joiner.
- if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // zwj emojis
- if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB11 Do not break before or after Word joiner and related characters.
- if (current === WJ || next === WJ) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB12 Do not break after NBSP and related characters.
- if (current === GL) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB12a Do not break before NBSP and related characters, except after spaces and hyphens.
- if ([SP, BA, HY].indexOf(current) === -1 && next === GL) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces.
- if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB14 Do not break after ‘[’, even after spaces.
- if (previousNonSpaceClassType(currentIndex, classTypes) === OP) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB15 Do not break within ‘”[’, even with intervening spaces.
- if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces.
- if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB17 Do not break within ‘——’, even with intervening spaces.
- if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB18 Break after spaces.
- if (current === SP) {
- return BREAK_ALLOWED$1;
- }
- // LB19 Do not break before or after quotation marks, such as ‘ ” ’.
- if (current === QU || next === QU) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB20 Break before and after unresolved CB.
- if (next === CB || current === CB) {
- return BREAK_ALLOWED$1;
- }
- // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents.
- if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB21a Don't break after Hebrew + Hyphen.
- if (before === HL && HYPHEN.indexOf(current) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB21b Don’t break between Solidus and Hebrew letters.
- if (current === SY && next === HL) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB22 Do not break before ellipsis.
- if (next === IN) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB23 Do not break between digits and letters.
- if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes.
- if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) ||
- ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix.
- if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) ||
- (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB25 Do not break between the following pairs of classes relevant to numbers:
- if (
- // (PR | PO) × ( OP | HY )? NU
- ([PR, PO].indexOf(current) !== -1 &&
- (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) ||
- // ( OP | HY ) × NU
- ([OP, HY].indexOf(current) !== -1 && next === NU) ||
- // NU × (NU | SY | IS)
- (current === NU && [NU, SY, IS].indexOf(next) !== -1)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP)
- if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) {
- var prevIndex = currentIndex;
- while (prevIndex >= 0) {
- var type = classTypes[prevIndex];
- if (type === NU) {
- return BREAK_NOT_ALLOWED$1;
- }
- else if ([SY, IS].indexOf(type) !== -1) {
- prevIndex--;
- }
- else {
- break;
- }
- }
- }
-        // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)
- if ([PR, PO].indexOf(next) !== -1) {
- var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex;
- while (prevIndex >= 0) {
- var type = classTypes[prevIndex];
- if (type === NU) {
- return BREAK_NOT_ALLOWED$1;
- }
- else if ([SY, IS].indexOf(type) !== -1) {
- prevIndex--;
- }
- else {
- break;
- }
- }
- }
- // LB26 Do not break a Korean syllable.
- if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) ||
- ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) ||
- ([JT, H3].indexOf(current) !== -1 && next === JT)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB27 Treat a Korean Syllable Block the same as ID.
- if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) ||
- (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB28 Do not break between alphabetics (“at”).
- if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”).
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses.
- if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 &&
- next === OP &&
- ea_OP.indexOf(codePoints[afterIndex]) === -1) ||
- (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) {
- return BREAK_NOT_ALLOWED$1;
- }
- // LB30a Break between two regional indicator symbols if and only if there are an even number of regional
- // indicators preceding the position of the break.
- if (current === RI$1 && next === RI$1) {
- var i = indicies[currentIndex];
- var count = 1;
- while (i > 0) {
- i--;
- if (classTypes[i] === RI$1) {
- count++;
- }
- else {
- break;
- }
- }
- if (count % 2 !== 0) {
- return BREAK_NOT_ALLOWED$1;
- }
- }
- // LB30b Do not break between an emoji base and an emoji modifier.
- if (current === EB && next === EM) {
- return BREAK_NOT_ALLOWED$1;
- }
- return BREAK_ALLOWED$1;
- };
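-    // Resolves line-break classes for the code points and applies the CSS word-break overrides:
-    // 'break-all'/'break-word' remap letters and numbers to the ideographic class so breaks are allowed
-    // anywhere, while 'keep-all' marks CJK letter/number positions as forbidden break points.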
- var cssFormattedClasses = function (codePoints, options) {
- if (!options) {
- options = { lineBreak: 'normal', wordBreak: 'normal' };
- }
- var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2];
- if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') {
- classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); });
- }
- var forbiddenBreakpoints = options.wordBreak === 'keep-all'
- ? isLetterNumber.map(function (letterNumber, i) {
- return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff;
- })
- : undefined;
- return [indicies, classTypes, forbiddenBreakpoints];
- };
- var Break = /** @class */ (function () {
- function Break(codePoints, lineBreak, start, end) {
- this.codePoints = codePoints;
- this.required = lineBreak === BREAK_MANDATORY;
- this.start = start;
- this.end = end;
- }
- Break.prototype.slice = function () {
- return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end));
- };
- return Break;
- }());
- var LineBreaker = function (str, options) {
- var codePoints = toCodePoints$1(str);
- var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2];
- var length = codePoints.length;
- var lastEnd = 0;
- var nextIndex = 0;
- return {
- next: function () {
- if (nextIndex >= length) {
- return { done: true, value: null };
- }
- var lineBreak = BREAK_NOT_ALLOWED$1;
- while (nextIndex < length &&
- (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) ===
- BREAK_NOT_ALLOWED$1) { }
- if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) {
- var value = new Break(codePoints, lineBreak, lastEnd, nextIndex);
- lastEnd = nextIndex;
- return { value: value, done: false };
- }
- return { done: true, value: null };
- },
- };
- };
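-    // Example usage (illustrative): the breaker is a plain iterator whose next() yields Break objects
-    // covering each segment up to the next break opportunity, e.g.
-    //   var breaker = LineBreaker('line breaking example', { lineBreak: 'normal', wordBreak: 'normal' });
-    //   for (var result = breaker.next(); !result.done; result = breaker.next()) { result.value.slice(); }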
-
- // https://www.w3.org/TR/css-syntax-3
- var FLAG_UNRESTRICTED = 1 << 0;
- var FLAG_ID = 1 << 1;
- var FLAG_INTEGER = 1 << 2;
- var FLAG_NUMBER = 1 << 3;
- var LINE_FEED = 0x000a;
- var SOLIDUS = 0x002f;
- var REVERSE_SOLIDUS = 0x005c;
- var CHARACTER_TABULATION = 0x0009;
- var SPACE = 0x0020;
- var QUOTATION_MARK = 0x0022;
- var EQUALS_SIGN = 0x003d;
- var NUMBER_SIGN = 0x0023;
- var DOLLAR_SIGN = 0x0024;
- var PERCENTAGE_SIGN = 0x0025;
- var APOSTROPHE = 0x0027;
- var LEFT_PARENTHESIS = 0x0028;
- var RIGHT_PARENTHESIS = 0x0029;
- var LOW_LINE = 0x005f;
- var HYPHEN_MINUS = 0x002d;
- var EXCLAMATION_MARK = 0x0021;
- var LESS_THAN_SIGN = 0x003c;
- var GREATER_THAN_SIGN = 0x003e;
- var COMMERCIAL_AT = 0x0040;
- var LEFT_SQUARE_BRACKET = 0x005b;
- var RIGHT_SQUARE_BRACKET = 0x005d;
-    var CIRCUMFLEX_ACCENT = 0x005e;
- var LEFT_CURLY_BRACKET = 0x007b;
- var QUESTION_MARK = 0x003f;
- var RIGHT_CURLY_BRACKET = 0x007d;
- var VERTICAL_LINE = 0x007c;
- var TILDE = 0x007e;
- var CONTROL = 0x0080;
- var REPLACEMENT_CHARACTER = 0xfffd;
- var ASTERISK = 0x002a;
- var PLUS_SIGN = 0x002b;
- var COMMA = 0x002c;
- var COLON = 0x003a;
- var SEMICOLON = 0x003b;
- var FULL_STOP = 0x002e;
- var NULL = 0x0000;
- var BACKSPACE = 0x0008;
- var LINE_TABULATION = 0x000b;
- var SHIFT_OUT = 0x000e;
- var INFORMATION_SEPARATOR_ONE = 0x001f;
- var DELETE = 0x007f;
- var EOF = -1;
- var ZERO = 0x0030;
- var a = 0x0061;
- var e = 0x0065;
- var f = 0x0066;
- var u = 0x0075;
- var z = 0x007a;
- var A = 0x0041;
- var E = 0x0045;
- var F = 0x0046;
- var U = 0x0055;
- var Z = 0x005a;
- var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; };
- var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; };
- var isHex = function (codePoint) {
- return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f);
- };
- var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; };
- var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; };
- var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); };
- var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; };
- var isWhiteSpace = function (codePoint) {
- return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE;
- };
- var isNameStartCodePoint = function (codePoint) {
- return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE;
- };
- var isNameCodePoint = function (codePoint) {
- return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS;
- };
- var isNonPrintableCodePoint = function (codePoint) {
- return ((codePoint >= NULL && codePoint <= BACKSPACE) ||
- codePoint === LINE_TABULATION ||
- (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) ||
- codePoint === DELETE);
- };
- var isValidEscape = function (c1, c2) {
- if (c1 !== REVERSE_SOLIDUS) {
- return false;
- }
- return c2 !== LINE_FEED;
- };
- var isIdentifierStart = function (c1, c2, c3) {
- if (c1 === HYPHEN_MINUS) {
- return isNameStartCodePoint(c2) || isValidEscape(c2, c3);
- }
- else if (isNameStartCodePoint(c1)) {
- return true;
- }
- else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) {
- return true;
- }
- return false;
- };
- var isNumberStart = function (c1, c2, c3) {
- if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) {
- if (isDigit(c2)) {
- return true;
- }
- return c2 === FULL_STOP && isDigit(c3);
- }
- if (c1 === FULL_STOP) {
- return isDigit(c2);
- }
- return isDigit(c1);
- };
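-    // Converts a numeric token's code points (optional sign, integer part, fraction and exponent) into a
-    // JavaScript number, following the css-syntax-3 "convert a string to a number" algorithm.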
- var stringToNumber = function (codePoints) {
- var c = 0;
- var sign = 1;
- if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) {
- if (codePoints[c] === HYPHEN_MINUS) {
- sign = -1;
- }
- c++;
- }
- var integers = [];
- while (isDigit(codePoints[c])) {
- integers.push(codePoints[c++]);
- }
- var int = integers.length ? parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0;
- if (codePoints[c] === FULL_STOP) {
- c++;
- }
- var fraction = [];
- while (isDigit(codePoints[c])) {
- fraction.push(codePoints[c++]);
- }
- var fracd = fraction.length;
- var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0;
- if (codePoints[c] === E || codePoints[c] === e) {
- c++;
- }
- var expsign = 1;
- if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) {
- if (codePoints[c] === HYPHEN_MINUS) {
- expsign = -1;
- }
- c++;
- }
- var exponent = [];
- while (isDigit(codePoints[c])) {
- exponent.push(codePoints[c++]);
- }
- var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0;
- return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp);
- };
- var LEFT_PARENTHESIS_TOKEN = {
- type: 2 /* LEFT_PARENTHESIS_TOKEN */
- };
- var RIGHT_PARENTHESIS_TOKEN = {
- type: 3 /* RIGHT_PARENTHESIS_TOKEN */
- };
- var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ };
- var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ };
- var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ };
- var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ };
- var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ };
- var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ };
- var LEFT_CURLY_BRACKET_TOKEN = {
- type: 11 /* LEFT_CURLY_BRACKET_TOKEN */
- };
- var RIGHT_CURLY_BRACKET_TOKEN = {
- type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */
- };
- var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ };
- var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ };
- var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ };
- var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ };
- var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ };
- var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ };
- var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ };
- var LEFT_SQUARE_BRACKET_TOKEN = {
- type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */
- };
- var RIGHT_SQUARE_BRACKET_TOKEN = {
- type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */
- };
- var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ };
- var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ };
- var Tokenizer = /** @class */ (function () {
- function Tokenizer() {
- this._value = [];
- }
- Tokenizer.prototype.write = function (chunk) {
- this._value = this._value.concat(toCodePoints$1(chunk));
- };
- Tokenizer.prototype.read = function () {
- var tokens = [];
- var token = this.consumeToken();
- while (token !== EOF_TOKEN) {
- tokens.push(token);
- token = this.consumeToken();
- }
- return tokens;
- };
- Tokenizer.prototype.consumeToken = function () {
- var codePoint = this.consumeCodePoint();
- switch (codePoint) {
- case QUOTATION_MARK:
- return this.consumeStringToken(QUOTATION_MARK);
- case NUMBER_SIGN:
- var c1 = this.peekCodePoint(0);
- var c2 = this.peekCodePoint(1);
- var c3 = this.peekCodePoint(2);
- if (isNameCodePoint(c1) || isValidEscape(c2, c3)) {
- var flags = isIdentifierStart(c1, c2, c3) ? FLAG_ID : FLAG_UNRESTRICTED;
- var value = this.consumeName();
- return { type: 5 /* HASH_TOKEN */, value: value, flags: flags };
- }
- break;
- case DOLLAR_SIGN:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return SUFFIX_MATCH_TOKEN;
- }
- break;
- case APOSTROPHE:
- return this.consumeStringToken(APOSTROPHE);
- case LEFT_PARENTHESIS:
- return LEFT_PARENTHESIS_TOKEN;
- case RIGHT_PARENTHESIS:
- return RIGHT_PARENTHESIS_TOKEN;
- case ASTERISK:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return SUBSTRING_MATCH_TOKEN;
- }
- break;
- case PLUS_SIGN:
- if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- break;
- case COMMA:
- return COMMA_TOKEN;
- case HYPHEN_MINUS:
- var e1 = codePoint;
- var e2 = this.peekCodePoint(0);
- var e3 = this.peekCodePoint(1);
- if (isNumberStart(e1, e2, e3)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- if (isIdentifierStart(e1, e2, e3)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- }
- if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) {
- this.consumeCodePoint();
- this.consumeCodePoint();
- return CDC_TOKEN;
- }
- break;
- case FULL_STOP:
- if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- break;
- case SOLIDUS:
- if (this.peekCodePoint(0) === ASTERISK) {
- this.consumeCodePoint();
- while (true) {
- var c = this.consumeCodePoint();
- if (c === ASTERISK) {
- c = this.consumeCodePoint();
- if (c === SOLIDUS) {
- return this.consumeToken();
- }
- }
- if (c === EOF) {
- return this.consumeToken();
- }
- }
- }
- break;
- case COLON:
- return COLON_TOKEN;
- case SEMICOLON:
- return SEMICOLON_TOKEN;
- case LESS_THAN_SIGN:
- if (this.peekCodePoint(0) === EXCLAMATION_MARK &&
- this.peekCodePoint(1) === HYPHEN_MINUS &&
- this.peekCodePoint(2) === HYPHEN_MINUS) {
- this.consumeCodePoint();
- this.consumeCodePoint();
- return CDO_TOKEN;
- }
- break;
- case COMMERCIAL_AT:
- var a1 = this.peekCodePoint(0);
- var a2 = this.peekCodePoint(1);
- var a3 = this.peekCodePoint(2);
- if (isIdentifierStart(a1, a2, a3)) {
- var value = this.consumeName();
- return { type: 7 /* AT_KEYWORD_TOKEN */, value: value };
- }
- break;
- case LEFT_SQUARE_BRACKET:
- return LEFT_SQUARE_BRACKET_TOKEN;
- case REVERSE_SOLIDUS:
- if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- }
- break;
- case RIGHT_SQUARE_BRACKET:
- return RIGHT_SQUARE_BRACKET_TOKEN;
- case CIRCUMFLEX_ACCENT:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return PREFIX_MATCH_TOKEN;
- }
- break;
- case LEFT_CURLY_BRACKET:
- return LEFT_CURLY_BRACKET_TOKEN;
- case RIGHT_CURLY_BRACKET:
- return RIGHT_CURLY_BRACKET_TOKEN;
- case u:
- case U:
- var u1 = this.peekCodePoint(0);
- var u2 = this.peekCodePoint(1);
- if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) {
- this.consumeCodePoint();
- this.consumeUnicodeRangeToken();
- }
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- case VERTICAL_LINE:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return DASH_MATCH_TOKEN;
- }
- if (this.peekCodePoint(0) === VERTICAL_LINE) {
- this.consumeCodePoint();
- return COLUMN_TOKEN;
- }
- break;
- case TILDE:
- if (this.peekCodePoint(0) === EQUALS_SIGN) {
- this.consumeCodePoint();
- return INCLUDE_MATCH_TOKEN;
- }
- break;
- case EOF:
- return EOF_TOKEN;
- }
- if (isWhiteSpace(codePoint)) {
- this.consumeWhiteSpace();
- return WHITESPACE_TOKEN;
- }
- if (isDigit(codePoint)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeNumericToken();
- }
- if (isNameStartCodePoint(codePoint)) {
- this.reconsumeCodePoint(codePoint);
- return this.consumeIdentLikeToken();
- }
- return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) };
- };
- Tokenizer.prototype.consumeCodePoint = function () {
- var value = this._value.shift();
- return typeof value === 'undefined' ? -1 : value;
- };
- Tokenizer.prototype.reconsumeCodePoint = function (codePoint) {
- this._value.unshift(codePoint);
- };
- Tokenizer.prototype.peekCodePoint = function (delta) {
- if (delta >= this._value.length) {
- return -1;
- }
- return this._value[delta];
- };
- Tokenizer.prototype.consumeUnicodeRangeToken = function () {
- var digits = [];
- var codePoint = this.consumeCodePoint();
- while (isHex(codePoint) && digits.length < 6) {
- digits.push(codePoint);
- codePoint = this.consumeCodePoint();
- }
- var questionMarks = false;
- while (codePoint === QUESTION_MARK && digits.length < 6) {
- digits.push(codePoint);
- codePoint = this.consumeCodePoint();
- questionMarks = true;
- }
- if (questionMarks) {
- var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16);
- var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16);
- return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end };
- }
- var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16);
- if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) {
- this.consumeCodePoint();
- codePoint = this.consumeCodePoint();
- var endDigits = [];
- while (isHex(codePoint) && endDigits.length < 6) {
- endDigits.push(codePoint);
- codePoint = this.consumeCodePoint();
- }
- var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16);
- return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end };
- }
- else {
- return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start };
- }
- };
- Tokenizer.prototype.consumeIdentLikeToken = function () {
- var value = this.consumeName();
- if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) {
- this.consumeCodePoint();
- return this.consumeUrlToken();
- }
- else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) {
- this.consumeCodePoint();
- return { type: 19 /* FUNCTION_TOKEN */, value: value };
- }
- return { type: 20 /* IDENT_TOKEN */, value: value };
- };
- Tokenizer.prototype.consumeUrlToken = function () {
- var value = [];
- this.consumeWhiteSpace();
- if (this.peekCodePoint(0) === EOF) {
- return { type: 22 /* URL_TOKEN */, value: '' };
- }
- var next = this.peekCodePoint(0);
- if (next === APOSTROPHE || next === QUOTATION_MARK) {
- var stringToken = this.consumeStringToken(this.consumeCodePoint());
- if (stringToken.type === 0 /* STRING_TOKEN */) {
- this.consumeWhiteSpace();
- if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) {
- this.consumeCodePoint();
- return { type: 22 /* URL_TOKEN */, value: stringToken.value };
- }
- }
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- while (true) {
- var codePoint = this.consumeCodePoint();
- if (codePoint === EOF || codePoint === RIGHT_PARENTHESIS) {
- return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) };
- }
- else if (isWhiteSpace(codePoint)) {
- this.consumeWhiteSpace();
- if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) {
- this.consumeCodePoint();
- return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) };
- }
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- else if (codePoint === QUOTATION_MARK ||
- codePoint === APOSTROPHE ||
- codePoint === LEFT_PARENTHESIS ||
- isNonPrintableCodePoint(codePoint)) {
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- else if (codePoint === REVERSE_SOLIDUS) {
- if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- value.push(this.consumeEscapedCodePoint());
- }
- else {
- this.consumeBadUrlRemnants();
- return BAD_URL_TOKEN;
- }
- }
- else {
- value.push(codePoint);
- }
- }
- };
- Tokenizer.prototype.consumeWhiteSpace = function () {
- while (isWhiteSpace(this.peekCodePoint(0))) {
- this.consumeCodePoint();
- }
- };
- Tokenizer.prototype.consumeBadUrlRemnants = function () {
- while (true) {
- var codePoint = this.consumeCodePoint();
- if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) {
- return;
- }
- if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- this.consumeEscapedCodePoint();
- }
- }
- };
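-    // Builds a string from the first `count` buffered code points, splicing them out in chunks so that
-    // fromCodePoint$1.apply never receives more than SLICE_STACK_SIZE arguments, then discards the code
-    // point immediately after the slice (the quote or backslash that ended it).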
- Tokenizer.prototype.consumeStringSlice = function (count) {
- var SLICE_STACK_SIZE = 50000;
- var value = '';
- while (count > 0) {
- var amount = Math.min(SLICE_STACK_SIZE, count);
- value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount));
- count -= amount;
- }
- this._value.shift();
- return value;
- };
- Tokenizer.prototype.consumeStringToken = function (endingCodePoint) {
- var value = '';
- var i = 0;
- do {
- var codePoint = this._value[i];
- if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) {
- value += this.consumeStringSlice(i);
- return { type: 0 /* STRING_TOKEN */, value: value };
- }
- if (codePoint === LINE_FEED) {
- this._value.splice(0, i);
- return BAD_STRING_TOKEN;
- }
- if (codePoint === REVERSE_SOLIDUS) {
- var next = this._value[i + 1];
- if (next !== EOF && next !== undefined) {
- if (next === LINE_FEED) {
- value += this.consumeStringSlice(i);
- i = -1;
- this._value.shift();
- }
- else if (isValidEscape(codePoint, next)) {
- value += this.consumeStringSlice(i);
- value += fromCodePoint$1(this.consumeEscapedCodePoint());
- i = -1;
- }
- }
- }
- i++;
- } while (true);
- };
- Tokenizer.prototype.consumeNumber = function () {
- var repr = [];
- var type = FLAG_INTEGER;
- var c1 = this.peekCodePoint(0);
- if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) {
- repr.push(this.consumeCodePoint());
- }
- while (isDigit(this.peekCodePoint(0))) {
- repr.push(this.consumeCodePoint());
- }
- c1 = this.peekCodePoint(0);
- var c2 = this.peekCodePoint(1);
- if (c1 === FULL_STOP && isDigit(c2)) {
- repr.push(this.consumeCodePoint(), this.consumeCodePoint());
- type = FLAG_NUMBER;
- while (isDigit(this.peekCodePoint(0))) {
- repr.push(this.consumeCodePoint());
- }
- }
- c1 = this.peekCodePoint(0);
- c2 = this.peekCodePoint(1);
- var c3 = this.peekCodePoint(2);
- if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) {
- repr.push(this.consumeCodePoint(), this.consumeCodePoint());
- type = FLAG_NUMBER;
- while (isDigit(this.peekCodePoint(0))) {
- repr.push(this.consumeCodePoint());
- }
- }
- return [stringToNumber(repr), type];
- };
- Tokenizer.prototype.consumeNumericToken = function () {
- var _a = this.consumeNumber(), number = _a[0], flags = _a[1];
- var c1 = this.peekCodePoint(0);
- var c2 = this.peekCodePoint(1);
- var c3 = this.peekCodePoint(2);
- if (isIdentifierStart(c1, c2, c3)) {
- var unit = this.consumeName();
- return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit };
- }
- if (c1 === PERCENTAGE_SIGN) {
- this.consumeCodePoint();
- return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags };
- }
- return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags };
- };
- Tokenizer.prototype.consumeEscapedCodePoint = function () {
- var codePoint = this.consumeCodePoint();
- if (isHex(codePoint)) {
- var hex = fromCodePoint$1(codePoint);
- while (isHex(this.peekCodePoint(0)) && hex.length < 6) {
- hex += fromCodePoint$1(this.consumeCodePoint());
- }
- if (isWhiteSpace(this.peekCodePoint(0))) {
- this.consumeCodePoint();
- }
- var hexCodePoint = parseInt(hex, 16);
- if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) {
- return REPLACEMENT_CHARACTER;
- }
- return hexCodePoint;
- }
- if (codePoint === EOF) {
- return REPLACEMENT_CHARACTER;
- }
- return codePoint;
- };
- Tokenizer.prototype.consumeName = function () {
- var result = '';
- while (true) {
- var codePoint = this.consumeCodePoint();
- if (isNameCodePoint(codePoint)) {
- result += fromCodePoint$1(codePoint);
- }
- else if (isValidEscape(codePoint, this.peekCodePoint(0))) {
- result += fromCodePoint$1(this.consumeEscapedCodePoint());
- }
- else {
- this.reconsumeCodePoint(codePoint);
- return result;
- }
- }
- };
- return Tokenizer;
- }());
-
- var Parser = /** @class */ (function () {
- function Parser(tokens) {
- this._tokens = tokens;
- }
- Parser.create = function (value) {
- var tokenizer = new Tokenizer();
- tokenizer.write(value);
- return new Parser(tokenizer.read());
- };
- Parser.parseValue = function (value) {
- return Parser.create(value).parseComponentValue();
- };
- Parser.parseValues = function (value) {
- return Parser.create(value).parseComponentValues();
- };
- Parser.prototype.parseComponentValue = function () {
- var token = this.consumeToken();
- while (token.type === 31 /* WHITESPACE_TOKEN */) {
- token = this.consumeToken();
- }
- if (token.type === 32 /* EOF_TOKEN */) {
- throw new SyntaxError("Error parsing CSS component value, unexpected EOF");
- }
- this.reconsumeToken(token);
- var value = this.consumeComponentValue();
- do {
- token = this.consumeToken();
- } while (token.type === 31 /* WHITESPACE_TOKEN */);
- if (token.type === 32 /* EOF_TOKEN */) {
- return value;
- }
- throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one");
- };
- Parser.prototype.parseComponentValues = function () {
- var values = [];
- while (true) {
- var value = this.consumeComponentValue();
- if (value.type === 32 /* EOF_TOKEN */) {
- return values;
- }
- values.push(value);
- }
- };
- Parser.prototype.consumeComponentValue = function () {
- var token = this.consumeToken();
- switch (token.type) {
- case 11 /* LEFT_CURLY_BRACKET_TOKEN */:
- case 28 /* LEFT_SQUARE_BRACKET_TOKEN */:
- case 2 /* LEFT_PARENTHESIS_TOKEN */:
- return this.consumeSimpleBlock(token.type);
- case 19 /* FUNCTION_TOKEN */:
- return this.consumeFunction(token);
- }
- return token;
- };
- Parser.prototype.consumeSimpleBlock = function (type) {
- var block = { type: type, values: [] };
- var token = this.consumeToken();
- while (true) {
- if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) {
- return block;
- }
- this.reconsumeToken(token);
- block.values.push(this.consumeComponentValue());
- token = this.consumeToken();
- }
- };
- Parser.prototype.consumeFunction = function (functionToken) {
- var cssFunction = {
- name: functionToken.value,
- values: [],
- type: 18 /* FUNCTION */
- };
- while (true) {
- var token = this.consumeToken();
- if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) {
- return cssFunction;
- }
- this.reconsumeToken(token);
- cssFunction.values.push(this.consumeComponentValue());
- }
- };
- Parser.prototype.consumeToken = function () {
- var token = this._tokens.shift();
- return typeof token === 'undefined' ? EOF_TOKEN : token;
- };
- Parser.prototype.reconsumeToken = function (token) {
- this._tokens.unshift(token);
- };
- return Parser;
- }());
- var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; };
- var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; };
- var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; };
- var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; };
- var isIdentWithValue = function (token, value) {
- return isIdentToken(token) && token.value === value;
- };
- var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; };
- var nonFunctionArgSeparator = function (token) {
- return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */;
- };
- var parseFunctionArgs = function (tokens) {
- var args = [];
- var arg = [];
- tokens.forEach(function (token) {
- if (token.type === 4 /* COMMA_TOKEN */) {
- if (arg.length === 0) {
- throw new Error("Error parsing function args, zero tokens for arg");
- }
- args.push(arg);
- arg = [];
- return;
- }
- if (token.type !== 31 /* WHITESPACE_TOKEN */) {
- arg.push(token);
- }
- });
- if (arg.length) {
- args.push(arg);
- }
- return args;
- };
- var isEndingTokenFor = function (token, type) {
- if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) {
- return true;
- }
- if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) {
- return true;
- }
- return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */;
- };
-
- var isLength = function (token) {
- return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */;
- };
-
- var isLengthPercentage = function (token) {
- return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token);
- };
- var parseLengthPercentageTuple = function (tokens) {
- return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]];
- };
- var ZERO_LENGTH = {
- type: 17 /* NUMBER_TOKEN */,
- number: 0,
- flags: FLAG_INTEGER
- };
- var FIFTY_PERCENT = {
- type: 16 /* PERCENTAGE_TOKEN */,
- number: 50,
- flags: FLAG_INTEGER
- };
- var HUNDRED_PERCENT = {
- type: 16 /* PERCENTAGE_TOKEN */,
- number: 100,
- flags: FLAG_INTEGER
- };
- var getAbsoluteValueForTuple = function (tuple, width, height) {
- var x = tuple[0], y = tuple[1];
- return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? y : x, height)];
- };
- var getAbsoluteValue = function (token, parent) {
- if (token.type === 16 /* PERCENTAGE_TOKEN */) {
- return (token.number / 100) * parent;
- }
- if (isDimensionToken(token)) {
- switch (token.unit) {
- case 'rem':
- case 'em':
- return 16 * token.number; // TODO use correct font-size
- case 'px':
- default:
- return token.number;
- }
- }
- return token.number;
- };
-
- var DEG = 'deg';
- var GRAD = 'grad';
- var RAD = 'rad';
- var TURN = 'turn';
- var angle = {
- name: 'angle',
- parse: function (_context, value) {
- if (value.type === 15 /* DIMENSION_TOKEN */) {
- switch (value.unit) {
- case DEG:
- return (Math.PI * value.number) / 180;
- case GRAD:
- return (Math.PI / 200) * value.number;
- case RAD:
- return value.number;
- case TURN:
- return Math.PI * 2 * value.number;
- }
- }
- throw new Error("Unsupported angle type");
- }
- };
- var isAngle = function (value) {
- if (value.type === 15 /* DIMENSION_TOKEN */) {
- if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) {
- return true;
- }
- }
- return false;
- };
- var parseNamedSide = function (tokens) {
- var sideOrCorner = tokens
- .filter(isIdentToken)
- .map(function (ident) { return ident.value; })
- .join(' ');
- switch (sideOrCorner) {
- case 'to bottom right':
- case 'to right bottom':
- case 'left top':
- case 'top left':
- return [ZERO_LENGTH, ZERO_LENGTH];
- case 'to top':
- case 'bottom':
- return deg(0);
- case 'to bottom left':
- case 'to left bottom':
- case 'right top':
- case 'top right':
- return [ZERO_LENGTH, HUNDRED_PERCENT];
- case 'to right':
- case 'left':
- return deg(90);
- case 'to top left':
- case 'to left top':
- case 'right bottom':
- case 'bottom right':
- return [HUNDRED_PERCENT, HUNDRED_PERCENT];
- case 'to bottom':
- case 'top':
- return deg(180);
- case 'to top right':
- case 'to right top':
- case 'left bottom':
- case 'bottom left':
- return [HUNDRED_PERCENT, ZERO_LENGTH];
- case 'to left':
- case 'right':
- return deg(270);
- }
- return 0;
- };
- var deg = function (deg) { return (Math.PI * deg) / 180; };
-
- var color$1 = {
- name: 'color',
- parse: function (context, value) {
- if (value.type === 18 /* FUNCTION */) {
- var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name];
- if (typeof colorFunction === 'undefined') {
- throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\"");
- }
- return colorFunction(context, value.values);
- }
- if (value.type === 5 /* HASH_TOKEN */) {
- if (value.value.length === 3) {
- var r = value.value.substring(0, 1);
- var g = value.value.substring(1, 2);
- var b = value.value.substring(2, 3);
- return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1);
- }
- if (value.value.length === 4) {
- var r = value.value.substring(0, 1);
- var g = value.value.substring(1, 2);
- var b = value.value.substring(2, 3);
- var a = value.value.substring(3, 4);
- return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255);
- }
- if (value.value.length === 6) {
- var r = value.value.substring(0, 2);
- var g = value.value.substring(2, 4);
- var b = value.value.substring(4, 6);
- return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1);
- }
- if (value.value.length === 8) {
- var r = value.value.substring(0, 2);
- var g = value.value.substring(2, 4);
- var b = value.value.substring(4, 6);
- var a = value.value.substring(6, 8);
- return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255);
- }
- }
- if (value.type === 20 /* IDENT_TOKEN */) {
- var namedColor = COLORS[value.value.toUpperCase()];
- if (typeof namedColor !== 'undefined') {
- return namedColor;
- }
- }
- return COLORS.TRANSPARENT;
- }
- };
- var isTransparent = function (color) { return (0xff & color) === 0; };
- var asString = function (color) {
- var alpha = 0xff & color;
- var blue = 0xff & (color >> 8);
- var green = 0xff & (color >> 16);
- var red = 0xff & (color >> 24);
- return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")";
- };
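-    // Packs r, g and b (0-255) plus an alpha in the 0-1 range into a single unsigned 32-bit 0xRRGGBBAA value.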
- var pack = function (r, g, b, a) {
- return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0;
- };
- var getTokenColorValue = function (token, i) {
- if (token.type === 17 /* NUMBER_TOKEN */) {
- return token.number;
- }
- if (token.type === 16 /* PERCENTAGE_TOKEN */) {
- var max = i === 3 ? 1 : 255;
- return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max);
- }
- return 0;
- };
- var rgb = function (_context, args) {
- var tokens = args.filter(nonFunctionArgSeparator);
- if (tokens.length === 3) {
- var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2];
- return pack(r, g, b, 1);
- }
- if (tokens.length === 4) {
- var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3];
- return pack(r, g, b, a);
- }
- return 0;
- };
- function hue2rgb(t1, t2, hue) {
- if (hue < 0) {
- hue += 1;
- }
- if (hue >= 1) {
- hue -= 1;
- }
- if (hue < 1 / 6) {
- return (t2 - t1) * hue * 6 + t1;
- }
- else if (hue < 1 / 2) {
- return t2;
- }
- else if (hue < 2 / 3) {
- return (t2 - t1) * 6 * (2 / 3 - hue) + t1;
- }
- else {
- return t1;
- }
- }
- var hsl = function (context, args) {
- var tokens = args.filter(nonFunctionArgSeparator);
- var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3];
- var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2);
- var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0;
- var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0;
- var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1;
- if (s === 0) {
- return pack(l * 255, l * 255, l * 255, 1);
- }
- var t2 = l <= 0.5 ? l * (s + 1) : l + s - l * s;
- var t1 = l * 2 - t2;
- var r = hue2rgb(t1, t2, h + 1 / 3);
- var g = hue2rgb(t1, t2, h);
- var b = hue2rgb(t1, t2, h - 1 / 3);
- return pack(r * 255, g * 255, b * 255, a);
- };
- var SUPPORTED_COLOR_FUNCTIONS = {
- hsl: hsl,
- hsla: hsl,
- rgb: rgb,
- rgba: rgb
- };
- var parseColor = function (context, value) {
- return color$1.parse(context, Parser.create(value).parseComponentValue());
- };
- var COLORS = {
- ALICEBLUE: 0xf0f8ffff,
- ANTIQUEWHITE: 0xfaebd7ff,
- AQUA: 0x00ffffff,
- AQUAMARINE: 0x7fffd4ff,
- AZURE: 0xf0ffffff,
- BEIGE: 0xf5f5dcff,
- BISQUE: 0xffe4c4ff,
- BLACK: 0x000000ff,
- BLANCHEDALMOND: 0xffebcdff,
- BLUE: 0x0000ffff,
- BLUEVIOLET: 0x8a2be2ff,
- BROWN: 0xa52a2aff,
- BURLYWOOD: 0xdeb887ff,
- CADETBLUE: 0x5f9ea0ff,
- CHARTREUSE: 0x7fff00ff,
- CHOCOLATE: 0xd2691eff,
- CORAL: 0xff7f50ff,
- CORNFLOWERBLUE: 0x6495edff,
- CORNSILK: 0xfff8dcff,
- CRIMSON: 0xdc143cff,
- CYAN: 0x00ffffff,
- DARKBLUE: 0x00008bff,
- DARKCYAN: 0x008b8bff,
-        DARKGOLDENROD: 0xb8860bff,
- DARKGRAY: 0xa9a9a9ff,
- DARKGREEN: 0x006400ff,
- DARKGREY: 0xa9a9a9ff,
- DARKKHAKI: 0xbdb76bff,
- DARKMAGENTA: 0x8b008bff,
- DARKOLIVEGREEN: 0x556b2fff,
- DARKORANGE: 0xff8c00ff,
- DARKORCHID: 0x9932ccff,
- DARKRED: 0x8b0000ff,
- DARKSALMON: 0xe9967aff,
- DARKSEAGREEN: 0x8fbc8fff,
- DARKSLATEBLUE: 0x483d8bff,
- DARKSLATEGRAY: 0x2f4f4fff,
- DARKSLATEGREY: 0x2f4f4fff,
- DARKTURQUOISE: 0x00ced1ff,
- DARKVIOLET: 0x9400d3ff,
- DEEPPINK: 0xff1493ff,
- DEEPSKYBLUE: 0x00bfffff,
- DIMGRAY: 0x696969ff,
- DIMGREY: 0x696969ff,
- DODGERBLUE: 0x1e90ffff,
- FIREBRICK: 0xb22222ff,
- FLORALWHITE: 0xfffaf0ff,
- FORESTGREEN: 0x228b22ff,
- FUCHSIA: 0xff00ffff,
- GAINSBORO: 0xdcdcdcff,
- GHOSTWHITE: 0xf8f8ffff,
- GOLD: 0xffd700ff,
- GOLDENROD: 0xdaa520ff,
- GRAY: 0x808080ff,
- GREEN: 0x008000ff,
- GREENYELLOW: 0xadff2fff,
- GREY: 0x808080ff,
- HONEYDEW: 0xf0fff0ff,
- HOTPINK: 0xff69b4ff,
- INDIANRED: 0xcd5c5cff,
- INDIGO: 0x4b0082ff,
- IVORY: 0xfffff0ff,
- KHAKI: 0xf0e68cff,
- LAVENDER: 0xe6e6faff,
- LAVENDERBLUSH: 0xfff0f5ff,
- LAWNGREEN: 0x7cfc00ff,
- LEMONCHIFFON: 0xfffacdff,
- LIGHTBLUE: 0xadd8e6ff,
- LIGHTCORAL: 0xf08080ff,
- LIGHTCYAN: 0xe0ffffff,
- LIGHTGOLDENRODYELLOW: 0xfafad2ff,
- LIGHTGRAY: 0xd3d3d3ff,
- LIGHTGREEN: 0x90ee90ff,
- LIGHTGREY: 0xd3d3d3ff,
- LIGHTPINK: 0xffb6c1ff,
- LIGHTSALMON: 0xffa07aff,
- LIGHTSEAGREEN: 0x20b2aaff,
- LIGHTSKYBLUE: 0x87cefaff,
- LIGHTSLATEGRAY: 0x778899ff,
- LIGHTSLATEGREY: 0x778899ff,
- LIGHTSTEELBLUE: 0xb0c4deff,
- LIGHTYELLOW: 0xffffe0ff,
- LIME: 0x00ff00ff,
- LIMEGREEN: 0x32cd32ff,
- LINEN: 0xfaf0e6ff,
- MAGENTA: 0xff00ffff,
- MAROON: 0x800000ff,
- MEDIUMAQUAMARINE: 0x66cdaaff,
- MEDIUMBLUE: 0x0000cdff,
- MEDIUMORCHID: 0xba55d3ff,
- MEDIUMPURPLE: 0x9370dbff,
- MEDIUMSEAGREEN: 0x3cb371ff,
- MEDIUMSLATEBLUE: 0x7b68eeff,
- MEDIUMSPRINGGREEN: 0x00fa9aff,
- MEDIUMTURQUOISE: 0x48d1ccff,
- MEDIUMVIOLETRED: 0xc71585ff,
- MIDNIGHTBLUE: 0x191970ff,
- MINTCREAM: 0xf5fffaff,
- MISTYROSE: 0xffe4e1ff,
- MOCCASIN: 0xffe4b5ff,
- NAVAJOWHITE: 0xffdeadff,
- NAVY: 0x000080ff,
- OLDLACE: 0xfdf5e6ff,
- OLIVE: 0x808000ff,
- OLIVEDRAB: 0x6b8e23ff,
- ORANGE: 0xffa500ff,
- ORANGERED: 0xff4500ff,
- ORCHID: 0xda70d6ff,
- PALEGOLDENROD: 0xeee8aaff,
- PALEGREEN: 0x98fb98ff,
- PALETURQUOISE: 0xafeeeeff,
- PALEVIOLETRED: 0xdb7093ff,
- PAPAYAWHIP: 0xffefd5ff,
- PEACHPUFF: 0xffdab9ff,
- PERU: 0xcd853fff,
- PINK: 0xffc0cbff,
- PLUM: 0xdda0ddff,
- POWDERBLUE: 0xb0e0e6ff,
- PURPLE: 0x800080ff,
- REBECCAPURPLE: 0x663399ff,
- RED: 0xff0000ff,
- ROSYBROWN: 0xbc8f8fff,
- ROYALBLUE: 0x4169e1ff,
- SADDLEBROWN: 0x8b4513ff,
- SALMON: 0xfa8072ff,
- SANDYBROWN: 0xf4a460ff,
- SEAGREEN: 0x2e8b57ff,
- SEASHELL: 0xfff5eeff,
- SIENNA: 0xa0522dff,
- SILVER: 0xc0c0c0ff,
- SKYBLUE: 0x87ceebff,
- SLATEBLUE: 0x6a5acdff,
- SLATEGRAY: 0x708090ff,
- SLATEGREY: 0x708090ff,
- SNOW: 0xfffafaff,
- SPRINGGREEN: 0x00ff7fff,
- STEELBLUE: 0x4682b4ff,
- TAN: 0xd2b48cff,
- TEAL: 0x008080ff,
- THISTLE: 0xd8bfd8ff,
- TOMATO: 0xff6347ff,
- TRANSPARENT: 0x00000000,
- TURQUOISE: 0x40e0d0ff,
- VIOLET: 0xee82eeff,
- WHEAT: 0xf5deb3ff,
- WHITE: 0xffffffff,
- WHITESMOKE: 0xf5f5f5ff,
- YELLOW: 0xffff00ff,
- YELLOWGREEN: 0x9acd32ff
- };
-
- var backgroundClip = {
- name: 'background-clip',
- initialValue: 'border-box',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.map(function (token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'padding-box':
- return 1 /* PADDING_BOX */;
- case 'content-box':
- return 2 /* CONTENT_BOX */;
- }
- }
- return 0 /* BORDER_BOX */;
- });
- }
- };
-
- var backgroundColor = {
- name: "background-color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var parseColorStop = function (context, args) {
- var color = color$1.parse(context, args[0]);
- var stop = args[1];
- return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null };
- };
- var processColorStops = function (stops, lineLength) {
- var first = stops[0];
- var last = stops[stops.length - 1];
- if (first.stop === null) {
- first.stop = ZERO_LENGTH;
- }
- if (last.stop === null) {
- last.stop = HUNDRED_PERCENT;
- }
- var processStops = [];
- var previous = 0;
- for (var i = 0; i < stops.length; i++) {
- var stop_1 = stops[i].stop;
- if (stop_1 !== null) {
- var absoluteValue = getAbsoluteValue(stop_1, lineLength);
- if (absoluteValue > previous) {
- processStops.push(absoluteValue);
- }
- else {
- processStops.push(previous);
- }
- previous = absoluteValue;
- }
- else {
- processStops.push(null);
- }
- }
- var gapBegin = null;
- for (var i = 0; i < processStops.length; i++) {
- var stop_2 = processStops[i];
- if (stop_2 === null) {
- if (gapBegin === null) {
- gapBegin = i;
- }
- }
- else if (gapBegin !== null) {
- var gapLength = i - gapBegin;
- var beforeGap = processStops[gapBegin - 1];
- var gapValue = (stop_2 - beforeGap) / (gapLength + 1);
- for (var g = 1; g <= gapLength; g++) {
- processStops[gapBegin + g - 1] = gapValue * g;
- }
- gapBegin = null;
- }
- }
- return stops.map(function (_a, i) {
- var color = _a.color;
- return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) };
- });
- };
- var getAngleFromCorner = function (corner, width, height) {
- var centerX = width / 2;
- var centerY = height / 2;
- var x = getAbsoluteValue(corner[0], width) - centerX;
- var y = centerY - getAbsoluteValue(corner[1], height);
- return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2);
- };
- var calculateGradientDirection = function (angle, width, height) {
- var radian = typeof angle === 'number' ? angle : getAngleFromCorner(angle, width, height);
- var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian));
- var halfWidth = width / 2;
- var halfHeight = height / 2;
- var halfLineLength = lineLength / 2;
- var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength;
- var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength;
- return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff];
- };
- var distance = function (a, b) { return Math.sqrt(a * a + b * b); };
- var findCorner = function (width, height, x, y, closest) {
- var corners = [
- [0, 0],
- [0, height],
- [width, 0],
- [width, height]
- ];
- return corners.reduce(function (stat, corner) {
- var cx = corner[0], cy = corner[1];
- var d = distance(x - cx, y - cy);
- if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) {
- return {
- optimumCorner: corner,
- optimumDistance: d
- };
- }
- return stat;
- }, {
- optimumDistance: closest ? Infinity : -Infinity,
- optimumCorner: null
- }).optimumCorner;
- };
- var calculateRadius = function (gradient, x, y, width, height) {
- var rx = 0;
- var ry = 0;
- switch (gradient.size) {
- case 0 /* CLOSEST_SIDE */:
-                // The ending shape is sized so that it exactly meets the side of the gradient box closest to the gradient’s center.
- // If the shape is an ellipse, it exactly meets the closest side in each dimension.
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- rx = Math.min(Math.abs(x), Math.abs(x - width));
- ry = Math.min(Math.abs(y), Math.abs(y - height));
- }
- break;
- case 2 /* CLOSEST_CORNER */:
-                // The ending shape is sized so that it passes through the corner of the gradient box closest to the gradient’s center.
- // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified.
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- // Compute the ratio ry/rx (which is to be the same as for "closest-side")
- var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width));
- var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1];
- rx = distance(cx - x, (cy - y) / c);
- ry = c * rx;
- }
- break;
- case 1 /* FARTHEST_SIDE */:
- // Same as closest-side, except the ending shape is sized based on the farthest side(s)
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- rx = Math.max(Math.abs(x), Math.abs(x - width));
- ry = Math.max(Math.abs(y), Math.abs(y - height));
- }
- break;
- case 3 /* FARTHEST_CORNER */:
- // Same as closest-corner, except the ending shape is sized based on the farthest corner.
- // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified.
- if (gradient.shape === 0 /* CIRCLE */) {
- rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height));
- }
- else if (gradient.shape === 1 /* ELLIPSE */) {
- // Compute the ratio ry/rx (which is to be the same as for "farthest-side")
- var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width));
- var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1];
- rx = distance(cx - x, (cy - y) / c);
- ry = c * rx;
- }
- break;
- }
- if (Array.isArray(gradient.size)) {
- rx = getAbsoluteValue(gradient.size[0], width);
- ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx;
- }
- return [rx, ry];
- };
-
- var linearGradient = function (context, tokens) {
- var angle$1 = deg(180);
- var stops = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- if (i === 0) {
- var firstToken = arg[0];
- if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') {
- angle$1 = parseNamedSide(arg);
- return;
- }
- else if (isAngle(firstToken)) {
- angle$1 = angle.parse(context, firstToken);
- return;
- }
- }
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- });
- return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ };
- };
-
- var prefixLinearGradient = function (context, tokens) {
- var angle$1 = deg(180);
- var stops = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- if (i === 0) {
- var firstToken = arg[0];
- if (firstToken.type === 20 /* IDENT_TOKEN */ &&
- ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) {
- angle$1 = parseNamedSide(arg);
- return;
- }
- else if (isAngle(firstToken)) {
- angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360);
- return;
- }
- }
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- });
- return {
- angle: angle$1,
- stops: stops,
- type: 1 /* LINEAR_GRADIENT */
- };
- };
-
- var webkitGradient = function (context, tokens) {
- var angle = deg(180);
- var stops = [];
- var type = 1 /* LINEAR_GRADIENT */;
- var shape = 0 /* CIRCLE */;
- var size = 3 /* FARTHEST_CORNER */;
- var position = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- var firstToken = arg[0];
- if (i === 0) {
- if (isIdentToken(firstToken) && firstToken.value === 'linear') {
- type = 1 /* LINEAR_GRADIENT */;
- return;
- }
- else if (isIdentToken(firstToken) && firstToken.value === 'radial') {
- type = 2 /* RADIAL_GRADIENT */;
- return;
- }
- }
- if (firstToken.type === 18 /* FUNCTION */) {
- if (firstToken.name === 'from') {
- var color = color$1.parse(context, firstToken.values[0]);
- stops.push({ stop: ZERO_LENGTH, color: color });
- }
- else if (firstToken.name === 'to') {
- var color = color$1.parse(context, firstToken.values[0]);
- stops.push({ stop: HUNDRED_PERCENT, color: color });
- }
- else if (firstToken.name === 'color-stop') {
- var values = firstToken.values.filter(nonFunctionArgSeparator);
- if (values.length === 2) {
- var color = color$1.parse(context, values[1]);
- var stop_1 = values[0];
- if (isNumberToken(stop_1)) {
- stops.push({
- stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags },
- color: color
- });
- }
- }
- }
- }
- });
- return type === 1 /* LINEAR_GRADIENT */
- ? {
- angle: (angle + deg(180)) % deg(360),
- stops: stops,
- type: type
- }
- : { size: size, shape: shape, stops: stops, position: position, type: type };
- };
-
- var CLOSEST_SIDE = 'closest-side';
- var FARTHEST_SIDE = 'farthest-side';
- var CLOSEST_CORNER = 'closest-corner';
- var FARTHEST_CORNER = 'farthest-corner';
- var CIRCLE = 'circle';
- var ELLIPSE = 'ellipse';
- var COVER = 'cover';
- var CONTAIN = 'contain';
- var radialGradient = function (context, tokens) {
- var shape = 0 /* CIRCLE */;
- var size = 3 /* FARTHEST_CORNER */;
- var stops = [];
- var position = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- var isColorStop = true;
- if (i === 0) {
- var isAtPosition_1 = false;
- isColorStop = arg.reduce(function (acc, token) {
- if (isAtPosition_1) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'center':
- position.push(FIFTY_PERCENT);
- return acc;
- case 'top':
- case 'left':
- position.push(ZERO_LENGTH);
- return acc;
- case 'right':
- case 'bottom':
- position.push(HUNDRED_PERCENT);
- return acc;
- }
- }
- else if (isLengthPercentage(token) || isLength(token)) {
- position.push(token);
- }
- }
- else if (isIdentToken(token)) {
- switch (token.value) {
- case CIRCLE:
- shape = 0 /* CIRCLE */;
- return false;
- case ELLIPSE:
- shape = 1 /* ELLIPSE */;
- return false;
- case 'at':
- isAtPosition_1 = true;
- return false;
- case CLOSEST_SIDE:
- size = 0 /* CLOSEST_SIDE */;
- return false;
- case COVER:
- case FARTHEST_SIDE:
- size = 1 /* FARTHEST_SIDE */;
- return false;
- case CONTAIN:
- case CLOSEST_CORNER:
- size = 2 /* CLOSEST_CORNER */;
- return false;
- case FARTHEST_CORNER:
- size = 3 /* FARTHEST_CORNER */;
- return false;
- }
- }
- else if (isLength(token) || isLengthPercentage(token)) {
- if (!Array.isArray(size)) {
- size = [];
- }
- size.push(token);
- return false;
- }
- return acc;
- }, isColorStop);
- }
- if (isColorStop) {
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- }
- });
- return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ };
- };
-
- var prefixRadialGradient = function (context, tokens) {
- var shape = 0 /* CIRCLE */;
- var size = 3 /* FARTHEST_CORNER */;
- var stops = [];
- var position = [];
- parseFunctionArgs(tokens).forEach(function (arg, i) {
- var isColorStop = true;
- if (i === 0) {
- isColorStop = arg.reduce(function (acc, token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'center':
- position.push(FIFTY_PERCENT);
- return false;
- case 'top':
- case 'left':
- position.push(ZERO_LENGTH);
- return false;
- case 'right':
- case 'bottom':
- position.push(HUNDRED_PERCENT);
- return false;
- }
- }
- else if (isLengthPercentage(token) || isLength(token)) {
- position.push(token);
- return false;
- }
- return acc;
- }, isColorStop);
- }
- else if (i === 1) {
- isColorStop = arg.reduce(function (acc, token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case CIRCLE:
- shape = 0 /* CIRCLE */;
- return false;
- case ELLIPSE:
- shape = 1 /* ELLIPSE */;
- return false;
- case CONTAIN:
- case CLOSEST_SIDE:
- size = 0 /* CLOSEST_SIDE */;
- return false;
- case FARTHEST_SIDE:
- size = 1 /* FARTHEST_SIDE */;
- return false;
- case CLOSEST_CORNER:
- size = 2 /* CLOSEST_CORNER */;
- return false;
- case COVER:
- case FARTHEST_CORNER:
- size = 3 /* FARTHEST_CORNER */;
- return false;
- }
- }
- else if (isLength(token) || isLengthPercentage(token)) {
- if (!Array.isArray(size)) {
- size = [];
- }
- size.push(token);
- return false;
- }
- return acc;
- }, isColorStop);
- }
- if (isColorStop) {
- var colorStop = parseColorStop(context, arg);
- stops.push(colorStop);
- }
- });
- return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ };
- };
-
- var isLinearGradient = function (background) {
- return background.type === 1 /* LINEAR_GRADIENT */;
- };
- var isRadialGradient = function (background) {
- return background.type === 2 /* RADIAL_GRADIENT */;
- };
- var image = {
- name: 'image',
- parse: function (context, value) {
- if (value.type === 22 /* URL_TOKEN */) {
- var image_1 = { url: value.value, type: 0 /* URL */ };
- context.cache.addImage(value.value);
- return image_1;
- }
- if (value.type === 18 /* FUNCTION */) {
- var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name];
- if (typeof imageFunction === 'undefined') {
- throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\"");
- }
- return imageFunction(context, value.values);
- }
- throw new Error("Unsupported image type " + value.type);
- }
- };
- function isSupportedImage(value) {
- return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') &&
- (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name]));
- }
- var SUPPORTED_IMAGE_FUNCTIONS = {
- 'linear-gradient': linearGradient,
- '-moz-linear-gradient': prefixLinearGradient,
- '-ms-linear-gradient': prefixLinearGradient,
- '-o-linear-gradient': prefixLinearGradient,
- '-webkit-linear-gradient': prefixLinearGradient,
- 'radial-gradient': radialGradient,
- '-moz-radial-gradient': prefixRadialGradient,
- '-ms-radial-gradient': prefixRadialGradient,
- '-o-radial-gradient': prefixRadialGradient,
- '-webkit-radial-gradient': prefixRadialGradient,
- '-webkit-gradient': webkitGradient
- };
-
- var backgroundImage = {
- name: 'background-image',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (context, tokens) {
- if (tokens.length === 0) {
- return [];
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return [];
- }
- return tokens
- .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); })
- .map(function (value) { return image.parse(context, value); });
- }
- };
-
- var backgroundOrigin = {
- name: 'background-origin',
- initialValue: 'border-box',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.map(function (token) {
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'padding-box':
- return 1 /* PADDING_BOX */;
- case 'content-box':
- return 2 /* CONTENT_BOX */;
- }
- }
- return 0 /* BORDER_BOX */;
- });
- }
- };
-
- var backgroundPosition = {
- name: 'background-position',
- initialValue: '0% 0%',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (_context, tokens) {
- return parseFunctionArgs(tokens)
- .map(function (values) { return values.filter(isLengthPercentage); })
- .map(parseLengthPercentageTuple);
- }
- };
-
- var backgroundRepeat = {
- name: 'background-repeat',
- initialValue: 'repeat',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return parseFunctionArgs(tokens)
- .map(function (values) {
- return values
- .filter(isIdentToken)
- .map(function (token) { return token.value; })
- .join(' ');
- })
- .map(parseBackgroundRepeat);
- }
- };
- var parseBackgroundRepeat = function (value) {
- switch (value) {
- case 'no-repeat':
- return 1 /* NO_REPEAT */;
- case 'repeat-x':
- case 'repeat no-repeat':
- return 2 /* REPEAT_X */;
- case 'repeat-y':
- case 'no-repeat repeat':
- return 3 /* REPEAT_Y */;
- case 'repeat':
- default:
- return 0 /* REPEAT */;
- }
- };
-
- var BACKGROUND_SIZE;
- (function (BACKGROUND_SIZE) {
- BACKGROUND_SIZE["AUTO"] = "auto";
- BACKGROUND_SIZE["CONTAIN"] = "contain";
- BACKGROUND_SIZE["COVER"] = "cover";
- })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {}));
- var backgroundSize = {
- name: 'background-size',
- initialValue: '0',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); });
- }
- };
- var isBackgroundSizeInfoToken = function (value) {
- return isIdentToken(value) || isLengthPercentage(value);
- };
-
- var borderColorForSide = function (side) { return ({
- name: "border-" + side + "-color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- }); };
- var borderTopColor = borderColorForSide('top');
- var borderRightColor = borderColorForSide('right');
- var borderBottomColor = borderColorForSide('bottom');
- var borderLeftColor = borderColorForSide('left');
-
- var borderRadiusForSide = function (side) { return ({
- name: "border-radius-" + side,
- initialValue: '0 0',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return parseLengthPercentageTuple(tokens.filter(isLengthPercentage));
- }
- }); };
- var borderTopLeftRadius = borderRadiusForSide('top-left');
- var borderTopRightRadius = borderRadiusForSide('top-right');
- var borderBottomRightRadius = borderRadiusForSide('bottom-right');
- var borderBottomLeftRadius = borderRadiusForSide('bottom-left');
-
- var borderStyleForSide = function (side) { return ({
- name: "border-" + side + "-style",
- initialValue: 'solid',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, style) {
- switch (style) {
- case 'none':
- return 0 /* NONE */;
- case 'dashed':
- return 2 /* DASHED */;
- case 'dotted':
- return 3 /* DOTTED */;
- case 'double':
- return 4 /* DOUBLE */;
- }
- return 1 /* SOLID */;
- }
- }); };
- var borderTopStyle = borderStyleForSide('top');
- var borderRightStyle = borderStyleForSide('right');
- var borderBottomStyle = borderStyleForSide('bottom');
- var borderLeftStyle = borderStyleForSide('left');
-
- var borderWidthForSide = function (side) { return ({
- name: "border-" + side + "-width",
- initialValue: '0',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isDimensionToken(token)) {
- return token.number;
- }
- return 0;
- }
- }); };
- var borderTopWidth = borderWidthForSide('top');
- var borderRightWidth = borderWidthForSide('right');
- var borderBottomWidth = borderWidthForSide('bottom');
- var borderLeftWidth = borderWidthForSide('left');
-
- var color = {
- name: "color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var direction = {
- name: 'direction',
- initialValue: 'ltr',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, direction) {
- switch (direction) {
- case 'rtl':
- return 1 /* RTL */;
- case 'ltr':
- default:
- return 0 /* LTR */;
- }
- }
- };
-
- var display = {
- name: 'display',
- initialValue: 'inline-block',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.filter(isIdentToken).reduce(function (bit, token) {
- return bit | parseDisplayValue(token.value);
- }, 0 /* NONE */);
- }
- };
- var parseDisplayValue = function (display) {
- switch (display) {
- case 'block':
- case '-webkit-box':
- return 2 /* BLOCK */;
- case 'inline':
- return 4 /* INLINE */;
- case 'run-in':
- return 8 /* RUN_IN */;
- case 'flow':
- return 16 /* FLOW */;
- case 'flow-root':
- return 32 /* FLOW_ROOT */;
- case 'table':
- return 64 /* TABLE */;
- case 'flex':
- case '-webkit-flex':
- return 128 /* FLEX */;
- case 'grid':
- case '-ms-grid':
- return 256 /* GRID */;
- case 'ruby':
- return 512 /* RUBY */;
- case 'subgrid':
- return 1024 /* SUBGRID */;
- case 'list-item':
- return 2048 /* LIST_ITEM */;
- case 'table-row-group':
- return 4096 /* TABLE_ROW_GROUP */;
- case 'table-header-group':
- return 8192 /* TABLE_HEADER_GROUP */;
- case 'table-footer-group':
- return 16384 /* TABLE_FOOTER_GROUP */;
- case 'table-row':
- return 32768 /* TABLE_ROW */;
- case 'table-cell':
- return 65536 /* TABLE_CELL */;
- case 'table-column-group':
- return 131072 /* TABLE_COLUMN_GROUP */;
- case 'table-column':
- return 262144 /* TABLE_COLUMN */;
- case 'table-caption':
- return 524288 /* TABLE_CAPTION */;
- case 'ruby-base':
- return 1048576 /* RUBY_BASE */;
- case 'ruby-text':
- return 2097152 /* RUBY_TEXT */;
- case 'ruby-base-container':
- return 4194304 /* RUBY_BASE_CONTAINER */;
- case 'ruby-text-container':
- return 8388608 /* RUBY_TEXT_CONTAINER */;
- case 'contents':
- return 16777216 /* CONTENTS */;
- case 'inline-block':
- return 33554432 /* INLINE_BLOCK */;
- case 'inline-list-item':
- return 67108864 /* INLINE_LIST_ITEM */;
- case 'inline-table':
- return 134217728 /* INLINE_TABLE */;
- case 'inline-flex':
- return 268435456 /* INLINE_FLEX */;
- case 'inline-grid':
- return 536870912 /* INLINE_GRID */;
- }
- return 0 /* NONE */;
- };
-
- var float = {
- name: 'float',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, float) {
- switch (float) {
- case 'left':
- return 1 /* LEFT */;
- case 'right':
- return 2 /* RIGHT */;
- case 'inline-start':
- return 3 /* INLINE_START */;
- case 'inline-end':
- return 4 /* INLINE_END */;
- }
- return 0 /* NONE */;
- }
- };
-
- var letterSpacing = {
- name: 'letter-spacing',
- initialValue: '0',
- prefix: false,
- type: 0 /* VALUE */,
- parse: function (_context, token) {
- if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') {
- return 0;
- }
- if (token.type === 17 /* NUMBER_TOKEN */) {
- return token.number;
- }
- if (token.type === 15 /* DIMENSION_TOKEN */) {
- return token.number;
- }
- return 0;
- }
- };
-
- var LINE_BREAK;
- (function (LINE_BREAK) {
- LINE_BREAK["NORMAL"] = "normal";
- LINE_BREAK["STRICT"] = "strict";
- })(LINE_BREAK || (LINE_BREAK = {}));
- var lineBreak = {
- name: 'line-break',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, lineBreak) {
- switch (lineBreak) {
- case 'strict':
- return LINE_BREAK.STRICT;
- case 'normal':
- default:
- return LINE_BREAK.NORMAL;
- }
- }
- };
-
- var lineHeight = {
- name: 'line-height',
- initialValue: 'normal',
- prefix: false,
- type: 4 /* TOKEN_VALUE */
- };
- var computeLineHeight = function (token, fontSize) {
- if (isIdentToken(token) && token.value === 'normal') {
- return 1.2 * fontSize;
- }
- else if (token.type === 17 /* NUMBER_TOKEN */) {
- return fontSize * token.number;
- }
- else if (isLengthPercentage(token)) {
- return getAbsoluteValue(token, fontSize);
- }
- return fontSize;
- };
-
- var listStyleImage = {
- name: 'list-style-image',
- initialValue: 'none',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (context, token) {
- if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') {
- return null;
- }
- return image.parse(context, token);
- }
- };
-
- var listStylePosition = {
- name: 'list-style-position',
- initialValue: 'outside',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, position) {
- switch (position) {
- case 'inside':
- return 0 /* INSIDE */;
- case 'outside':
- default:
- return 1 /* OUTSIDE */;
- }
- }
- };
-
- var listStyleType = {
- name: 'list-style-type',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, type) {
- switch (type) {
- case 'disc':
- return 0 /* DISC */;
- case 'circle':
- return 1 /* CIRCLE */;
- case 'square':
- return 2 /* SQUARE */;
- case 'decimal':
- return 3 /* DECIMAL */;
- case 'cjk-decimal':
- return 4 /* CJK_DECIMAL */;
- case 'decimal-leading-zero':
- return 5 /* DECIMAL_LEADING_ZERO */;
- case 'lower-roman':
- return 6 /* LOWER_ROMAN */;
- case 'upper-roman':
- return 7 /* UPPER_ROMAN */;
- case 'lower-greek':
- return 8 /* LOWER_GREEK */;
- case 'lower-alpha':
- return 9 /* LOWER_ALPHA */;
- case 'upper-alpha':
- return 10 /* UPPER_ALPHA */;
- case 'arabic-indic':
- return 11 /* ARABIC_INDIC */;
- case 'armenian':
- return 12 /* ARMENIAN */;
- case 'bengali':
- return 13 /* BENGALI */;
- case 'cambodian':
- return 14 /* CAMBODIAN */;
- case 'cjk-earthly-branch':
- return 15 /* CJK_EARTHLY_BRANCH */;
- case 'cjk-heavenly-stem':
- return 16 /* CJK_HEAVENLY_STEM */;
- case 'cjk-ideographic':
- return 17 /* CJK_IDEOGRAPHIC */;
- case 'devanagari':
- return 18 /* DEVANAGARI */;
- case 'ethiopic-numeric':
- return 19 /* ETHIOPIC_NUMERIC */;
- case 'georgian':
- return 20 /* GEORGIAN */;
- case 'gujarati':
- return 21 /* GUJARATI */;
- case 'gurmukhi':
- return 22 /* GURMUKHI */;
- case 'hebrew':
- return 22 /* HEBREW */;
- case 'hiragana':
- return 23 /* HIRAGANA */;
- case 'hiragana-iroha':
- return 24 /* HIRAGANA_IROHA */;
- case 'japanese-formal':
- return 25 /* JAPANESE_FORMAL */;
- case 'japanese-informal':
- return 26 /* JAPANESE_INFORMAL */;
- case 'kannada':
- return 27 /* KANNADA */;
- case 'katakana':
- return 28 /* KATAKANA */;
- case 'katakana-iroha':
- return 29 /* KATAKANA_IROHA */;
- case 'khmer':
- return 30 /* KHMER */;
- case 'korean-hangul-formal':
- return 31 /* KOREAN_HANGUL_FORMAL */;
- case 'korean-hanja-formal':
- return 32 /* KOREAN_HANJA_FORMAL */;
- case 'korean-hanja-informal':
- return 33 /* KOREAN_HANJA_INFORMAL */;
- case 'lao':
- return 34 /* LAO */;
- case 'lower-armenian':
- return 35 /* LOWER_ARMENIAN */;
- case 'malayalam':
- return 36 /* MALAYALAM */;
- case 'mongolian':
- return 37 /* MONGOLIAN */;
- case 'myanmar':
- return 38 /* MYANMAR */;
- case 'oriya':
- return 39 /* ORIYA */;
- case 'persian':
- return 40 /* PERSIAN */;
- case 'simp-chinese-formal':
- return 41 /* SIMP_CHINESE_FORMAL */;
- case 'simp-chinese-informal':
- return 42 /* SIMP_CHINESE_INFORMAL */;
- case 'tamil':
- return 43 /* TAMIL */;
- case 'telugu':
- return 44 /* TELUGU */;
- case 'thai':
- return 45 /* THAI */;
- case 'tibetan':
- return 46 /* TIBETAN */;
- case 'trad-chinese-formal':
- return 47 /* TRAD_CHINESE_FORMAL */;
- case 'trad-chinese-informal':
- return 48 /* TRAD_CHINESE_INFORMAL */;
- case 'upper-armenian':
- return 49 /* UPPER_ARMENIAN */;
- case 'disclosure-open':
- return 50 /* DISCLOSURE_OPEN */;
- case 'disclosure-closed':
- return 51 /* DISCLOSURE_CLOSED */;
- case 'none':
- default:
- return -1 /* NONE */;
- }
- }
- };
-
- var marginForSide = function (side) { return ({
- name: "margin-" + side,
- initialValue: '0',
- prefix: false,
- type: 4 /* TOKEN_VALUE */
- }); };
- var marginTop = marginForSide('top');
- var marginRight = marginForSide('right');
- var marginBottom = marginForSide('bottom');
- var marginLeft = marginForSide('left');
-
- var overflow = {
- name: 'overflow',
- initialValue: 'visible',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens.filter(isIdentToken).map(function (overflow) {
- switch (overflow.value) {
- case 'hidden':
- return 1 /* HIDDEN */;
- case 'scroll':
- return 2 /* SCROLL */;
- case 'clip':
- return 3 /* CLIP */;
- case 'auto':
- return 4 /* AUTO */;
- case 'visible':
- default:
- return 0 /* VISIBLE */;
- }
- });
- }
- };
-
- var overflowWrap = {
- name: 'overflow-wrap',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, overflow) {
- switch (overflow) {
- case 'break-word':
- return "break-word" /* BREAK_WORD */;
- case 'normal':
- default:
- return "normal" /* NORMAL */;
- }
- }
- };
-
- var paddingForSide = function (side) { return ({
- name: "padding-" + side,
- initialValue: '0',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'length-percentage'
- }); };
- var paddingTop = paddingForSide('top');
- var paddingRight = paddingForSide('right');
- var paddingBottom = paddingForSide('bottom');
- var paddingLeft = paddingForSide('left');
-
- var textAlign = {
- name: 'text-align',
- initialValue: 'left',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, textAlign) {
- switch (textAlign) {
- case 'right':
- return 2 /* RIGHT */;
- case 'center':
- case 'justify':
- return 1 /* CENTER */;
- case 'left':
- default:
- return 0 /* LEFT */;
- }
- }
- };
-
- var position = {
- name: 'position',
- initialValue: 'static',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, position) {
- switch (position) {
- case 'relative':
- return 1 /* RELATIVE */;
- case 'absolute':
- return 2 /* ABSOLUTE */;
- case 'fixed':
- return 3 /* FIXED */;
- case 'sticky':
- return 4 /* STICKY */;
- }
- return 0 /* STATIC */;
- }
- };
-
- var textShadow = {
- name: 'text-shadow',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (context, tokens) {
- if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) {
- return [];
- }
- return parseFunctionArgs(tokens).map(function (values) {
- var shadow = {
- color: COLORS.TRANSPARENT,
- offsetX: ZERO_LENGTH,
- offsetY: ZERO_LENGTH,
- blur: ZERO_LENGTH
- };
- var c = 0;
- for (var i = 0; i < values.length; i++) {
- var token = values[i];
- if (isLength(token)) {
- if (c === 0) {
- shadow.offsetX = token;
- }
- else if (c === 1) {
- shadow.offsetY = token;
- }
- else {
- shadow.blur = token;
- }
- c++;
- }
- else {
- shadow.color = color$1.parse(context, token);
- }
- }
- return shadow;
- });
- }
- };
-
- var textTransform = {
- name: 'text-transform',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, textTransform) {
- switch (textTransform) {
- case 'uppercase':
- return 2 /* UPPERCASE */;
- case 'lowercase':
- return 1 /* LOWERCASE */;
- case 'capitalize':
- return 3 /* CAPITALIZE */;
- }
- return 0 /* NONE */;
- }
- };
-
- var transform$1 = {
- name: 'transform',
- initialValue: 'none',
- prefix: true,
- type: 0 /* VALUE */,
- parse: function (_context, token) {
- if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') {
- return null;
- }
- if (token.type === 18 /* FUNCTION */) {
- var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name];
- if (typeof transformFunction === 'undefined') {
- throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\"");
- }
- return transformFunction(token.values);
- }
- return null;
- }
- };
- var matrix = function (args) {
- var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; });
- return values.length === 6 ? values : null;
- };
- // doesn't support 3D transforms at the moment
- var matrix3d = function (args) {
- var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; });
- var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15];
- return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null;
- };
- var SUPPORTED_TRANSFORM_FUNCTIONS = {
- matrix: matrix,
- matrix3d: matrix3d
- };
-
- var DEFAULT_VALUE = {
- type: 16 /* PERCENTAGE_TOKEN */,
- number: 50,
- flags: FLAG_INTEGER
- };
- var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE];
- var transformOrigin = {
- name: 'transform-origin',
- initialValue: '50% 50%',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- var origins = tokens.filter(isLengthPercentage);
- if (origins.length !== 2) {
- return DEFAULT;
- }
- return [origins[0], origins[1]];
- }
- };
-
- var visibility = {
- name: 'visible',
- initialValue: 'none',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, visibility) {
- switch (visibility) {
- case 'hidden':
- return 1 /* HIDDEN */;
- case 'collapse':
- return 2 /* COLLAPSE */;
- case 'visible':
- default:
- return 0 /* VISIBLE */;
- }
- }
- };
-
- var WORD_BREAK;
- (function (WORD_BREAK) {
- WORD_BREAK["NORMAL"] = "normal";
- WORD_BREAK["BREAK_ALL"] = "break-all";
- WORD_BREAK["KEEP_ALL"] = "keep-all";
- })(WORD_BREAK || (WORD_BREAK = {}));
- var wordBreak = {
- name: 'word-break',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, wordBreak) {
- switch (wordBreak) {
- case 'break-all':
- return WORD_BREAK.BREAK_ALL;
- case 'keep-all':
- return WORD_BREAK.KEEP_ALL;
- case 'normal':
- default:
- return WORD_BREAK.NORMAL;
- }
- }
- };
-
- var zIndex = {
- name: 'z-index',
- initialValue: 'auto',
- prefix: false,
- type: 0 /* VALUE */,
- parse: function (_context, token) {
- if (token.type === 20 /* IDENT_TOKEN */) {
- return { auto: true, order: 0 };
- }
- if (isNumberToken(token)) {
- return { auto: false, order: token.number };
- }
- throw new Error("Invalid z-index number parsed");
- }
- };
-
- var time = {
- name: 'time',
- parse: function (_context, value) {
- if (value.type === 15 /* DIMENSION_TOKEN */) {
- switch (value.unit.toLowerCase()) {
- case 's':
- return 1000 * value.number;
- case 'ms':
- return value.number;
- }
- }
- throw new Error("Unsupported time type");
- }
- };
-
- var opacity = {
- name: 'opacity',
- initialValue: '1',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isNumberToken(token)) {
- return token.number;
- }
- return 1;
- }
- };
-
- var textDecorationColor = {
- name: "text-decoration-color",
- initialValue: 'transparent',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var textDecorationLine = {
- name: 'text-decoration-line',
- initialValue: 'none',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- return tokens
- .filter(isIdentToken)
- .map(function (token) {
- switch (token.value) {
- case 'underline':
- return 1 /* UNDERLINE */;
- case 'overline':
- return 2 /* OVERLINE */;
- case 'line-through':
- return 3 /* LINE_THROUGH */;
- case 'none':
- return 4 /* BLINK */;
- }
- return 0 /* NONE */;
- })
- .filter(function (line) { return line !== 0 /* NONE */; });
- }
- };
-
- var fontFamily = {
- name: "font-family",
- initialValue: '',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- var accumulator = [];
- var results = [];
- tokens.forEach(function (token) {
- switch (token.type) {
- case 20 /* IDENT_TOKEN */:
- case 0 /* STRING_TOKEN */:
- accumulator.push(token.value);
- break;
- case 17 /* NUMBER_TOKEN */:
- accumulator.push(token.number.toString());
- break;
- case 4 /* COMMA_TOKEN */:
- results.push(accumulator.join(' '));
- accumulator.length = 0;
- break;
- }
- });
- if (accumulator.length) {
- results.push(accumulator.join(' '));
- }
- return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); });
- }
- };
-
- var fontSize = {
- name: "font-size",
- initialValue: '0',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'length'
- };
-
- var fontWeight = {
- name: 'font-weight',
- initialValue: 'normal',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isNumberToken(token)) {
- return token.number;
- }
- if (isIdentToken(token)) {
- switch (token.value) {
- case 'bold':
- return 700;
- case 'normal':
- default:
- return 400;
- }
- }
- return 400;
- }
- };
-
- var fontVariant = {
- name: 'font-variant',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (_context, tokens) {
- return tokens.filter(isIdentToken).map(function (token) { return token.value; });
- }
- };
-
- var fontStyle = {
- name: 'font-style',
- initialValue: 'normal',
- prefix: false,
- type: 2 /* IDENT_VALUE */,
- parse: function (_context, overflow) {
- switch (overflow) {
- case 'oblique':
- return "oblique" /* OBLIQUE */;
- case 'italic':
- return "italic" /* ITALIC */;
- case 'normal':
- default:
- return "normal" /* NORMAL */;
- }
- }
- };
-
- var contains = function (bit, value) { return (bit & value) !== 0; };
-
- var content = {
- name: 'content',
- initialValue: 'none',
- type: 1 /* LIST */,
- prefix: false,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return [];
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return [];
- }
- return tokens;
- }
- };
-
- var counterIncrement = {
- name: 'counter-increment',
- initialValue: 'none',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return null;
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return null;
- }
- var increments = [];
- var filtered = tokens.filter(nonWhiteSpace);
- for (var i = 0; i < filtered.length; i++) {
- var counter = filtered[i];
- var next = filtered[i + 1];
- if (counter.type === 20 /* IDENT_TOKEN */) {
- var increment = next && isNumberToken(next) ? next.number : 1;
- increments.push({ counter: counter.value, increment: increment });
- }
- }
- return increments;
- }
- };
-
- var counterReset = {
- name: 'counter-reset',
- initialValue: 'none',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return [];
- }
- var resets = [];
- var filtered = tokens.filter(nonWhiteSpace);
- for (var i = 0; i < filtered.length; i++) {
- var counter = filtered[i];
- var next = filtered[i + 1];
- if (isIdentToken(counter) && counter.value !== 'none') {
- var reset = next && isNumberToken(next) ? next.number : 0;
- resets.push({ counter: counter.value, reset: reset });
- }
- }
- return resets;
- }
- };
-
- var duration = {
- name: 'duration',
- initialValue: '0s',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (context, tokens) {
- return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); });
- }
- };
-
- var quotes = {
- name: 'quotes',
- initialValue: 'none',
- prefix: true,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- if (tokens.length === 0) {
- return null;
- }
- var first = tokens[0];
- if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') {
- return null;
- }
- var quotes = [];
- var filtered = tokens.filter(isStringToken);
- if (filtered.length % 2 !== 0) {
- return null;
- }
- for (var i = 0; i < filtered.length; i += 2) {
- var open_1 = filtered[i].value;
- var close_1 = filtered[i + 1].value;
- quotes.push({ open: open_1, close: close_1 });
- }
- return quotes;
- }
- };
- var getQuote = function (quotes, depth, open) {
- if (!quotes) {
- return '';
- }
- var quote = quotes[Math.min(depth, quotes.length - 1)];
- if (!quote) {
- return '';
- }
- return open ? quote.open : quote.close;
- };
-
- var paintOrder = {
- name: 'paint-order',
- initialValue: 'normal',
- prefix: false,
- type: 1 /* LIST */,
- parse: function (_context, tokens) {
- var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */];
- var layers = [];
- tokens.filter(isIdentToken).forEach(function (token) {
- switch (token.value) {
- case 'stroke':
- layers.push(1 /* STROKE */);
- break;
- case 'fill':
- layers.push(0 /* FILL */);
- break;
- case 'markers':
- layers.push(2 /* MARKERS */);
- break;
- }
- });
- DEFAULT_VALUE.forEach(function (value) {
- if (layers.indexOf(value) === -1) {
- layers.push(value);
- }
- });
- return layers;
- }
- };
-
- var webkitTextStrokeColor = {
- name: "-webkit-text-stroke-color",
- initialValue: 'currentcolor',
- prefix: false,
- type: 3 /* TYPE_VALUE */,
- format: 'color'
- };
-
- var webkitTextStrokeWidth = {
- name: "-webkit-text-stroke-width",
- initialValue: '0',
- type: 0 /* VALUE */,
- prefix: false,
- parse: function (_context, token) {
- if (isDimensionToken(token)) {
- return token.number;
- }
- return 0;
- }
- };
-
- var CSSParsedDeclaration = /** @class */ (function () {
- function CSSParsedDeclaration(context, declaration) {
- var _a, _b;
- this.animationDuration = parse(context, duration, declaration.animationDuration);
- this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip);
- this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor);
- this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage);
- this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin);
- this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition);
- this.backgroundRepeat = parse(context, backgroundRepeat, declaration.backgroundRepeat);
- this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize);
- this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor);
- this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor);
- this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor);
- this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor);
- this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius);
- this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius);
- this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius);
- this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius);
- this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle);
- this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle);
- this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle);
- this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle);
- this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth);
- this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth);
- this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth);
- this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth);
- this.color = parse(context, color, declaration.color);
- this.direction = parse(context, direction, declaration.direction);
- this.display = parse(context, display, declaration.display);
- this.float = parse(context, float, declaration.cssFloat);
- this.fontFamily = parse(context, fontFamily, declaration.fontFamily);
- this.fontSize = parse(context, fontSize, declaration.fontSize);
- this.fontStyle = parse(context, fontStyle, declaration.fontStyle);
- this.fontVariant = parse(context, fontVariant, declaration.fontVariant);
- this.fontWeight = parse(context, fontWeight, declaration.fontWeight);
- this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing);
- this.lineBreak = parse(context, lineBreak, declaration.lineBreak);
- this.lineHeight = parse(context, lineHeight, declaration.lineHeight);
- this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage);
- this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition);
- this.listStyleType = parse(context, listStyleType, declaration.listStyleType);
- this.marginTop = parse(context, marginTop, declaration.marginTop);
- this.marginRight = parse(context, marginRight, declaration.marginRight);
- this.marginBottom = parse(context, marginBottom, declaration.marginBottom);
- this.marginLeft = parse(context, marginLeft, declaration.marginLeft);
- this.opacity = parse(context, opacity, declaration.opacity);
- var overflowTuple = parse(context, overflow, declaration.overflow);
- this.overflowX = overflowTuple[0];
- this.overflowY = overflowTuple[overflowTuple.length > 1 ? 1 : 0];
- this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap);
- this.paddingTop = parse(context, paddingTop, declaration.paddingTop);
- this.paddingRight = parse(context, paddingRight, declaration.paddingRight);
- this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom);
- this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft);
- this.paintOrder = parse(context, paintOrder, declaration.paintOrder);
- this.position = parse(context, position, declaration.position);
- this.textAlign = parse(context, textAlign, declaration.textAlign);
- this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color);
- this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration);
- this.textShadow = parse(context, textShadow, declaration.textShadow);
- this.textTransform = parse(context, textTransform, declaration.textTransform);
- this.transform = parse(context, transform$1, declaration.transform);
- this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin);
- this.visibility = parse(context, visibility, declaration.visibility);
- this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor);
- this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth);
- this.wordBreak = parse(context, wordBreak, declaration.wordBreak);
- this.zIndex = parse(context, zIndex, declaration.zIndex);
- }
- CSSParsedDeclaration.prototype.isVisible = function () {
- return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */;
- };
- CSSParsedDeclaration.prototype.isTransparent = function () {
- return isTransparent(this.backgroundColor);
- };
- CSSParsedDeclaration.prototype.isTransformed = function () {
- return this.transform !== null;
- };
- CSSParsedDeclaration.prototype.isPositioned = function () {
- return this.position !== 0 /* STATIC */;
- };
- CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () {
- return this.isPositioned() && !this.zIndex.auto;
- };
- CSSParsedDeclaration.prototype.isFloating = function () {
- return this.float !== 0 /* NONE */;
- };
- CSSParsedDeclaration.prototype.isInlineLevel = function () {
- return (contains(this.display, 4 /* INLINE */) ||
- contains(this.display, 33554432 /* INLINE_BLOCK */) ||
- contains(this.display, 268435456 /* INLINE_FLEX */) ||
- contains(this.display, 536870912 /* INLINE_GRID */) ||
- contains(this.display, 67108864 /* INLINE_LIST_ITEM */) ||
- contains(this.display, 134217728 /* INLINE_TABLE */));
- };
- return CSSParsedDeclaration;
- }());
- var CSSParsedPseudoDeclaration = /** @class */ (function () {
- function CSSParsedPseudoDeclaration(context, declaration) {
- this.content = parse(context, content, declaration.content);
- this.quotes = parse(context, quotes, declaration.quotes);
- }
- return CSSParsedPseudoDeclaration;
- }());
- var CSSParsedCounterDeclaration = /** @class */ (function () {
- function CSSParsedCounterDeclaration(context, declaration) {
- this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement);
- this.counterReset = parse(context, counterReset, declaration.counterReset);
- }
- return CSSParsedCounterDeclaration;
- }());
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var parse = function (context, descriptor, style) {
- var tokenizer = new Tokenizer();
- var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue;
- tokenizer.write(value);
- var parser = new Parser(tokenizer.read());
- switch (descriptor.type) {
- case 2 /* IDENT_VALUE */:
- var token = parser.parseComponentValue();
- return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue);
- case 0 /* VALUE */:
- return descriptor.parse(context, parser.parseComponentValue());
- case 1 /* LIST */:
- return descriptor.parse(context, parser.parseComponentValues());
- case 4 /* TOKEN_VALUE */:
- return parser.parseComponentValue();
- case 3 /* TYPE_VALUE */:
- switch (descriptor.format) {
- case 'angle':
- return angle.parse(context, parser.parseComponentValue());
- case 'color':
- return color$1.parse(context, parser.parseComponentValue());
- case 'image':
- return image.parse(context, parser.parseComponentValue());
- case 'length':
- var length_1 = parser.parseComponentValue();
- return isLength(length_1) ? length_1 : ZERO_LENGTH;
- case 'length-percentage':
- var value_1 = parser.parseComponentValue();
- return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH;
- case 'time':
- return time.parse(context, parser.parseComponentValue());
- }
- break;
- }
- };
-
- var elementDebuggerAttribute = 'data-html2canvas-debug';
- var getElementDebugType = function (element) {
- var attribute = element.getAttribute(elementDebuggerAttribute);
- switch (attribute) {
- case 'all':
- return 1 /* ALL */;
- case 'clone':
- return 2 /* CLONE */;
- case 'parse':
- return 3 /* PARSE */;
- case 'render':
- return 4 /* RENDER */;
- default:
- return 0 /* NONE */;
- }
- };
- var isDebugging = function (element, type) {
- var elementType = getElementDebugType(element);
- return elementType === 1 /* ALL */ || type === elementType;
- };
-
- var ElementContainer = /** @class */ (function () {
- function ElementContainer(context, element) {
- this.context = context;
- this.textNodes = [];
- this.elements = [];
- this.flags = 0;
- if (isDebugging(element, 3 /* PARSE */)) {
- debugger;
- }
- this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null));
- if (isHTMLElementNode(element)) {
- if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) {
- element.style.animationDuration = '0s';
- }
- if (this.styles.transform !== null) {
- // getBoundingClientRect takes transforms into account
- element.style.transform = 'none';
- }
- }
- this.bounds = parseBounds(this.context, element);
- if (isDebugging(element, 4 /* RENDER */)) {
- this.flags |= 16 /* DEBUG_RENDER */;
- }
- }
- return ElementContainer;
- }());
-
- /*
- * text-segmentation 1.0.3
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var base64 = 'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAA
IAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAF
oECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAA
AAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAU
ABQAFAAUABQAFAAUABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA=';
-
- /*
- * utrie 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i$1 = 0; i$1 < chars$1.length; i$1++) {
- lookup$1[chars$1.charCodeAt(i$1)] = i$1;
- }
- var decode = function (base64) {
- var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4;
- if (base64[base64.length - 1] === '=') {
- bufferLength--;
- if (base64[base64.length - 2] === '=') {
- bufferLength--;
- }
- }
- var buffer = typeof ArrayBuffer !== 'undefined' &&
- typeof Uint8Array !== 'undefined' &&
- typeof Uint8Array.prototype.slice !== 'undefined'
- ? new ArrayBuffer(bufferLength)
- : new Array(bufferLength);
- var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer);
- for (i = 0; i < len; i += 4) {
- encoded1 = lookup$1[base64.charCodeAt(i)];
- encoded2 = lookup$1[base64.charCodeAt(i + 1)];
- encoded3 = lookup$1[base64.charCodeAt(i + 2)];
- encoded4 = lookup$1[base64.charCodeAt(i + 3)];
- bytes[p++] = (encoded1 << 2) | (encoded2 >> 4);
- bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2);
- bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63);
- }
- return buffer;
- };
- var polyUint16Array = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 2) {
- bytes.push((buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
- var polyUint32Array = function (buffer) {
- var length = buffer.length;
- var bytes = [];
- for (var i = 0; i < length; i += 4) {
- bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]);
- }
- return bytes;
- };
-
- /** Shift size for getting the index-2 table offset. */
- var UTRIE2_SHIFT_2 = 5;
- /** Shift size for getting the index-1 table offset. */
- var UTRIE2_SHIFT_1 = 6 + 5;
- /**
- * Shift size for shifting left the index array values.
- * Increases possible data size with 16-bit index values at the cost
- * of compactability.
- * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY.
- */
- var UTRIE2_INDEX_SHIFT = 2;
- /**
- * Difference between the two shift sizes,
- * for getting an index-1 offset from an index-2 offset. 6=11-5
- */
- var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2;
- /**
- * The part of the index-2 table for U+D800..U+DBFF stores values for
- * lead surrogate code _units_ not code _points_.
- * Values for lead surrogate code _points_ are indexed with this portion of the table.
- * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.)
- */
- var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2;
- /** Number of entries in a data block. 32=0x20 */
- var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2;
- /** Mask for getting the lower bits for the in-data-block offset. */
- var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1;
- var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2;
- /** Count the lengths of both BMP pieces. 2080=0x820 */
- var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH;
- /**
- * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820.
- * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2.
- */
- var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH;
- var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */
- /**
- * The index-1 table, only used for supplementary code points, at offset 2112=0x840.
- * Variable length, for code points up to highStart, where the last single-value range starts.
- * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1.
- * (For 0x100000 supplementary code points U+10000..U+10ffff.)
- *
- * The part of the index-2 table for supplementary code points starts
- * after this index-1 table.
- *
- * Both the index-1 table and the following part of the index-2 table
- * are omitted completely if there is only BMP data.
- */
- var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH;
- /**
- * Number of index-1 entries for the BMP. 32=0x20
- * This part of the index-1 table is omitted from the serialized form.
- */
- var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1;
- /** Number of entries in an index-2 block. 64=0x40 */
- var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2;
- /** Mask for getting the lower bits for the in-index-2-block offset. */
- var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1;
- var slice16 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint16Array(Array.prototype.slice.call(view, start, end));
- };
- var slice32 = function (view, start, end) {
- if (view.slice) {
- return view.slice(start, end);
- }
- return new Uint32Array(Array.prototype.slice.call(view, start, end));
- };
- var createTrieFromBase64 = function (base64, _byteLength) {
- var buffer = decode(base64);
- var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer);
- var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer);
- var headerLength = 24;
- var index = slice16(view16, headerLength / 2, view32[4] / 2);
- var data = view32[5] === 2
- ? slice16(view16, (headerLength + view32[4]) / 2)
- : slice32(view32, Math.ceil((headerLength + view32[4]) / 4));
- return new Trie(view32[0], view32[1], view32[2], view32[3], index, data);
- };
- var Trie = /** @class */ (function () {
- function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) {
- this.initialValue = initialValue;
- this.errorValue = errorValue;
- this.highStart = highStart;
- this.highValueIndex = highValueIndex;
- this.index = index;
- this.data = data;
- }
- /**
- * Get the value for a code point as stored in the Trie.
- *
- * @param codePoint the code point
- * @return the value
- */
- Trie.prototype.get = function (codePoint) {
- var ix;
- if (codePoint >= 0) {
- if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) {
- // Ordinary BMP code point, excluding leading surrogates.
- // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index.
- // 16 bit data is stored in the index array itself.
- ix = this.index[codePoint >> UTRIE2_SHIFT_2];
- ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK);
- return this.data[ix];
- }
- if (codePoint <= 0xffff) {
- // Lead Surrogate Code Point. A Separate index section is stored for
- // lead surrogate code units and code points.
- // The main index has the code unit data.
- // For this function, we need the code point data.
- // Note: this expression could be refactored for slightly improved efficiency, but
- // surrogate code points will be so rare in practice that it's not worth it.
- ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)];
- ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK);
- return this.data[ix];
- }
- if (codePoint < this.highStart) {
- // Supplemental code point, use two-level lookup.
- ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1);
- ix = this.index[ix];
- ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK;
- ix = this.index[ix];
- ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK);
- return this.data[ix];
- }
- if (codePoint <= 0x10ffff) {
- return this.data[this.highValueIndex];
- }
- }
- // Fall through. The code point is outside of the legal range of 0..0x10ffff.
- return this.errorValue;
- };
- return Trie;
- }());
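The constants above describe the serialized UTRIE2 layout, and Trie.get implements its lookup: BMP code points resolve with a single index read plus the in-block offset, lead surrogate code points go through the LSCP section, and supplementary code points go through the index-1 and index-2 tables. A minimal usage sketch, assuming the base64 payload and the createTrieFromBase64 helper defined earlier in this bundle (the variable names below are illustrative only):

    // Decode the serialized trie once, then query it per code point.
    var breakClassTrie = createTrieFromBase64(base64);
    // BMP code point: index[cp >> UTRIE2_SHIFT_2], then the in-block offset (cp & UTRIE2_DATA_MASK).
    var latinClass = breakClassTrie.get(0x41);
    // Supplementary code point: index-1 entry -> index-2 block -> data block (or the highValueIndex slot once cp >= highStart).
    var emojiClass = breakClassTrie.get(0x1F600);
    // Anything outside 0..0x10FFFF falls back to breakClassTrie.errorValue.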
-
- /*
- * base64-arraybuffer 1.0.2
- * Copyright (c) 2022 Niklas von Hertzen
- * Released under MIT License
- */
- var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
- // Use a lookup table to find the index.
- var lookup = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256);
- for (var i = 0; i < chars.length; i++) {
- lookup[chars.charCodeAt(i)] = i;
- }
-
- var Prepend = 1;
- var CR = 2;
- var LF = 3;
- var Control = 4;
- var Extend = 5;
- var SpacingMark = 7;
- var L = 8;
- var V = 9;
- var T = 10;
- var LV = 11;
- var LVT = 12;
- var ZWJ = 13;
- var Extended_Pictographic = 14;
- var RI = 15;
- var toCodePoints = function (str) {
- var codePoints = [];
- var i = 0;
- var length = str.length;
- while (i < length) {
- var value = str.charCodeAt(i++);
- if (value >= 0xd800 && value <= 0xdbff && i < length) {
- var extra = str.charCodeAt(i++);
- if ((extra & 0xfc00) === 0xdc00) {
- codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000);
- }
- else {
- codePoints.push(value);
- i--;
- }
- }
- else {
- codePoints.push(value);
- }
- }
- return codePoints;
- };
- var fromCodePoint = function () {
- var codePoints = [];
- for (var _i = 0; _i < arguments.length; _i++) {
- codePoints[_i] = arguments[_i];
- }
- if (String.fromCodePoint) {
- return String.fromCodePoint.apply(String, codePoints);
- }
- var length = codePoints.length;
- if (!length) {
- return '';
- }
- var codeUnits = [];
- var index = -1;
- var result = '';
- while (++index < length) {
- var codePoint = codePoints[index];
- if (codePoint <= 0xffff) {
- codeUnits.push(codePoint);
- }
- else {
- codePoint -= 0x10000;
- codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00);
- }
- if (index + 1 === length || codeUnits.length > 0x4000) {
- result += String.fromCharCode.apply(String, codeUnits);
- codeUnits.length = 0;
- }
- }
- return result;
- };
- var UnicodeTrie = createTrieFromBase64(base64);
- var BREAK_NOT_ALLOWED = '×';
- var BREAK_ALLOWED = '÷';
- var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); };
- var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) {
- var prevIndex = index - 2;
- var prev = classTypes[prevIndex];
- var current = classTypes[index - 1];
- var next = classTypes[index];
- // GB3 Do not break between a CR and LF
- if (current === CR && next === LF) {
- return BREAK_NOT_ALLOWED;
- }
- // GB4 Otherwise, break before and after controls.
- if (current === CR || current === LF || current === Control) {
- return BREAK_ALLOWED;
- }
- // GB5
- if (next === CR || next === LF || next === Control) {
- return BREAK_ALLOWED;
- }
- // Do not break Hangul syllable sequences.
- // GB6
- if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) {
- return BREAK_NOT_ALLOWED;
- }
- // GB7
- if ((current === LV || current === V) && (next === V || next === T)) {
- return BREAK_NOT_ALLOWED;
- }
- // GB8
- if ((current === LVT || current === T) && next === T) {
- return BREAK_NOT_ALLOWED;
- }
- // GB9 Do not break before extending characters or ZWJ.
- if (next === ZWJ || next === Extend) {
- return BREAK_NOT_ALLOWED;
- }
- // Do not break before SpacingMarks, or after Prepend characters.
- // GB9a
- if (next === SpacingMark) {
- return BREAK_NOT_ALLOWED;
- }
- // GB9a
- if (current === Prepend) {
- return BREAK_NOT_ALLOWED;
- }
- // GB11 Do not break within emoji modifier sequences or emoji zwj sequences.
- if (current === ZWJ && next === Extended_Pictographic) {
- while (prev === Extend) {
- prev = classTypes[--prevIndex];
- }
- if (prev === Extended_Pictographic) {
- return BREAK_NOT_ALLOWED;
- }
- }
- // GB12 Do not break within emoji flag sequences.
- // That is, do not break between regional indicator (RI) symbols
- // if there is an odd number of RI characters before the break point.
- if (current === RI && next === RI) {
- var countRI = 0;
- while (prev === RI) {
- countRI++;
- prev = classTypes[--prevIndex];
- }
- if (countRI % 2 === 0) {
- return BREAK_NOT_ALLOWED;
- }
- }
- return BREAK_ALLOWED;
- };
- var GraphemeBreaker = function (str) {
- var codePoints = toCodePoints(str);
- var length = codePoints.length;
- var index = 0;
- var lastEnd = 0;
- var classTypes = codePoints.map(codePointToClass);
- return {
- next: function () {
- if (index >= length) {
- return { done: true, value: null };
- }
- var graphemeBreak = BREAK_NOT_ALLOWED;
- while (index < length &&
- (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { }
- if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) {
- var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index));
- lastEnd = index;
- return { value: value, done: false };
- }
- return { done: true, value: null };
- },
- };
- };
- var splitGraphemes = function (str) {
- var breaker = GraphemeBreaker(str);
- var graphemes = [];
- var bk;
- while (!(bk = breaker.next()).done) {
- if (bk.value) {
- graphemes.push(bk.value.slice());
- }
- }
- return graphemes;
- };
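splitGraphemes drains the iterator above, applying the GB3-GB12 rules so that multi-code-point clusters come back as single array entries. A small illustrative example, assuming the functions defined in this bundle:

    // Combining marks, emoji ZWJ sequences and regional-indicator pairs stay joined.
    var clusters = splitGraphemes('e\u0301\uD83D\uDC69\u200D\uD83D\uDC69\u200D\uD83D\uDC67\uD83C\uDDFA\uD83C\uDDF8');
    // clusters -> ['e\u0301', '👩‍👩‍👧', '🇺🇸']: three grapheme clusters rather than fourteen UTF-16 code units.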
-
- var testRangeBounds = function (document) {
- var TEST_HEIGHT = 123;
- if (document.createRange) {
- var range = document.createRange();
- if (range.getBoundingClientRect) {
- var testElement = document.createElement('boundtest');
- testElement.style.height = TEST_HEIGHT + "px";
- testElement.style.display = 'block';
- document.body.appendChild(testElement);
- range.selectNode(testElement);
- var rangeBounds = range.getBoundingClientRect();
- var rangeHeight = Math.round(rangeBounds.height);
- document.body.removeChild(testElement);
- if (rangeHeight === TEST_HEIGHT) {
- return true;
- }
- }
- }
- return false;
- };
- var testIOSLineBreak = function (document) {
- var testElement = document.createElement('boundtest');
- testElement.style.width = '50px';
- testElement.style.display = 'block';
- testElement.style.fontSize = '12px';
- testElement.style.letterSpacing = '0px';
- testElement.style.wordSpacing = '0px';
- document.body.appendChild(testElement);
- var range = document.createRange();
- testElement.innerHTML = typeof ''.repeat === 'function' ? '👨'.repeat(10) : '';
- var node = testElement.firstChild;
- var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); });
- var offset = 0;
- var prev = {};
- // ios 13 does not handle range getBoundingClientRect line changes correctly #2177
- var supports = textList.every(function (text, i) {
- range.setStart(node, offset);
- range.setEnd(node, offset + text.length);
- var rect = range.getBoundingClientRect();
- offset += text.length;
- var boundAhead = rect.x > prev.x || rect.y > prev.y;
- prev = rect;
- if (i === 0) {
- return true;
- }
- return boundAhead;
- });
- document.body.removeChild(testElement);
- return supports;
- };
- var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; };
- var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; };
- var testSVG = function (document) {
- var img = new Image();
- var canvas = document.createElement('canvas');
- var ctx = canvas.getContext('2d');
- if (!ctx) {
- return false;
- }
- img.src = "data:image/svg+xml,";
- try {
- ctx.drawImage(img, 0, 0);
- canvas.toDataURL();
- }
- catch (e) {
- return false;
- }
- return true;
- };
- var isGreenPixel = function (data) {
- return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255;
- };
- var testForeignObject = function (document) {
- var canvas = document.createElement('canvas');
- var size = 100;
- canvas.width = size;
- canvas.height = size;
- var ctx = canvas.getContext('2d');
- if (!ctx) {
- return Promise.reject(false);
- }
- ctx.fillStyle = 'rgb(0, 255, 0)';
- ctx.fillRect(0, 0, size, size);
- var img = new Image();
- var greenImageSrc = canvas.toDataURL();
- img.src = greenImageSrc;
- var svg = createForeignObjectSVG(size, size, 0, 0, img);
- ctx.fillStyle = 'red';
- ctx.fillRect(0, 0, size, size);
- return loadSerializedSVG$1(svg)
- .then(function (img) {
- ctx.drawImage(img, 0, 0);
- var data = ctx.getImageData(0, 0, size, size).data;
- ctx.fillStyle = 'red';
- ctx.fillRect(0, 0, size, size);
- var node = document.createElement('div');
- node.style.backgroundImage = "url(" + greenImageSrc + ")";
- node.style.height = size + "px";
- // Firefox 55 does not render inline tags
- return isGreenPixel(data)
- ? loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node))
- : Promise.reject(false);
- })
- .then(function (img) {
- ctx.drawImage(img, 0, 0);
- // Edge does not render background-images
- return isGreenPixel(ctx.getImageData(0, 0, size, size).data);
- })
- .catch(function () { return false; });
- };
- var createForeignObjectSVG = function (width, height, x, y, node) {
- var xmlns = 'http://www.w3.org/2000/svg';
- var svg = document.createElementNS(xmlns, 'svg');
- var foreignObject = document.createElementNS(xmlns, 'foreignObject');
- svg.setAttributeNS(null, 'width', width.toString());
- svg.setAttributeNS(null, 'height', height.toString());
- foreignObject.setAttributeNS(null, 'width', '100%');
- foreignObject.setAttributeNS(null, 'height', '100%');
- foreignObject.setAttributeNS(null, 'x', x.toString());
- foreignObject.setAttributeNS(null, 'y', y.toString());
- foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true');
- svg.appendChild(foreignObject);
- foreignObject.appendChild(node);
- return svg;
- };
- var loadSerializedSVG$1 = function (svg) {
- return new Promise(function (resolve, reject) {
- var img = new Image();
- img.onload = function () { return resolve(img); };
- img.onerror = reject;
- img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg));
- });
- };
- var FEATURES = {
- get SUPPORT_RANGE_BOUNDS() {
- var value = testRangeBounds(document);
- Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value });
- return value;
- },
- get SUPPORT_WORD_BREAKING() {
- var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document);
- Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value });
- return value;
- },
- get SUPPORT_SVG_DRAWING() {
- var value = testSVG(document);
- Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value });
- return value;
- },
- get SUPPORT_FOREIGNOBJECT_DRAWING() {
- var value = typeof Array.from === 'function' && typeof window.fetch === 'function'
- ? testForeignObject(document)
- : Promise.resolve(false);
- Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value });
- return value;
- },
- get SUPPORT_CORS_IMAGES() {
- var value = testCORS();
- Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value });
- return value;
- },
- get SUPPORT_RESPONSE_TYPE() {
- var value = testResponseType();
- Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value });
- return value;
- },
- get SUPPORT_CORS_XHR() {
- var value = 'withCredentials' in new XMLHttpRequest();
- Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value });
- return value;
- },
- get SUPPORT_NATIVE_TEXT_SEGMENTATION() {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter);
- Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value });
- return value;
- }
- };
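Each FEATURES getter runs its probe once and then overwrites itself with a plain data property via Object.defineProperty, so repeated reads return the cached result without re-running the detection. The same memoized-getter pattern in isolation (illustrative names, not part of this bundle):

    function detectSomething() {
        // Placeholder probe; a real check might create a canvas or an XMLHttpRequest.
        return typeof document !== 'undefined';
    }
    var Capabilities = {
        get SUPPORT_SOMETHING() {
            var value = detectSomething(); // executed only on the first access
            Object.defineProperty(Capabilities, 'SUPPORT_SOMETHING', { value: value });
            return value; // later reads hit the shadowing data property defined above
        }
    };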
-
- var TextBounds = /** @class */ (function () {
- function TextBounds(text, bounds) {
- this.text = text;
- this.bounds = bounds;
- }
- return TextBounds;
- }());
- var parseTextBounds = function (context, value, styles, node) {
- var textList = breakText(value, styles);
- var textBounds = [];
- var offset = 0;
- textList.forEach(function (text) {
- if (styles.textDecorationLine.length || text.trim().length > 0) {
- if (FEATURES.SUPPORT_RANGE_BOUNDS) {
- var clientRects = createRange(node, offset, text.length).getClientRects();
- if (clientRects.length > 1) {
- var subSegments = segmentGraphemes(text);
- var subOffset_1 = 0;
- subSegments.forEach(function (subSegment) {
- textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects())));
- subOffset_1 += subSegment.length;
- });
- }
- else {
- textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects)));
- }
- }
- else {
- var replacementNode = node.splitText(text.length);
- textBounds.push(new TextBounds(text, getWrapperBounds(context, node)));
- node = replacementNode;
- }
- }
- else if (!FEATURES.SUPPORT_RANGE_BOUNDS) {
- node = node.splitText(text.length);
- }
- offset += text.length;
- });
- return textBounds;
- };
- var getWrapperBounds = function (context, node) {
- var ownerDocument = node.ownerDocument;
- if (ownerDocument) {
- var wrapper = ownerDocument.createElement('html2canvaswrapper');
- wrapper.appendChild(node.cloneNode(true));
- var parentNode = node.parentNode;
- if (parentNode) {
- parentNode.replaceChild(wrapper, node);
- var bounds = parseBounds(context, wrapper);
- if (wrapper.firstChild) {
- parentNode.replaceChild(wrapper.firstChild, wrapper);
- }
- return bounds;
- }
- }
- return Bounds.EMPTY;
- };
- var createRange = function (node, offset, length) {
- var ownerDocument = node.ownerDocument;
- if (!ownerDocument) {
- throw new Error('Node has no owner document');
- }
- var range = ownerDocument.createRange();
- range.setStart(node, offset);
- range.setEnd(node, offset + length);
- return range;
- };
- var segmentGraphemes = function (value) {
- if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' });
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
- }
- return splitGraphemes(value);
- };
- var segmentWords = function (value, styles) {
- if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var segmenter = new Intl.Segmenter(void 0, {
- granularity: 'word'
- });
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
- }
- return breakWords(value, styles);
- };
- var breakText = function (value, styles) {
- return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles);
- };
- // https://drafts.csswg.org/css-text/#word-separator
- var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091];
- var breakWords = function (str, styles) {
- var breaker = LineBreaker(str, {
- lineBreak: styles.lineBreak,
- wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak
- });
- var words = [];
- var bk;
- var _loop_1 = function () {
- if (bk.value) {
- var value = bk.value.slice();
- var codePoints = toCodePoints$1(value);
- var word_1 = '';
- codePoints.forEach(function (codePoint) {
- if (wordSeparators.indexOf(codePoint) === -1) {
- word_1 += fromCodePoint$1(codePoint);
- }
- else {
- if (word_1.length) {
- words.push(word_1);
- }
- words.push(fromCodePoint$1(codePoint));
- word_1 = '';
- }
- });
- if (word_1.length) {
- words.push(word_1);
- }
- }
- };
- while (!(bk = breaker.next()).done) {
- _loop_1();
- }
- return words;
- };
-
- var TextContainer = /** @class */ (function () {
- function TextContainer(context, node, styles) {
- this.text = transform(node.data, styles.textTransform);
- this.textBounds = parseTextBounds(context, this.text, styles, node);
- }
- return TextContainer;
- }());
- var transform = function (text, transform) {
- switch (transform) {
- case 1 /* LOWERCASE */:
- return text.toLowerCase();
- case 3 /* CAPITALIZE */:
- return text.replace(CAPITALIZE, capitalize);
- case 2 /* UPPERCASE */:
- return text.toUpperCase();
- default:
- return text;
- }
- };
- var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g;
- var capitalize = function (m, p1, p2) {
- if (m.length > 0) {
- return p1 + p2.toUpperCase();
- }
- return m;
- };
-
- var ImageElementContainer = /** @class */ (function (_super) {
- __extends(ImageElementContainer, _super);
- function ImageElementContainer(context, img) {
- var _this = _super.call(this, context, img) || this;
- _this.src = img.currentSrc || img.src;
- _this.intrinsicWidth = img.naturalWidth;
- _this.intrinsicHeight = img.naturalHeight;
- _this.context.cache.addImage(_this.src);
- return _this;
- }
- return ImageElementContainer;
- }(ElementContainer));
-
- var CanvasElementContainer = /** @class */ (function (_super) {
- __extends(CanvasElementContainer, _super);
- function CanvasElementContainer(context, canvas) {
- var _this = _super.call(this, context, canvas) || this;
- _this.canvas = canvas;
- _this.intrinsicWidth = canvas.width;
- _this.intrinsicHeight = canvas.height;
- return _this;
- }
- return CanvasElementContainer;
- }(ElementContainer));
-
- var SVGElementContainer = /** @class */ (function (_super) {
- __extends(SVGElementContainer, _super);
- function SVGElementContainer(context, img) {
- var _this = _super.call(this, context, img) || this;
- var s = new XMLSerializer();
- var bounds = parseBounds(context, img);
- img.setAttribute('width', bounds.width + "px");
- img.setAttribute('height', bounds.height + "px");
- _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img));
- _this.intrinsicWidth = img.width.baseVal.value;
- _this.intrinsicHeight = img.height.baseVal.value;
- _this.context.cache.addImage(_this.svg);
- return _this;
- }
- return SVGElementContainer;
- }(ElementContainer));
-
- var LIElementContainer = /** @class */ (function (_super) {
- __extends(LIElementContainer, _super);
- function LIElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.value = element.value;
- return _this;
- }
- return LIElementContainer;
- }(ElementContainer));
-
- var OLElementContainer = /** @class */ (function (_super) {
- __extends(OLElementContainer, _super);
- function OLElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.start = element.start;
- _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true;
- return _this;
- }
- return OLElementContainer;
- }(ElementContainer));
-
- var CHECKBOX_BORDER_RADIUS = [
- {
- type: 15 /* DIMENSION_TOKEN */,
- flags: 0,
- unit: 'px',
- number: 3
- }
- ];
- var RADIO_BORDER_RADIUS = [
- {
- type: 16 /* PERCENTAGE_TOKEN */,
- flags: 0,
- number: 50
- }
- ];
- var reformatInputBounds = function (bounds) {
- if (bounds.width > bounds.height) {
- return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height);
- }
- else if (bounds.width < bounds.height) {
- return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width);
- }
- return bounds;
- };
- var getInputValue = function (node) {
- var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value;
- return value.length === 0 ? node.placeholder || '' : value;
- };
- var CHECKBOX = 'checkbox';
- var RADIO = 'radio';
- var PASSWORD = 'password';
- var INPUT_COLOR = 0x2a2a2aff;
- var InputElementContainer = /** @class */ (function (_super) {
- __extends(InputElementContainer, _super);
- function InputElementContainer(context, input) {
- var _this = _super.call(this, context, input) || this;
- _this.type = input.type.toLowerCase();
- _this.checked = input.checked;
- _this.value = getInputValue(input);
- if (_this.type === CHECKBOX || _this.type === RADIO) {
- _this.styles.backgroundColor = 0xdededeff;
- _this.styles.borderTopColor =
- _this.styles.borderRightColor =
- _this.styles.borderBottomColor =
- _this.styles.borderLeftColor =
- 0xa5a5a5ff;
- _this.styles.borderTopWidth =
- _this.styles.borderRightWidth =
- _this.styles.borderBottomWidth =
- _this.styles.borderLeftWidth =
- 1;
- _this.styles.borderTopStyle =
- _this.styles.borderRightStyle =
- _this.styles.borderBottomStyle =
- _this.styles.borderLeftStyle =
- 1 /* SOLID */;
- _this.styles.backgroundClip = [0 /* BORDER_BOX */];
- _this.styles.backgroundOrigin = [0 /* BORDER_BOX */];
- _this.bounds = reformatInputBounds(_this.bounds);
- }
- switch (_this.type) {
- case CHECKBOX:
- _this.styles.borderTopRightRadius =
- _this.styles.borderTopLeftRadius =
- _this.styles.borderBottomRightRadius =
- _this.styles.borderBottomLeftRadius =
- CHECKBOX_BORDER_RADIUS;
- break;
- case RADIO:
- _this.styles.borderTopRightRadius =
- _this.styles.borderTopLeftRadius =
- _this.styles.borderBottomRightRadius =
- _this.styles.borderBottomLeftRadius =
- RADIO_BORDER_RADIUS;
- break;
- }
- return _this;
- }
- return InputElementContainer;
- }(ElementContainer));
-
- var SelectElementContainer = /** @class */ (function (_super) {
- __extends(SelectElementContainer, _super);
- function SelectElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- var option = element.options[element.selectedIndex || 0];
- _this.value = option ? option.text || '' : '';
- return _this;
- }
- return SelectElementContainer;
- }(ElementContainer));
-
- var TextareaElementContainer = /** @class */ (function (_super) {
- __extends(TextareaElementContainer, _super);
- function TextareaElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.value = element.value;
- return _this;
- }
- return TextareaElementContainer;
- }(ElementContainer));
-
- var IFrameElementContainer = /** @class */ (function (_super) {
- __extends(IFrameElementContainer, _super);
- function IFrameElementContainer(context, iframe) {
- var _this = _super.call(this, context, iframe) || this;
- _this.src = iframe.src;
- _this.width = parseInt(iframe.width, 10) || 0;
- _this.height = parseInt(iframe.height, 10) || 0;
- _this.backgroundColor = _this.styles.backgroundColor;
- try {
- if (iframe.contentWindow &&
- iframe.contentWindow.document &&
- iframe.contentWindow.document.documentElement) {
- _this.tree = parseTree(context, iframe.contentWindow.document.documentElement);
- // http://www.w3.org/TR/css3-background/#special-backgrounds
- var documentBackgroundColor = iframe.contentWindow.document.documentElement
- ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor)
- : COLORS.TRANSPARENT;
- var bodyBackgroundColor = iframe.contentWindow.document.body
- ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor)
- : COLORS.TRANSPARENT;
- _this.backgroundColor = isTransparent(documentBackgroundColor)
- ? isTransparent(bodyBackgroundColor)
- ? _this.styles.backgroundColor
- : bodyBackgroundColor
- : documentBackgroundColor;
- }
- }
- catch (e) { }
- return _this;
- }
- return IFrameElementContainer;
- }(ElementContainer));
-
- var LIST_OWNERS = ['OL', 'UL', 'MENU'];
- var parseNodeTree = function (context, node, parent, root) {
- for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) {
- nextNode = childNode.nextSibling;
- if (isTextNode(childNode) && childNode.data.trim().length > 0) {
- parent.textNodes.push(new TextContainer(context, childNode, parent.styles));
- }
- else if (isElementNode(childNode)) {
- if (isSlotElement(childNode) && childNode.assignedNodes) {
- childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); });
- }
- else {
- var container = createContainer(context, childNode);
- if (container.styles.isVisible()) {
- if (createsRealStackingContext(childNode, container, root)) {
- container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
- }
- else if (createsStackingContext(container.styles)) {
- container.flags |= 2 /* CREATES_STACKING_CONTEXT */;
- }
- if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) {
- container.flags |= 8 /* IS_LIST_OWNER */;
- }
- parent.elements.push(container);
- childNode.slot;
- if (childNode.shadowRoot) {
- parseNodeTree(context, childNode.shadowRoot, container, root);
- }
- else if (!isTextareaElement(childNode) &&
- !isSVGElement(childNode) &&
- !isSelectElement(childNode)) {
- parseNodeTree(context, childNode, container, root);
- }
- }
- }
- }
- }
- };
- var createContainer = function (context, element) {
- if (isImageElement(element)) {
- return new ImageElementContainer(context, element);
- }
- if (isCanvasElement(element)) {
- return new CanvasElementContainer(context, element);
- }
- if (isSVGElement(element)) {
- return new SVGElementContainer(context, element);
- }
- if (isLIElement(element)) {
- return new LIElementContainer(context, element);
- }
- if (isOLElement(element)) {
- return new OLElementContainer(context, element);
- }
- if (isInputElement(element)) {
- return new InputElementContainer(context, element);
- }
- if (isSelectElement(element)) {
- return new SelectElementContainer(context, element);
- }
- if (isTextareaElement(element)) {
- return new TextareaElementContainer(context, element);
- }
- if (isIFrameElement(element)) {
- return new IFrameElementContainer(context, element);
- }
- return new ElementContainer(context, element);
- };
- var parseTree = function (context, element) {
- var container = createContainer(context, element);
- container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
- parseNodeTree(context, element, container, container);
- return container;
- };
- var createsRealStackingContext = function (node, container, root) {
- return (container.styles.isPositionedWithZIndex() ||
- container.styles.opacity < 1 ||
- container.styles.isTransformed() ||
- (isBodyElement(node) && root.styles.isTransparent()));
- };
- var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); };
- var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; };
- var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; };
- var isHTMLElementNode = function (node) {
- return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node);
- };
- var isSVGElementNode = function (element) {
- return typeof element.className === 'object';
- };
- var isLIElement = function (node) { return node.tagName === 'LI'; };
- var isOLElement = function (node) { return node.tagName === 'OL'; };
- var isInputElement = function (node) { return node.tagName === 'INPUT'; };
- var isHTMLElement = function (node) { return node.tagName === 'HTML'; };
- var isSVGElement = function (node) { return node.tagName === 'svg'; };
- var isBodyElement = function (node) { return node.tagName === 'BODY'; };
- var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; };
- var isVideoElement = function (node) { return node.tagName === 'VIDEO'; };
- var isImageElement = function (node) { return node.tagName === 'IMG'; };
- var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; };
- var isStyleElement = function (node) { return node.tagName === 'STYLE'; };
- var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; };
- var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; };
- var isSelectElement = function (node) { return node.tagName === 'SELECT'; };
- var isSlotElement = function (node) { return node.tagName === 'SLOT'; };
- // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name
- var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; };
-
- var CounterState = /** @class */ (function () {
- function CounterState() {
- this.counters = {};
- }
- CounterState.prototype.getCounterValue = function (name) {
- var counter = this.counters[name];
- if (counter && counter.length) {
- return counter[counter.length - 1];
- }
- return 1;
- };
- CounterState.prototype.getCounterValues = function (name) {
- var counter = this.counters[name];
- return counter ? counter : [];
- };
- CounterState.prototype.pop = function (counters) {
- var _this = this;
- counters.forEach(function (counter) { return _this.counters[counter].pop(); });
- };
- CounterState.prototype.parse = function (style) {
- var _this = this;
- var counterIncrement = style.counterIncrement;
- var counterReset = style.counterReset;
- var canReset = true;
- if (counterIncrement !== null) {
- counterIncrement.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- if (counter && entry.increment !== 0) {
- canReset = false;
- if (!counter.length) {
- counter.push(1);
- }
- counter[Math.max(0, counter.length - 1)] += entry.increment;
- }
- });
- }
- var counterNames = [];
- if (canReset) {
- counterReset.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- counterNames.push(entry.counter);
- if (!counter) {
- counter = _this.counters[entry.counter] = [];
- }
- counter.push(entry.reset);
- });
- }
- return counterNames;
- };
- return CounterState;
- }());
- var ROMAN_UPPER = {
- integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1],
- values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I']
- };
- var ARMENIAN = {
- integers: [
- 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70,
- 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'Ք',
- 'Փ',
- 'Ւ',
- 'Ց',
- 'Ր',
- 'Տ',
- 'Վ',
- 'Ս',
- 'Ռ',
- 'Ջ',
- 'Պ',
- 'Չ',
- 'Ո',
- 'Շ',
- 'Ն',
- 'Յ',
- 'Մ',
- 'Ճ',
- 'Ղ',
- 'Ձ',
- 'Հ',
- 'Կ',
- 'Ծ',
- 'Խ',
- 'Լ',
- 'Ի',
- 'Ժ',
- 'Թ',
- 'Ը',
- 'Է',
- 'Զ',
- 'Ե',
- 'Դ',
- 'Գ',
- 'Բ',
- 'Ա'
- ]
- };
- var HEBREW = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20,
- 19, 18, 17, 16, 15, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'י׳',
- 'ט׳',
- 'ח׳',
- 'ז׳',
- 'ו׳',
- 'ה׳',
- 'ד׳',
- 'ג׳',
- 'ב׳',
- 'א׳',
- 'ת',
- 'ש',
- 'ר',
- 'ק',
- 'צ',
- 'פ',
- 'ע',
- 'ס',
- 'נ',
- 'מ',
- 'ל',
- 'כ',
- 'יט',
- 'יח',
- 'יז',
- 'טז',
- 'טו',
- 'י',
- 'ט',
- 'ח',
- 'ז',
- 'ו',
- 'ה',
- 'ד',
- 'ג',
- 'ב',
- 'א'
- ]
- };
- var GEORGIAN = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90,
- 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'ჵ',
- 'ჰ',
- 'ჯ',
- 'ჴ',
- 'ხ',
- 'ჭ',
- 'წ',
- 'ძ',
- 'ც',
- 'ჩ',
- 'შ',
- 'ყ',
- 'ღ',
- 'ქ',
- 'ფ',
- 'ჳ',
- 'ტ',
- 'ს',
- 'რ',
- 'ჟ',
- 'პ',
- 'ო',
- 'ჲ',
- 'ნ',
- 'მ',
- 'ლ',
- 'კ',
- 'ი',
- 'თ',
- 'ჱ',
- 'ზ',
- 'ვ',
- 'ე',
- 'დ',
- 'გ',
- 'ბ',
- 'ა'
- ]
- };
- var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) {
- if (value < min || value > max) {
- return createCounterText(value, fallback, suffix.length > 0);
- }
- return (symbols.integers.reduce(function (string, integer, index) {
- while (value >= integer) {
- value -= integer;
- string += symbols.values[index];
- }
- return string;
- }, '') + suffix);
- };
- var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) {
- var string = '';
- do {
- if (!isNumeric) {
- value--;
- }
- string = resolver(value) + string;
- value /= codePointRangeLength;
- } while (value * codePointRangeLength >= codePointRangeLength);
- return string;
- };
- var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) {
- var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1;
- return ((value < 0 ? '-' : '') +
- (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) {
- return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart);
- }) +
- suffix));
- };
- var createCounterStyleFromSymbols = function (value, symbols, suffix) {
- if (suffix === void 0) { suffix = '. '; }
- var codePointRangeLength = symbols.length;
- return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix);
- };
- var CJK_ZEROS = 1 << 0;
- var CJK_TEN_COEFFICIENTS = 1 << 1;
- var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2;
- var CJK_HUNDRED_COEFFICIENTS = 1 << 3;
- var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) {
- if (value < -9999 || value > 9999) {
- return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0);
- }
- var tmp = Math.abs(value);
- var string = suffix;
- if (tmp === 0) {
- return numbers[0] + string;
- }
- for (var digit = 0; tmp > 0 && digit <= 4; digit++) {
- var coefficient = tmp % 10;
- if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') {
- string = numbers[coefficient] + string;
- }
- else if (coefficient > 1 ||
- (coefficient === 1 && digit === 0) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) ||
- (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) {
- string = numbers[coefficient] + (digit > 0 ? multipliers[digit - 1] : '') + string;
- }
- else if (coefficient === 1 && digit > 0) {
- string = multipliers[digit - 1] + string;
- }
- tmp = Math.floor(tmp / 10);
- }
- return (value < 0 ? negativeSign : '') + string;
- };
- var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬';
- var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬';
- var JAPANESE_NEGATIVE = 'マイナス';
- var KOREAN_NEGATIVE = '마이너스';
- var createCounterText = function (value, type, appendSuffix) {
- var defaultSuffix = appendSuffix ? '. ' : '';
- var cjkSuffix = appendSuffix ? '、' : '';
- var koreanSuffix = appendSuffix ? ', ' : '';
- var spaceSuffix = appendSuffix ? ' ' : '';
- switch (type) {
- case 0 /* DISC */:
- return '•' + spaceSuffix;
- case 1 /* CIRCLE */:
- return '◦' + spaceSuffix;
- case 2 /* SQUARE */:
- return '◾' + spaceSuffix;
- case 5 /* DECIMAL_LEADING_ZERO */:
- var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- return string.length < 4 ? "0" + string : string;
- case 4 /* CJK_DECIMAL */:
- return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix);
- case 6 /* LOWER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 7 /* UPPER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix);
- case 8 /* LOWER_GREEK */:
- return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix);
- case 9 /* LOWER_ALPHA */:
- return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix);
- case 10 /* UPPER_ALPHA */:
- return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix);
- case 11 /* ARABIC_INDIC */:
- return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix);
- case 12 /* ARMENIAN */:
- case 49 /* UPPER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix);
- case 35 /* LOWER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 13 /* BENGALI */:
- return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix);
- case 14 /* CAMBODIAN */:
- case 30 /* KHMER */:
- return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix);
- case 15 /* CJK_EARTHLY_BRANCH */:
- return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix);
- case 16 /* CJK_HEAVENLY_STEM */:
- return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix);
- case 17 /* CJK_IDEOGRAPHIC */:
- case 48 /* TRAD_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 47 /* TRAD_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 42 /* SIMP_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 41 /* SIMP_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 26 /* JAPANESE_INFORMAL */:
- return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0);
- case 25 /* JAPANESE_FORMAL */:
- return createCJKCounter(value, '零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 31 /* KOREAN_HANGUL_FORMAL */:
- return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 33 /* KOREAN_HANJA_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0);
- case 32 /* KOREAN_HANJA_FORMAL */:
- return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 18 /* DEVANAGARI */:
- return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix);
- case 20 /* GEORGIAN */:
- return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix);
- case 21 /* GUJARATI */:
- return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix);
- case 22 /* GURMUKHI */:
- return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix);
- case 22 /* HEBREW */:
- return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix);
- case 23 /* HIRAGANA */:
- return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん');
- case 24 /* HIRAGANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす');
- case 27 /* KANNADA */:
- return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix);
- case 28 /* KATAKANA */:
- return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix);
- case 29 /* KATAKANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix);
- case 34 /* LAO */:
- return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix);
- case 37 /* MONGOLIAN */:
- return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix);
- case 38 /* MYANMAR */:
- return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix);
- case 39 /* ORIYA */:
- return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix);
- case 40 /* PERSIAN */:
- return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix);
- case 43 /* TAMIL */:
- return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix);
- case 44 /* TELUGU */:
- return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix);
- case 45 /* THAI */:
- return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix);
- case 46 /* TIBETAN */:
- return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix);
- case 3 /* DECIMAL */:
- default:
- return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- }
- };
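createCounterText maps an ordinal value and a list-style-type constant to its marker text, optionally appending the style's suffix. A couple of illustrative calls, assuming the function exactly as defined above:

    createCounterText(2024, 7 /* UPPER_ROMAN */, true); // -> 'MMXXIV. '
    createCounterText(1, 9 /* LOWER_ALPHA */, true);    // -> 'a. '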
-
- var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore';
- var DocumentCloner = /** @class */ (function () {
- function DocumentCloner(context, element, options) {
- this.context = context;
- this.options = options;
- this.scrolledElements = [];
- this.referenceElement = element;
- this.counters = new CounterState();
- this.quoteDepth = 0;
- if (!element.ownerDocument) {
- throw new Error('Cloned element does not have an owner document');
- }
- this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false);
- }
- DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) {
- var _this = this;
- var iframe = createIFrameContainer(ownerDocument, windowSize);
- if (!iframe.contentWindow) {
- return Promise.reject("Unable to find iframe window");
- }
- var scrollX = ownerDocument.defaultView.pageXOffset;
- var scrollY = ownerDocument.defaultView.pageYOffset;
- var cloneWindow = iframe.contentWindow;
- var documentClone = cloneWindow.document;
- /* Chrome doesn't detect relative background-images assigned in inline
-
-
-
-
-
Daniel Chih
-
-
-
1- How did you hear about SharpestMinds? What motivated you to do a mentorship with SM? - Found SharpestMinds through a Google search (it was the first result). Wants to start mentoring to help and guide the next generation and make them comfortable with a career transition; teaching and mentoring is also a great way to reinforce his own learning. Has tried an ISA before and didn't have a good experience, so he isn't keen on trying that again - but he is open to working with PAYG.
2- What has your career journey in data engineering been like? - Studied mechanical engineering and worked as a design engineer for the first year and a half. - Moved into an application project management role for 4 years, which also involved sales; got introduced to data and cloud work in this role and helped the company save money. - Did a data engineering bootcamp and landed a job as a data engineering consultant. - Currently working as a senior data engineer at Nasdaq, leading projects and managing cloud services. Always wanted to work in capital markets and financial institutions.
3- Previous mentorship experience? - Mentors with the bootcamp where he took his data engineering course.
4- What mistakes do beginners make, or what challenges do they face, when breaking into the data engineering field? - Keeping the right mindset and motivation. Having a good support system that reinforces what they want to achieve and how to go about it. Understanding that not everyone learns the same way, and not comparing themselves to other people and their journeys. A focused goal and a good mentor help. Data engineering is a broad, massive field, and it is easy to get overwhelmed by the amount of information available online. SQL is a good first language to learn, but understanding how to work with data is just as important.
5- Questions about SM? - What are the next steps and what does the process look like?