diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Devexpress 12.1 Full 16 Explore the Features and Benefits of DevExpress UI Controls Reporting Systems and IDE Productivity Tools.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Devexpress 12.1 Full 16 Explore the Features and Benefits of DevExpress UI Controls Reporting Systems and IDE Productivity Tools.md
deleted file mode 100644
index 45d6a30811ffdde278157085788ca5bdb4a03d8e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Devexpress 12.1 Full 16 Explore the Features and Benefits of DevExpress UI Controls Reporting Systems and IDE Productivity Tools.md
+++ /dev/null
@@ -1,212 +0,0 @@
-
-
Download Devexpress 12.1 Full 16 - A Comprehensive Guide
-
If you are a web or desktop developer, you might have heard of Devexpress, a popular suite of tools and components that can help you create stunning applications with ease. In this article, we will show you how to download Devexpress 12.1 Full 16, the latest version of this powerful software, and how to use it effectively in your projects.
-
What is Devexpress?
-
Devexpress is a software company that provides a wide range of products for web and desktop development, such as:
UI Controls: These are ready-made user interface elements that you can use in your applications, such as grids, charts, editors, gauges, calendars, ribbons, menus, etc.
-
Reporting Tools: These are tools that allow you to create and display reports in your applications, such as report designers, viewers, print previews, etc.
-
IDE Productivity Tools: These are tools that enhance your development experience in Visual Studio, such as code analysis, refactoring, debugging, testing, etc.
-
Business Application Frameworks: These are frameworks that help you create business applications faster and easier, such as XAF (eXpressApp Framework) and XPO (eXpress Persistent Objects).
-
-
Devexpress supports various platforms and technologies, such as .NET Framework, .NET Core, .NET 5+, ASP.NET Web Forms, ASP.NET MVC, ASP.NET Core MVC, Blazor, HTML JS Technologies (AngularJS, KnockoutJS), WinForms, WPF, etc.
-
Why do you need Devexpress?
-
Devexpress can help you improve your web and desktop development in many ways, such as:
-
-
Saving time and effort: You can use the Devexpress controls and components instead of writing them from scratch or using third-party libraries that may not be compatible or reliable.
-
Enhancing functionality and performance: You can use the Devexpress controls and components that offer advanced features and capabilities that are not available in standard controls or components.
-
Improving user experience and satisfaction: You can use the Devexpress controls and components that have a modern and attractive appearance and behavior that can impress your users and customers.
-
Getting support and updates: You can access the documentation and support for Devexpress products online or offline, as well as get regular updates and bug fixes.
-
-
How to download Devexpress 12.1 Full 16?
-
To download Devexpress 12.1 Full 16, you need to follow these steps:
-
Step 1: Check your system requirements
-
Before you download Devexpress 12.1 Full 16, you need to make sure that your system meets the minimum or recommended requirements for this software. Here are some of the requirements:
-
-
| Requirement | Minimum | Recommended |
| --- | --- | --- |
| Operating System | Windows Vista SP2 or later | Windows 10 or later |
| .NET Framework Version | .NET Framework 4.0 or later | .NET Framework 4.5.2 or later |
| .NET Core Version | .NET Core 2.0 or later | .NET Core 3.0 or later |
| .NET Version | .NET Framework only | .NET Framework or .NET Core or .NET 5+ |
| IDE Version | Visual Studio 2010 or later | Visual Studio 2019 or later |
| Disk Space | At least 4 GB free space | At least 8 GB free space |
| CPU Speed | At least dual-core processor with at least 2 GHz speed | At least quad-core processor with at least 3 GHz speed |
| RAM Size | At least 4 GB RAM | At least 8 GB RAM |
| Display Resolution | At least HD (1366 x 768) resolution | FHD (1920 x 1080) resolution or higher |
-
Step 2: Choose your subscription plan
To download Devexpress 12.1 Full 16, you need to choose a subscription plan that suits your needs and budget. Devexpress offers various subscription plans and pricing options for its products, such as:
-
-
Universal Subscription: This is the most comprehensive subscription plan that includes all Devexpress products for web and desktop development, as well as priority support and source code access. The price for this plan is $2,199.99 per year.
-
DXperience Subscription: This is a subscription plan that includes all Devexpress products for web development, such as ASP.NET, HTML JS Technologies, Blazor, Reporting Tools, etc. The price for this plan is $1,499.99 per year.
-
WinForms Subscription: This is a subscription plan that includes all Devexpress products for WinForms development, such as UI Controls, Reporting Tools, IDE Productivity Tools, etc. The price for this plan is $999.99 per year.
-
WPF Subscription: This is a subscription plan that includes all Devexpress products for WPF development, such as UI Controls, Reporting Tools, IDE Productivity Tools, etc. The price for this plan is $999.99 per year.
-
Other Subscription Plans: Devexpress also offers other subscription plans for specific products or platforms, such as VCL Subscription, XAF Subscription, XPO Subscription, etc. You can check the details and prices of these plans on the official website.
-
-
You can also choose to buy individual products or components instead of a subscription plan if you only need a specific feature or functionality. However, buying a subscription plan can save you money and give you access to more products and updates.
-
-
Step 3: Download the installer
-
After you choose your subscription plan and complete the payment process, you can download the installer for Devexpress 12.1 Full 16 from the official website. To do this, you need to:
Select your subscription plan from the drop-down menu and click the Download button.
-
Select the version 12.1 Full 16 from the list and click the Download Installer button.
-
Save the installer file (DevExpressComponents-12.1.16.exe) to your computer and wait for the download to finish.
-
-
Step 4: Run the installer
-
After you download the installer file, you can run it to install Devexpress 12.1 Full 16 on your computer. To do this, you need to:
-
-
Double-click the installer file (DevExpressComponents-12.1.16.exe) to launch it.
-
Click Yes if prompted by User Account Control (UAC).
-
Select your preferred language and click OK.
-
Read and accept the license agreement and click Next.
-
Select the components that you want to install and click Next. You can choose to install all components or only specific ones according to your needs.
-
Select the installation folder and click Next. You can use the default folder or choose a custom one.
-
Select the start menu folder and click Next. You can use the default folder or choose a custom one.
-
Select whether you want to create a desktop shortcut and click Next.
-
Select whether you want to check for updates automatically and click Next.
-
Click Install to start the installation process and wait for it to finish.
-
Click Finish to exit the installer.
-
-
Step 5: Activate your license
-
To use Devexpress 12.1 Full 16, you need to activate your license and register your product. To do this, you need to:
-
-
Launch Visual Studio and open or create a project that uses Devexpress components.
-
A dialog box will appear asking you to activate your license. Click Login & Activate Now.
-
A web browser will open asking you to sign in with your account. Enter your email and password and click Login & Activate Now.
-
A confirmation message will appear saying that your license has been activated successfully. Click Close Browser & Return To Visual Studio.
-
A dialog box will appear asking you to register your product. Click Login & Register Now.
-
A web browser will open asking you to sign in with your account again. Enter your email and password and click Login & Register Now.
-
A confirmation message will appear saying that your product has been registered successfully. Click Close Browser & Return To Visual Studio.
-
How to use Devexpress 12.1 Full 16?
To use Devexpress 12.1 Full 16 effectively, you need to know some tips and tricks that can help you create stunning applications with ease. Here are some of them:
-
How to create a project with Devexpress 12.1 Full 16?
-
To create a project with Devexpress 12.1 Full 16, you can use the Devexpress Template Gallery, which is a tool that allows you to create projects based on predefined templates that include Devexpress controls and components. To do this, you need to:
-
-
Launch Visual Studio and click File > New > Project.
-
Select Devexpress v20.2 Template Gallery from the list of templates and click Next.
-
Select the platform and technology that you want to use for your project, such as WinForms, WPF, ASP.NET Web Forms, ASP.NET MVC, etc.
-
Select the template that you want to use for your project, such as Blank Application, Ribbon Application, Outlook-Inspired Application, etc.
-
Enter the name and location of your project and click Create.
-
A new project will be created with the selected template and Devexpress controls and components.
-
-
How to use the Devexpress controls and components?
-
To use the Devexpress controls and components in your project, you can use the Devexpress Toolbox, which is a tool that allows you to drag and drop Devexpress controls and components onto your forms or pages. To do this, you need to:
-
-
Open a form or a page in your project in the designer mode.
-
Open the Devexpress Toolbox by clicking View > Toolbox.
-
Select the Devexpress control or component that you want to use from the list of categories, such as Data & Analytics, Navigation & Layout, Editors & Simple Controls, etc.
-
Drag and drop the Devexpress control or component onto your form or page.
-
A new Devexpress control or component will be added to your form or page with default settings.
-
-
How to customize the appearance and behavior of the Devexpress controls and components?
-
To customize the appearance and behavior of the Devexpress controls and components in your project, you can use the Properties Window, which is a tool that allows you to change the properties, events, methods, and styles of Devexpress controls and components. To do this, you need to:
-
-
Select a Devexpress control or component on your form or page in the designer mode.
-
Open the Properties Window by clicking View > Properties Window.
-
Select the property, event, method, or style that you want to change from the list of categories, such as Appearance, Behavior, Data Source, Layout Options, etc.
-
Edit the value of the property, event, method, or style according to your needs.
-
The appearance and behavior of the Devexpress control or component will be updated accordingly.
-
-
How to access the documentation and support for Devexpress 12.1 Full 16?
To access the documentation and support for Devexpress 12.1 Full 16, you can use the Help menu in Visual Studio, which is a tool that allows you to access the online or offline documentation and support for Devexpress products. To do this, you need to:
-
-
Launch Visual Studio and open a project that uses Devexpress components.
-
Click Help > DevExpress Help.
-
Select the option that you want to use, such as Online Documentation, Offline Documentation, Support Center, Knowledge Base, etc.
-
A web browser will open with the selected option and you can browse the documentation and support for Devexpress products.
-
-
Conclusion
-
In this article, we have shown you how to download Devexpress 12.1 Full 16, the latest version of this powerful software suite for web and desktop development, and how to use it effectively in your projects. We have covered the following topics:
-
-
What is Devexpress and why you need it.
-
How to download Devexpress 12.1 Full 16.
-
How to use Devexpress 12.1 Full 16.
-
How to access the documentation and support for Devexpress 12.1 Full 16.
-
-
We hope that this article has been helpful and informative for you. If you want to learn more about Devexpress products and features, you can visit the official website or contact the support team. If you want to try Devexpress products for free, you can download a fully-functional 30-day trial version from the website. If you are ready to buy Devexpress products, you can choose a subscription plan that suits your needs and budget.
-
Thank you for reading this article and happy coding!
-
Frequently Asked Questions
-
Here are some of the frequently asked questions about Devexpress 12.1 Full 16:
-
Q: What are the new features and improvements in Devexpress 12.1 Full 16?
-
A: Devexpress 12.1 Full 16 includes many new features and improvements for web and desktop development, such as:
-
-
New Blazor UI Components for creating modern web applications with C#.
-
New WinForms Controls and Components for creating rich desktop applications with .NET Core or .NET Framework.
-
New WPF Controls and Components for creating powerful desktop applications with .NET Core or .NET Framework.
-
New Reporting Tools and Components for creating and displaying reports in web and desktop applications.
-
New IDE Productivity Tools and Components for enhancing your development experience in Visual Studio.
-
New Business Application Frameworks and Components for creating business applications faster and easier with XAF and XPO.
-
New Themes and Styles for customizing the appearance of your applications with Devexpress controls and components.
-
New Documentation and Support for accessing the online or offline documentation and support for Devexpress products.
-
-
Q: How can I update my existing Devexpress products to Devexpress 12.1 Full 16?
-
A: If you have an active subscription plan for Devexpress products, you can update your existing Devexpress products to Devexpress 12.1 Full 16 using the Devexpress Project Converter, which is a tool that allows you to update your projects to use the latest version of Devexpress controls and components. To do this, you need to:
-
-
Download and install Devexpress 12.1 Full 16 on your computer.
-
Launch Visual Studio and open a project that uses Devexpress components.
-
Select DevExpress > Project Converter.
-
Select the option Update all DevExpress references in current solution/project(s) to a newer version.
-
Select the version v20.2 (12.1) from the drop-down menu.
-
Select whether you want to backup your project files before updating them.
-
Select whether you want to update your project files automatically or manually.
-
Click Start Conversion.
-
The tool will update your project files to use the latest version of Devexpress controls and components.
-
Q: How can I get help or report a problem with Devexpress 12.1 Full 16?
-
A: If you need help or want to report a problem with Devexpress 12.1 Full 16, you can contact the support team by submitting a ticket on the official website or by sending an email to support@devexpress.com. You can also browse the knowledge base or the forums on the website for answers or solutions to common issues or questions.
-
Q: How can I learn more about Devexpress products and features?
-
A: If you want to learn more about Devexpress products and features, you can visit the official website or follow the blog or social media channels of Devexpress. You can also watch the videos or webinars on the YouTube channel of Devexpress or attend the events or trainings hosted by Devexpress or its partners.
-
Q: How can I give feedback or suggest a feature for Devexpress products?
-
A: If you want to give feedback or suggest a feature for Devexpress products, you can use the User Voice portal on the official website, which is a tool that allows you to share your ideas or opinions with other users and developers of Devexpress products. You can also vote or comment on existing ideas or suggestions on the portal.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Boku No Pico Sin Censura ((FULL)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Boku No Pico Sin Censura ((FULL)).md
deleted file mode 100644
index 5f9f98be33dbeaf86c85b029a37662ee99d23170..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Boku No Pico Sin Censura ((FULL)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-You are on the Activation Link page. This page is protected and can only be accessed by persons who have been invited to visit it. Please note that you do not need to have a My HealtheVet account to access your information or to make a donation.
-
-
-Privacy
-
-For medical, marketing or research purposes, we may share and disclose information with the following organizations or companies:
-
-Vietnam Veterans of America.
-
-We are required to protect your information in accordance with HIPAA, which protects your health information from unauthorized access or disclosure. Your information is stored in a secure location and is not shared with third parties or sold to others. When you are given your password, it will be your responsibility to keep it secure and private. If you forget your password, please contact us as soon as possible.
-
-In accordance with United States of America Patriot Act, we are required to collect, maintain, and make available to authorized law enforcement and other government agencies, or their authorized agents, physical and electronic access to all records and other documents and other information relating to you. Such records may include your date of birth, social security number, insurance ID number, or other personally identifying information. We may release your records or information to agents or third parties as follows:
-
-We may also use this information to contact you for promotional, marketing and research purposes.
-
-Other websites and mobile applications: If you access the My HealtheVet Account or the My HealtheVet Portal using your wireless device, we may request information from your wireless service provider to verify your identity and that you are authorized to use the wireless network. We may also use this information to track your location and content of visit to My HealtheVet Account.
-
-Your wireless provider may also access this information to aid in the delivery of your messages or other services.
-
-We may provide information about you to our service providers and/or agents, including but not limited to insurance companies, marketers, professional advisors, and others, for the purpose of processing payments, performing business operations, sending you marketing or research materials, or delivering our services. We may share information with these agents or service providers for marketing or research purposes, as permitted by the Privacy Policy, and they may contact you via mail, email, or telephone. These agents and service providers may not contact you about their own products or services unless you give them your express consent.
-
-Some of our third-party service providers may use
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/5000rubl nece manatdir A simple guide to currency conversion.md b/spaces/1phancelerku/anime-remove-background/5000rubl nece manatdir A simple guide to currency conversion.md
deleted file mode 100644
index dc96356a8c7522678566c29a4ab1ce99c1484f5e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/5000rubl nece manatdir A simple guide to currency conversion.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
5000 rubl nece manatdir? (How much is 5000 rubles in manats?)
-
If you are planning to travel or do business in Azerbaijan, you might be wondering how much 5000 rubles are worth in Azerbaijani manats. In this article, we will answer this question and provide you with some useful information on how to exchange currency in Azerbaijan. We will also give you some tips on where to find the best exchange rates and how to avoid scams and fees.
-
Introduction
-
The official currency of Azerbaijan is the Azerbaijani manat, with symbol ₼ and currency code AZN. The manat is subdivided into 100 qapik. The current series of banknotes and coins was introduced in 2006, when the manat was redenominated at a rate of 5000 old manats to 1 new manat.
The official currency of Russia is the Russian ruble, with symbol ₽ and currency code RUB. The ruble is subdivided into 100 kopeks. The current series of banknotes and coins was introduced in 1998, after the ruble was redenominated at a rate of 1000 old rubles to 1 new ruble.
-
What is the exchange rate of Russian ruble to Azerbaijani manat?
-
The exchange rate of Russian ruble to Azerbaijani manat is the price of one ruble in terms of one manat. It tells you how many manats you can get for one ruble or vice versa. The exchange rate can change over time due to various factors, such as supply and demand, inflation, interest rates, speculation, and so on.
-
As of June 22, 2023, the mid-market exchange rate of Russian ruble to Azerbaijani manat was 0.0209865 AZN per RUB, according to Xe.com. This means that 5000 rubles were worth about 104.93 manats on that date.
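-
To make that arithmetic explicit, here is a minimal Python sketch of the conversion. It hard-codes the mid-market figure quoted above for June 22, 2023 (0.0209865 AZN per RUB) purely as an example, and the helper name rub_to_azn is made up for this sketch; a real tool would fetch the current rate from a service such as Xe.com instead.
-
```python
# Conversion sketch using the article's example rate; live rates will differ.
RUB_TO_AZN = 0.0209865  # manats per ruble, the mid-market figure quoted above


def rub_to_azn(amount_rub: float, rate: float = RUB_TO_AZN) -> float:
    """Convert an amount in Russian rubles to Azerbaijani manats."""
    return amount_rub * rate


rubles = 5000
print(f"{rubles} RUB is about {rub_to_azn(rubles):.2f} AZN")  # about 104.93 AZN
```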
-
What factors affect the exchange rate of Russian ruble to Azerbaijani manat?
-
There are many factors that influence the exchange rate of Russian ruble to Azerbaijani manat, some of them are:
-
-
Differentials in inflation: Typically, a country with a lower inflation rate has a stronger currency value, as its purchasing power increases relative to other currencies. For example, if inflation in Azerbaijan is lower than in Russia, the manat will tend to appreciate against the ruble.
-
Differentials in interest rates: Interest rates affect the demand for and supply of currencies, as well as the cost of borrowing and lending. For example, if interest rates in Azerbaijan are higher than in Russia, investors will be attracted to deposit money in Azerbaijan, increasing the demand for manats and pushing up their value against rubles.
-
Speculation: Speculators are traders who buy and sell currencies based on their expectations of future movements in exchange rates. They can have a significant impact on the short-term fluctuations of currencies. For example, if speculators anticipate that the manat will rise against the ruble in the future, they will buy more manats now, driving up their price.
-
Change in competitiveness: The competitiveness of a country's goods and services affects its trade balance and its currency value. For example, if Azerbaijani goods become more attractive and cheaper than Russian goods, there will be an increase in demand for Azerbaijani exports and a decrease in demand for Russian imports. This will improve Azerbaijan's trade surplus and cause its currency to appreciate against Russia's.
-
How to exchange rubles to manats?
-
If you want to exchange rubles to manats, you have several options. You can either do it before you travel, at your local bank or currency exchange office, or after you arrive, at the airport, hotel, bank, or exchange office in Azerbaijan. You can also use online platforms or apps that allow you to transfer money or exchange currency digitally.
-
However, not all options are equally convenient, safe, and cost-effective. You should always compare the exchange rates and fees offered by different providers and choose the one that gives you the best value for your money. You should also avoid exchanging currency in black markets or unofficial dealers, as they may scam you or give you counterfeit notes.
-
Best places to exchange currency in Azerbaijan
-
Once you are in Azerbaijan, you will find many places where you can exchange currency. However, some of them may offer better rates and services than others. Here are some of the best places to exchange currency in Azerbaijan:
-
Banks
-
Banks are one of the most reliable and secure places to exchange currency in Azerbaijan. They usually offer competitive rates and low fees, and they accept various currencies, including rubles. You can also withdraw manats from ATMs using your debit or credit card, but you may incur additional charges from your bank or the ATM operator.
-
-
Some of the major banks in Azerbaijan that offer currency exchange services are:
-
-
| Bank | Website |
| --- | --- |
| Kapital Bank | [Kapital Bank] |
| PASHA Bank | [PASHA Bank] |
| International Bank of Azerbaijan | [International Bank of Azerbaijan] |
| Bank Respublika | [Bank Respublika] |
| Nikoil Bank | [Nikoil Bank] |
-
-
Exchange offices
-
Exchange offices are another common place to exchange currency in Azerbaijan. They are usually located in busy areas, such as airports, hotels, shopping malls, and tourist attractions. They are convenient and fast, but they may charge higher fees and offer lower rates than banks. You should always check the exchange rate and the commission before you make a transaction.
-
Some of the reputable exchange offices in Azerbaijan are:
-
-
| Exchange office | Location |
| --- | --- |
| Azərpoçt | Various branches across the country |
| Baku Express Exchange | Baku International Airport |
| Currency Exchange Baku | Nizami Street 67/71, Baku |
| Ganja Exchange | Ganja Mall, Ganja |
| Lankaran Exchange | Lankaran Heydar Aliyev Avenue 59A, Lankaran |
-
-
Online platforms
-
Online platforms are a modern and convenient way to exchange currency in Azerbaijan. They allow you to transfer money or exchange currency digitally, using your smartphone or computer. You can either use an online platform that connects you with a local agent who will deliver cash to you or collect cash from you, or use an online platform that allows you to send money to a bank account or a mobile wallet.
-
Some of the online platforms that offer currency exchange services in Azerbaijan are:
-
-
| Online platform | Website |
| --- | --- |
| Azimo | [Azimo] |
| CurrencyFair | [CurrencyFair] |
| Moneymove | [Moneymove] |
| Skrill | [Skrill] |
| TransferWise | [TransferWise] |
-
-
Conclusion
-
Summary of the main points
-
We have learned that:
-
-
The exchange rate of Russian ruble to Azerbaijani manat is the price of one ruble in terms of one manat. It can change over time due to various factors, such as inflation, interest rates, speculation, competitiveness, and the movements of other currencies.
-
As of June 22, 2023, the mid-market exchange rate of Russian ruble to Azerbaijani manat was 0.0209865 AZN per RUB, according to Xe.com. This means that 5000 rubles were worth about 104.93 manats on that date.
-
You can exchange rubles to manats before you travel, at your local bank or currency exchange office, or after you arrive, at the airport, hotel, bank, or exchange office in Azerbaijan. You can also use online platforms or apps that allow you to transfer money or exchange currency digitally.
-
You should always compare the exchange rates and fees offered by different providers and choose the one that gives you the best value for your money. You should also avoid exchanging currency in black markets or unofficial dealers, as they may scam you or give you counterfeit notes.
-
Some of the best places to exchange currency in Azerbaijan are banks, exchange offices, and online platforms. They offer different advantages and disadvantages in terms of convenience, security, and cost-effectiveness.
-
-
Recommendations for travelers and business people
-
Based on the information we have provided, here are some recommendations for travelers and business people who want to exchange rubles to manats:
-
-
Plan ahead and check the current exchange rate before you travel. You can use online tools like Xe.com or Google Currency Converter to get an idea of how much your money is worth in Azerbaijan.
-
Exchange some cash before you travel, but not too much. It is good to have some local currency on hand when you arrive, but you don't want to carry too much cash with you for safety reasons. You can also use your debit or credit card to withdraw money from ATMs in Azerbaijan, but be aware of the fees and charges involved.
-
Shop around and compare different providers when you exchange currency in Azerbaijan. Don't just go for the first option you see, as you may end up paying more than you need to. Look for signs that display the exchange rate and the commission, and ask for a receipt after every transaction.
-
Avoid exchanging currency at airports or hotels, as they usually offer the worst rates and charge the highest fees. Instead, look for banks or reputable exchange offices in the city center or near tourist attractions. You can also use online platforms that offer low-cost and fast currency exchange services.
-
Keep track of your spending and budget accordingly. Azerbaijan is a relatively affordable country compared to other European destinations, but it is still important to manage your money wisely. You can use apps like Mint or Expensify to track your expenses and set spending limits.
-
-
FAQs
-
Here are some frequently asked questions about exchanging rubles to manats:
-
-
How do I pronounce Azerbaijani manat?
-
The Azerbaijani manat is pronounced as "mah-nat", with emphasis on the second syllable. The plural form is "manatlar", pronounced as "mah-nat-lar". The qapik is pronounced as "gah-pik", with emphasis on the first syllable. The plural form is "qapiklar", pronounced as "gah-pik-lar".
-
What are the denominations of Azerbaijani manat?
-
The Azerbaijani manat comes in banknotes of 1, 5, 10, 20, 50, 100, and 200 manats, and coins of 1, 3, 5, 10, 20, and 50 qapiks. The banknotes feature portraits of prominent Azerbaijani figures and landmarks on both sides. The coins feature the national emblem and name of Azerbaijan on one side and the denomination and year of issue on the other side.
-
What are some tips for handling Azerbaijani manat?
-
Some tips for handling Azerbaijani manat are:
-
Check the condition of your banknotes and exchange any old, torn, or damaged notes at a bank or exchange office. They may not be accepted by some merchants or service providers.
-
Carry a mix of small and large denominations. You may need small bills and coins for public transportation, street vendors, tips, and other minor expenses. You may also need large bills for hotels, restaurants, shops, and other major expenses. However, don't carry too much cash with you for safety reasons.
-
Count your money carefully and check for authenticity. When you exchange or receive money, make sure you get the correct amount and the right currency. You can use a currency converter app or a calculator to verify the exchange rate and the total amount. You can also check for security features on the banknotes and coins, such as watermarks, holograms, microprinting, and magnetic strips.
-
-
How do I tip in Azerbaijan?
-
Tipping is not mandatory in Azerbaijan, but it is appreciated and expected in some situations. You can tip according to the quality of service and your satisfaction. Here are some general guidelines for tipping in Azerbaijan:
-
-
Restaurants: You can tip 10% of the bill or round up to the nearest manat. If the service charge is included in the bill, you don't need to tip extra.
-
Taxis: You can tip 10% of the fare or round up to the nearest manat. You can also tip more if the driver helps you with your luggage or gives you useful information.
-
Hotels: You can tip 1-2 manats per bag to the bellboy and 2-5 manats per day to the housekeeper. You can also tip 5-10 manats to the concierge if they provide you with special assistance or recommendations.
-
Tours: You can tip 10-15% of the tour price to the guide and 5-10% to the driver. You can also tip more if they are exceptionally friendly or knowledgeable.
-
Spas and salons: You can tip 10-15% of the service price to the staff who provide you with massage, hair, nail, or beauty treatments.
-
-
What are some common scams and pitfalls to avoid when exchanging currency in Azerbaijan?
-
Some common scams and pitfalls to avoid when exchanging currency in Azerbaijan are:
-
-
Black market dealers: These are people who offer to exchange currency at very attractive rates, usually on the street or in public places. They may try to lure you with promises of saving money or avoiding fees. However, they may also try to cheat you by giving you counterfeit notes, incorrect change, or a different currency. They may also try to rob you or harm you if you follow them to a secluded place.
-
Dishonest merchants: These are people who try to take advantage of your unfamiliarity with the local currency or prices. They may try to overcharge you, give you wrong change, or use a rigged calculator or scale. They may also try to trick you into paying in a different currency or accepting a different currency as change.
-
Dynamic currency conversion: This is a service that allows you to pay in your home currency instead of the local currency at some merchants or ATMs. It may seem convenient and transparent, but it usually comes with a high fee and a poor exchange rate. You will end up paying more than you need to. It is better to always pay in the local currency and let your bank or card issuer handle the conversion.
-
-
-
I hope this article has helped you understand how much 5000 rubles are worth in Azerbaijani manats and how to exchange currency in Azerbaijan. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Blast Away with 3D Bubble Shooter A Free and Fun Game for All Ages.md b/spaces/1phancelerku/anime-remove-background/Blast Away with 3D Bubble Shooter A Free and Fun Game for All Ages.md
deleted file mode 100644
index 7df2c2bd3dca61fd732b4cf62e65c2acd7c6cb09..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Blast Away with 3D Bubble Shooter A Free and Fun Game for All Ages.md
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
3D Bubble Shooter Game Free Download: A Fun and Addictive Way to Relax and Enjoy
-
If you are looking for a fun and addictive game that can help you relax and enjoy your free time, you should try playing a 3D bubble shooter game. A 3D bubble shooter game is a classic puzzle game that involves shooting colorful bubbles and matching them with other bubbles of the same color. The goal is to clear all the bubbles from the board and win levels. Sounds easy, right? Well, not so fast. A 3D bubble shooter game can also be challenging and exciting, especially when you play it in 3D mode. In this article, we will tell you everything you need to know about 3D bubble shooter games, including how to download and play them for free, what are the features and benefits of playing them, and how to improve your skills and strategies in them. So, let's get started!
-
What is a 3D bubble shooter game?
-
A 3D bubble shooter game is a type of puzzle game that belongs to the genre of tile-matching or match-three games. In these games, you have to match three or more tiles or objects of the same color or shape to make them disappear from the board. Some examples of popular tile-matching games are Candy Crush Saga, Bejeweled, Tetris, and of course, Bubble Shooter.
-
The basic gameplay of bubble shooter games
-
The basic gameplay of bubble shooter games is simple and easy to learn. You have a cannon or a launcher at the bottom of the screen that shoots bubbles of different colors. You can aim and shoot the bubbles by tapping or clicking on the screen. You have to shoot the bubbles towards the top of the screen, where there are other bubbles already arranged in rows or clusters. When you shoot a bubble, it will stick to the other bubbles of the same color if they are adjacent or touching. If you manage to create a group of three or more bubbles of the same color, they will pop and disappear from the board. The more bubbles you pop at once, the more points you score. You can also create combos by popping multiple groups of bubbles in succession. The game ends when you clear all the bubbles from the board or when the bubbles reach the bottom of the screen.
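-
The matching rule described above (a shot pops when the connected group of same-colored bubbles it joins reaches three or more) can be sketched as a small flood fill over the board. The Python snippet below is only an illustration under simplified assumptions, not code from any particular game: the helper names are made up, the board is a plain dictionary of grid positions to colors, and adjacency is four-way, whereas real bubble shooters typically use a hexagonal layout.
-
```python
from collections import deque


def connected_group(grid, start):
    """Collect all cells reachable from `start` that share its color."""
    color = grid[start]
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if cell in grid and cell not in seen and grid[cell] == color:
                seen.add(cell)
                queue.append(cell)
    return seen


def pop_if_match(grid, placed):
    """Remove the group containing the newly placed bubble if it has 3+ members."""
    group = connected_group(grid, placed)
    if len(group) >= 3:
        for cell in group:
            del grid[cell]
        return True
    return False


# The shot at (1, 0) joins two red neighbors, so the red group of three pops.
board = {(0, 0): "red", (1, 1): "red", (2, 0): "blue", (1, 0): "red"}
print(pop_if_match(board, (1, 0)))  # True
print(board)                        # {(2, 0): 'blue'}
```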
-
The advantages of playing in 3D mode
-
While most bubble shooter games are played in 2D mode, some games offer you the option to play in 3D mode. This means that instead of having a flat board with rows or columns of bubbles, you have a spherical or cylindrical board with bubbles arranged in layers or rings. This adds a new dimension to the gameplay, as you have to consider not only the horizontal and vertical angles, but also the depth and perspective of your shots. Playing in 3D mode can make the game more realistic, immersive, and challenging. You can also enjoy different views and angles of the board as it rotates or tilts according to your movements. Playing in 3D mode can also enhance your spatial awareness, coordination, and concentration skills.
-
How to download and play 3D bubble shooter games for free?
-
If you are interested in playing 3D bubble shooter games for free, you have several options to choose from. You can find many free 3D bubble shooter games online, on various websites and platforms. You can also download and install free 3D bubble shooter games on your device, such as your smartphone, tablet, laptop, or desktop computer. Here are some tips on how to do that.
-
The best sources to find free 3D bubble shooter games
-
One of the best sources to find free 3D bubble shooter games is the internet. There are many websites and platforms that offer a wide range of 3D bubble shooter games that you can play online, without downloading or installing anything. Some of the most popular and reliable websites and platforms are:
-
-
-
Bubble Shooter 3D: This website features a collection of 3D bubble shooter games that you can play for free on your browser. You can choose from different themes, such as animals, fruits, candy, jewels, and more. You can also adjust the difficulty level and the speed of the game. The website has a simple and user-friendly interface, and you can also access it from your mobile device.
-
Bubble Shooter 3D - Play Free Online Games: This website is part of the Play Free Online Games network, which offers thousands of free online games in various categories and genres. You can find several 3D bubble shooter games on this website, such as Bubble Shooter 3D Galaxy, Bubble Shooter 3D Magic Forest, Bubble Shooter 3D Halloween, and more. You can play these games for free on your browser, without registration or download.
-
Bubble Shooter - Apps on Google Play: This is the official app of the classic bubble shooter game, developed by Ilyon Dynamics Ltd. You can download and install this app for free on your Android device from the Google Play Store. The app offers hundreds of levels of 3D bubble shooter games, with different themes, modes, and challenges. You can also enjoy stunning graphics, sound effects, and animations. The app is easy to use and compatible with most devices.
-
Bubble Shooter 3D - App Store - Apple: This is another app that offers 3D bubble shooter games for free on your iOS device. You can download and install this app from the App Store on your iPhone or iPad. The app features over 1000 levels of 3D bubble shooter games, with various themes, modes, and difficulties. You can also use boosters and power-ups to enhance your gameplay. The app has a sleek and intuitive design, and it supports offline play.
-
-
The steps to download and install 3D bubble shooter games on your device
-
If you prefer to download and install 3D bubble shooter games on your device, rather than playing them online, you need to follow some simple steps. Here are the general steps to do that:
-
-
Choose a source or a platform that offers free 3D bubble shooter games for download. You can use the ones we mentioned above, or you can search for other options on the internet.
-
Select a game that you want to download and play. Make sure that the game is compatible with your device and meets the system requirements.
-
Click on the download button or link to start the download process. You may need to grant some permissions or accept some terms and conditions before downloading.
-
Wait for the download to finish. Depending on the size of the game and your internet speed, this may take a few minutes or longer.
-
Once the download is complete, locate the game file on your device and open it. Follow the instructions to install the game on your device.
-
After the installation is done, launch the game and enjoy playing it.
-
-
What are the features and benefits of playing 3D bubble shooter games?
-
Playing 3D bubble shooter games can be a lot of fun and rewarding. There are many features and benefits that you can enjoy while playing these games. Here are some of them:
-
The different modes and levels of 3D bubble shooter games
-
One of the features that make 3D bubble shooter games interesting and varied is the different modes and levels that they offer. You can choose from different modes of gameplay, such as classic mode, arcade mode, puzzle mode, adventure mode, time mode, etc. Each mode has its own rules and objectives that you have to follow and achieve. You can also play different levels of difficulty, ranging from easy to hard. Each level has its own layout, design, color scheme, number of bubbles , and obstacles. You can also unlock new levels as you progress and complete the previous ones. The different modes and levels of 3D bubble shooter games can keep you entertained and challenged for hours.
-
The cool boosters and power-ups to help you win
-
Another feature that makes 3D bubble shooter games fun and exciting is the cool boosters and power-ups that you can use to help you win. Boosters and power-ups are special bubbles or items that have different effects and abilities. For example, some boosters and power-ups can change the color of the bubbles, pop more bubbles at once, clear a whole row or column of bubbles, freeze the board, etc. You can get boosters and power-ups by popping certain bubbles, completing certain tasks, or buying them with coins or gems. You can also use them strategically to overcome difficult situations or to score higher points. Boosters and power-ups can add more fun and variety to your gameplay.
-
The amazing graphics and sound effects of 3D bubble shooter games
-
One of the benefits of playing 3D bubble shooter games is that you can enjoy amazing graphics and sound effects that enhance your gaming experience. 3D bubble shooter games have high-quality graphics that make the bubbles look realistic, colorful, and shiny. You can also see the bubbles pop and burst in 3D animation, which is satisfying and rewarding. The sound effects of 3D bubble shooter games are also impressive and immersive. You can hear the bubbles pop, bounce, splash, and crackle as you shoot them. You can also hear the background music and the voice-overs that match the theme and mood of the game. The graphics and sound effects of 3D bubble shooter games can make you feel like you are playing in a real 3D environment.
-
How to improve your skills and strategies in 3D bubble shooter games?
-
Playing 3D bubble shooter games can be easy to learn, but hard to master. If you want to improve your skills and strategies in these games, you need to practice regularly and follow some tips and tricks. Here are some of them:
-
The tips and tricks to aim and shoot accurately
-
One of the most important skills in 3D bubble shooter games is to aim and shoot accurately. You need to be able to hit the right spot with the right bubble at the right time. To do that, you need to pay attention to several factors, such as:
-
-
The direction and angle of your shot: You can use the arrow or the line on your cannon or launcher to guide your shot. You can also adjust the direction and angle by moving your finger or mouse on the screen. You need to consider the curvature and gravity of your shot, as well as the rotation and tilt of the board.
-
The color and position of the bubbles: You need to match the color of your bubble with the color of the bubbles on the board. You also need to aim for the bubbles that are close to each other or form a group, rather than the ones that are isolated or scattered.
-
The walls and edges of the board: You can use the walls and edges of the board to bounce your bubbles off them. This can help you reach areas that are hard to access or create angles that are otherwise impossible. A short sketch of the bounce geometry follows this list.
-
The timing and speed of your shot: You need to shoot your bubbles quickly before they reach the bottom of the screen or before new bubbles appear on the board. You also need to shoot your bubbles with enough force to make them stick to the other bubbles, rather than fall off or slide down.
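-
One handy way to reason about those wall bounces is the standard mirror-unfolding trick: reflect the board across its side walls so the bounced path becomes a straight line, then fold the landing point back into the board. The short Python sketch below applies that idea to a plain rectangular 2D board as an illustration only; the function name landing_x and its parameters are invented for this example, and it ignores the curvature, gravity, and board rotation mentioned in the first tip.
-
```python
def landing_x(x0, dx, dy, width, height):
    """Horizontal position where a straight shot reaches the top row.

    x0      -- launch position along the bottom edge (0 <= x0 <= width)
    dx, dy  -- direction of the shot (dy must be positive, i.e. upward)
    Side-wall bounces are handled by unfolding the board into mirror
    copies, travelling in a straight line, then folding back.
    """
    raw = x0 + dx * (height / dy)   # landing spot if there were no walls
    period = 2 * width              # one mirrored copy on each side
    folded = raw % period
    return folded if folded <= width else period - folded


# A shot angled toward the right wall bounces once and lands at x = 2.0.
print(landing_x(x0=4.0, dx=3.0, dy=2.0, width=6.0, height=4.0))  # 2.0
```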
-
-
The best ways to clear the board and score high points
-
One of the main goals in 3D bubble shooter games is to clear the board and score high points. To do that, you need to follow some strategies, such as:
-
-
Pop as many bubbles as possible at once: The more bubbles you pop at once, the more points you score. You can also create combos by popping multiple groups of bubbles in succession. You can also look for special bubbles that can pop more bubbles at once, such as bomb bubbles, rainbow bubbles, fire bubbles, etc.
-
Clear the top and bottom rows of the board: The top and bottom rows of the board are the most important ones to clear, as they can affect the rest of the board. If you clear the top row of the board, you can make the whole board drop down and create more space for your shots. If you clear the bottom row of the board, you can prevent the bubbles from reaching the bottom of the screen and ending the game.
-
Use boosters and power-ups wisely: Boosters and power-ups can help you clear the board and score high points, but you need to use them wisely. You should save them for difficult situations or when you need a big boost. You should also use them strategically, such as using a color changer to create a large group of bubbles, or using a fireball to clear a whole row or column of bubbles.
-
-
The challenges and rewards of playing 3D bubble shooter games
-
Playing 3D bubble shooter games can be challenging and rewarding at the same time. There are many challenges that you can face while playing these games, such as:
-
-
The increasing difficulty and complexity of the levels: As you progress in the game, the levels become harder and more complex. You may encounter more colors, shapes, and types of bubbles, as well as more obstacles, such as metal bubbles, ice bubbles, stone bubbles, etc. You may also have to deal with more limited time, moves, or shots.
-
The unpredictable and random nature of the game: The game is based on luck and chance, as well as skill and strategy. You never know what color or type of bubble you will get next, or where it will land on the board. You also have to adapt to the changing conditions and situations of the game.
-
The competition and comparison with other players: The game can be competitive and comparative, as you can see your score and rank on the leaderboard, or compare your performance with other players online or offline. You may also feel pressured or motivated to beat your own records or achievements.
-
-
However, there are also many rewards that you can get from playing 3D bubble shooter games, such as:
-
-
The fun and enjoyment of the game: The game is fun and enjoyable to play, as it can keep you entertained and engaged for hours. You can also experience different emotions and sensations while playing, such as excitement, satisfaction, frustration, relief, etc.
-
The relaxation and stress relief of the game: The game can help you relax and relieve stress, as it can distract you from your worries and problems. You can also use the game as a way to unwind and relax after a long day or a busy week.
-
The improvement and development of your skills: The game can help you improve and develop your skills, such as your spatial awareness, coordination, concentration, memory, logic, problem-solving, creativity, etc. You can also learn new things and gain new knowledge from playing the game.
-
-
Conclusion
-
In conclusion, 3D bubble shooter games are a fun and addictive way to relax and enjoy your free time. They are puzzle games that are easy to learn but hard to master, in which you shoot colorful bubbles and match them with other bubbles of the same color. They offer different modes and levels of gameplay, cool boosters and power-ups to help you win, amazing graphics and sound effects to enhance your gaming experience, and many challenges and rewards to keep you motivated and satisfied. You can download and play 3D bubble shooter games for free on your device, from various sources and platforms. You can also improve your skills and strategies in 3D bubble shooter games by following some tips and tricks. 3D bubble shooter games are a great way to have fun and relax, as well as to improve your mental abilities and learn new things. So, what are you waiting for? Download a 3D bubble shooter game today and start popping some bubbles!
-
FAQs
-
Here are some frequently asked questions about 3D bubble shooter games:
-
-
What is the difference between 2D and 3D bubble shooter games?
-
A: The main difference between 2D and 3D bubble shooter games is the shape and orientation of the board. In 2D bubble shooter games, the board is flat and has rows or columns of bubbles. In 3D bubble shooter games, the board is spherical or cylindrical and has layers or rings of bubbles. This affects the gameplay, as you have to consider the depth and perspective of your shots in 3D mode.
-
How can I get more coins or gems in 3D bubble shooter games?
-
A: Coins or gems are the currency of 3D bubble shooter games, which you can use to buy boosters, power-ups, or extra lives. You can get more coins or gems by completing levels, achieving goals, watching ads, or making in-app purchases.
-
How can I play 3D bubble shooter games offline?
-
A: Some 3D bubble shooter games support offline play, which means that you can play them without an internet connection. To do that, you need to download and install the game on your device first, and then launch it while offline. However, some features or functions of the game may not be available offline, such as leaderboards, achievements, or updates.
-
Are 3D bubble shooter games suitable for children?
-
A: Yes, 3D bubble shooter games are suitable for children, as they are fun, colorful, and easy to play. They can also help children develop their cognitive, motor, and social skills, as well as their creativity and imagination. However, parents should supervise their children while playing these games, especially when it comes to online interactions or in-app purchases.
-
What are some of the best 3D bubble shooter games to play?
-
A: There are many 3D bubble shooter games to choose from, but some of the best ones are:
-
-
Bubble Shooter 3D Galaxy: This game takes you to a galaxy full of bubbles that you have to pop and explore. You can enjoy over 1000 levels of cosmic fun, with different planets, stars, asteroids, and aliens. You can also use awesome boosters and power-ups to blast your way through the galaxy.
-
Bubble Shooter 3D Magic Forest: This game transports you to a magical forest where you have to shoot bubbles and match them with the animals and plants. You can play over 500 levels of enchanting fun, with different creatures, flowers, mushrooms, and more. You can also use magical boosters and power-ups to help you in your adventure.
-
Bubble Shooter 3D Halloween: This game invites you to a spooky Halloween party where you have to shoot bubbles and match them with the ghosts and monsters. You can play over 300 levels of scary fun, with different pumpkins, bats, spiders, and more. You can also use creepy boosters and power-ups to spook your way through the party.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download PUBG MOBILE MOD APK with Unlimited Features and Anti-Ban.md b/spaces/1phancelerku/anime-remove-background/Download PUBG MOBILE MOD APK with Unlimited Features and Anti-Ban.md
deleted file mode 100644
index 4f0a864877e2a99aee6b99a3952a816dea1f4e2a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download PUBG MOBILE MOD APK with Unlimited Features and Anti-Ban.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
How to Download PUBG Mobile Mod APK and Enjoy Its Amazing Features
-
PUBG Mobile is one of the most popular and addictive mobile games in the world. Millions of players enjoy its thrilling and realistic gameplay, where they have to survive on a shrinking map with up to 100 other players. However, some players are not satisfied with the official game and look for ways to enhance their gaming experience. One of these ways is to download PUBG Mobile Mod APK, a modified version of the game that offers various features that are not available in the original game. In this article, we will explain what PUBG Mobile Mod APK is, why you should download it, how to download it, and what the risks of downloading it are, and we will answer some frequently asked questions about it.
PUBG Mobile Mod APK is a modified version of the popular battle royale game PUBG Mobile
-
PUBG Mobile Mod APK is a modified version of the official PUBG Mobile game, which is developed by Krafton and Level Infinite. The modded version is created by third-party developers or hackers, who modify the original game files and add new features or functions to the game. These features or functions are usually called hacks or cheats, as they give an unfair advantage to the players who use them.
-
It offers various features that are not available in the official game, such as ESP, aimbot, wallhack, speed hack, jump hack, and more
-
PUBG Mobile Mod APK offers various features that are not available in the official game, such as ESP (Extra Sensory Perception), aimbot (auto-aiming), wallhack (seeing through walls), speed hack (increasing movement speed), jump hack (increasing jump height), and more. These features can help you spot your enemies easily, shoot them accurately, move faster, jump higher, and more. These features can make the game more fun and exciting, as you can dominate the battlefield and win more matches. However, they can also make the game unfair and unbalanced, as you can gain an edge over your opponents who play the game normally.
-
Why Download PUBG Mobile Mod APK?
-
PUBG Mobile Mod APK can enhance your gaming experience and give you an edge over your opponents
-
PUBG Mobile Mod APK can enhance your gaming experience and give you an edge over your opponents, as you can use the features that are not available in the official game. You can improve your skills, performance, and stats, as you can spot, shoot, move, and jump better than your enemies. You can also enjoy the game more, as you can explore new possibilities and scenarios that are not possible in the original game. You can have more fun and excitement, as you can win more matches and rank higher in the leaderboards.
-
You can access all the premium items, skins, weapons, and vehicles for free
-
PUBG Mobile Mod APK also allows you to access all the premium items, skins, weapons, and vehicles for free, without spending any real money or UC (Unknown Cash), which is the in-game currency of PUBG Mobile. You can unlock and use all the items that are otherwise only available through purchasing or completing missions or events. You can customize your character and equipment according to your preference and style. You can also impress your friends and other players with your rare and exclusive items.
-
You can customize your game settings and preferences according to your liking
-
PUBG Mobile Mod APK also lets you customize your game settings and preferences according to your liking, without following the default or recommended settings of the official game. You can adjust the graphics quality, sound effects, controls, sensitivity, frame rate, and more. You can also enable or disable the features of the modded version according to your needs and wishes. You can tailor your game experience to suit your device specifications and personal taste.
-
How to Download PUBG Mobile Mod APK?
-
You need to find a reliable and safe source to download the modded APK file
-
The first step to download PUBG Mobile Mod APK is to find a reliable and safe source to download the modded APK file. There are many websites and platforms that claim to offer PUBG Mobile Mod APK for free, but not all of them are trustworthy or secure. Some of them may contain malware or viruses that can harm your device or data. Some of them may also provide fake or outdated versions of the modded APK file that may not work properly or at all. Therefore, you need to do some research and check the reviews and ratings of the source before downloading anything from it.
-
You need to enable the installation of unknown sources on your device
-
The next step to download PUBG Mobile Mod APK is to enable the installation of apps from unknown sources on your device. This is because PUBG Mobile Mod APK is not an official app from Google Play Store or App Store, and your device treats it as an unknown or third-party app. Therefore, you need to allow your device to install apps from sources other than the official stores. To do this, open your device settings, go to the security settings, and enable the 'Unknown sources' or 'Allow from this source' option.
-
You need to uninstall the original PUBG Mobile game from your device
-
The third step to download PUBG Mobile Mod APK is to uninstall the original PUBG Mobile game from your device. This is because PUBG Mobile Mod APK cannot coexist with the official game on the same device, as they have the same package name and signature. Therefore, you need to remove the original game from your device before installing the modded version. To do this, you need to go to your device settings, apps settings, find PUBG Mobile app, and uninstall it.
-
You need to install the PUBG Mobile Mod APK file and grant the required permissions
-
The fourth step to download PUBG Mobile Mod APK is to install the PUBG Mobile Mod APK file and grant the required permissions. To do this, you need to locate the downloaded file on your device storage, tap on it, and follow the installation instructions. You may also need to grant some permissions to the app, such as storage, camera, microphone, location, and more. These permissions are necessary for the app to function properly and access the features of the modded version.
-
You need to launch the game and enjoy its features
-
The final step to download PUBG Mobile Mod APK is to launch the game and enjoy its features. To do this, you need to open the app icon on your device screen, sign in with your account or create a new one, and start playing the game. You can access the features of the modded version from the game menu or settings. You can also use some hotkeys or commands to activate or deactivate some features during the game. You can now enjoy the game with more features and advantages than before.
-
What are the Risks of Downloading PUBG Mobile Mod APK?
-
PUBG Mobile Mod APK is not an official product of Krafton or Level Infinite, and it violates their terms of service
-
One of the risks of downloading PUBG Mobile Mod APK is that it is not an official product of Krafton or Level Infinite, and it violates their terms of service. PUBG Mobile Mod APK is created by unauthorized developers or hackers, who have no affiliation or permission from the original game developers or publishers. By downloading and using PUBG Mobile Mod APK, you are breaking the rules and regulations of the official game, and you may face legal consequences or penalties for doing so.
-
You may face legal issues or penalties for using unauthorized software or cheating in the game
-
Another risk of downloading PUBG Mobile Mod APK is that you may face legal issues or penalties for using unauthorized software or cheating in the game. PUBG Mobile Mod APK is considered a form of software piracy or intellectual property theft, as it infringes on the rights and interests of the original game developers and publishers. By downloading and using PUBG Mobile Mod APK, you are committing a crime and you may be sued or fined for doing so. Moreover, PUBG Mobile Mod APK is also considered a form of cheating or hacking in the game, as it gives an unfair advantage to the players who use it. By downloading and using PUBG Mobile Mod APK, you are violating the fair play and sportsmanship of the game, and you may be banned or suspended from the game for doing so.
You may expose your device to malware or viruses that can harm your data or privacy
-
A third risk of downloading PUBG Mobile Mod APK is that you may expose your device to malware or viruses that can harm your data or privacy. PUBG Mobile Mod APK is not a verified or tested app, and it may contain malicious code or software that can infect your device or steal your information. By downloading and installing PUBG Mobile Mod APK, you are risking your device security and performance, and you may lose your data or compromise your privacy. You may also face identity theft, fraud, or phishing attacks from hackers or scammers who may use your data for illegal purposes.
-
Conclusion
-
PUBG Mobile Mod APK is a tempting option for players who want to enjoy the game with more features and advantages
-
PUBG Mobile Mod APK is a tempting option for players who want to enjoy the game with more features and advantages than the official game. It offers various features that are not available in the original game, such as ESP, aimbot, wallhack, speed hack, jump hack, and more. It also allows you to access all the premium items, skins, weapons, and vehicles for free. It also lets you customize your game settings and preferences according to your liking.
-
However, it also comes with many risks and drawbacks that can ruin your gaming experience and reputation
-
However, PUBG Mobile Mod APK also comes with many risks and drawbacks that can ruin your gaming experience and reputation. It is not an official product of Krafton or Level Infinite, and it violates their terms of service. You may face legal issues or penalties for using unauthorized software or cheating in the game. You may get banned or suspended from the game for using hacks or exploits. You may expose your device to malware or viruses that can harm your data or privacy.
-
It is advisable to play the game fairly and ethically, and avoid using any cheats or hacks that can harm yourself or others
-
Therefore, it is advisable to play the game fairly and ethically, and avoid using any cheats or hacks that can harm yourself or others. PUBG Mobile is a fun and challenging game that requires skill, strategy, and teamwork. It is more rewarding and satisfying to play the game without any unfair advantages or shortcuts. It is also more respectful and honorable to play the game without any dishonesty or deception. It is also safer and smarter to play the game without any risks or threats to your device or data.
-
FAQs
-
Is PUBG Mobile Mod APK legal?
-
No, PUBG Mobile Mod APK is not legal, as it is a modified version of the official PUBG Mobile game, which is developed by Krafton and Level Infinite. The modded version is created by unauthorized developers or hackers, who have no affiliation or permission from the original game developers or publishers. By downloading and using PUBG Mobile Mod APK, you are breaking the rules and regulations of the official game, and you may face legal consequences or penalties for doing so.
-
How can I avoid getting banned for using PUBG Mobile Mod APK?
-
The best way to avoid getting banned for using PUBG Mobile Mod APK is to not use it at all. PUBG Mobile has a strict anti-cheat system that can detect any abnormal activities or behaviors in the game. If you are caught using any hacks or cheats in the game, you will be banned or suspended from the game immediately. There is no guarantee that any PUBG Mobile Mod APK can bypass the anti-cheat system or protect you from getting banned. Therefore, it is better to play the game normally and fairly, without using any cheats or hacks.
-
What are some of the best features of PUBG Mobile Mod APK?
-
Some of the best features of PUBG Mobile Mod APK are:
- ESP (Extra Sensory Perception): shows your enemies' location, health, name, distance, weapons, items, and more on your screen.
- Aimbot (auto-aiming): automatically aims at your enemies' head or body and shoots them with high accuracy and precision.
- Wallhack (seeing through walls): lets you see through walls and other obstacles and spot enemies hiding behind them.
- Speed hack (increasing movement speed): increases your movement speed so you can run faster than normal.
- Jump hack (increasing jump height): increases your jump height so you can jump higher than normal.
-
Where can I download PUBG Mobile Mod APK safely?
-
There is no safe source to download PUBG Mobile Mod APK, as it is an unofficial and unverified app that may contain malware or viruses that can harm your device or data. PUBG Mobile Mod APK is also illegal and unethical, and it may get you banned or penalized from the game. Therefore, it is not recommended to download PUBG Mobile Mod APK from any source. The only safe and legal way to play PUBG Mobile is to download the official game from Google Play Store or App Store, and play it without any cheats or hacks.
-
How can I update PUBG Mobile Mod APK?
-
You cannot update PUBG Mobile Mod APK from the official game, as they are not compatible or synchronized with each other. If you want to update PUBG Mobile Mod APK, you need to find a new version of the modded APK file from the source where you downloaded it, and install it on your device. However, this may not be easy or safe, as the source may not provide regular updates or may provide fake or harmful updates. Therefore, it is better to avoid using PUBG Mobile Mod APK, and stick to the official game that provides frequent and secure updates.
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/voicevox_engine/utility/__init__.py b/spaces/2ndelement/voicevox/voicevox_engine/utility/__init__.py
deleted file mode 100644
index d40fea3e6c22f8bcb960ca12cf626e1f3a40afef..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/voicevox_engine/utility/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from .connect_base64_waves import (
- ConnectBase64WavesException,
- connect_base64_waves,
- decode_base64_waves,
-)
-from .core_version_utility import get_latest_core_version, parse_core_version
-from .mutex_utility import mutex_wrapper
-from .path_utility import delete_file, engine_root, get_save_dir
-
-__all__ = [
- "ConnectBase64WavesException",
- "connect_base64_waves",
- "decode_base64_waves",
- "get_latest_core_version",
- "parse_core_version",
- "delete_file",
- "engine_root",
- "get_save_dir",
- "mutex_wrapper",
-]
diff --git a/spaces/AIConsultant/MusicGen/tests/common_utils/temp_utils.py b/spaces/AIConsultant/MusicGen/tests/common_utils/temp_utils.py
deleted file mode 100644
index b45d896836799edcf1fee271409b390b3b6e4127..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/tests/common_utils/temp_utils.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import tempfile
-
-
-class TempDirMixin:
- """Mixin to provide easy access to temp dir.
- """
-
- temp_dir_ = None
-
- @classmethod
- def get_base_temp_dir(cls):
- # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory.
- # this is handy for debugging.
- key = "AUDIOCRAFT_TEST_DIR"
- if key in os.environ:
- return os.environ[key]
- if cls.temp_dir_ is None:
- cls.temp_dir_ = tempfile.TemporaryDirectory()
- return cls.temp_dir_.name
-
- @classmethod
- def tearDownClass(cls):
- if cls.temp_dir_ is not None:
- try:
- cls.temp_dir_.cleanup()
- cls.temp_dir_ = None
- except PermissionError:
-                # On Windows there is a known issue with `shutil.rmtree`,
- # which fails intermittently.
- # https://github.com/python/cpython/issues/74168
- # Following the above thread, we ignore it.
- pass
- super().tearDownClass()
-
- @property
- def id(self):
- return self.__class__.__name__
-
- def get_temp_path(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(os.path.dirname(path), exist_ok=True)
- return path
-
- def get_temp_dir(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(path, exist_ok=True)
- return path
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/openai.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/openai.py
deleted file mode 100644
index 9911b6e135e51970177fcac067c12192b0b57c1c..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/openai.py
+++ /dev/null
@@ -1,129 +0,0 @@
-""" OpenAI pretrained model functions
-
-Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI.
-"""
-
-import os
-import warnings
-from typing import Union, List
-
-import torch
-
-from .model import build_model_from_openai_state_dict
-from .pretrained import get_pretrained_url, list_pretrained_tag_models, download_pretrained
-
-__all__ = ["list_openai_models", "load_openai_model"]
-
-
-def list_openai_models() -> List[str]:
- """Returns the names of available CLIP models"""
- return list_pretrained_tag_models('openai')
-
-
-def load_openai_model(
- name: str,
- model_cfg,
- device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu",
- jit=True,
- cache_dir=os.path.expanduser("~/.cache/clip"),
- enable_fusion: bool = False,
- fusion_type: str = 'None'
-):
- """Load a CLIP model, preserve its text pretrained part, and set in the CLAP model
-
- Parameters
- ----------
- name : str
- A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
- device : Union[str, torch.device]
- The device to put the loaded model
- jit : bool
- Whether to load the optimized JIT model (default) or more hackable non-JIT model.
-
- Returns
- -------
- model : torch.nn.Module
- The CLAP model
- preprocess : Callable[[PIL.Image], torch.Tensor]
- A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
- """
- if get_pretrained_url(name, 'openai'):
- model_path = download_pretrained(get_pretrained_url(name, 'openai'), root=cache_dir)
- elif os.path.isfile(name):
- model_path = name
- else:
- raise RuntimeError(f"Model {name} not found; available models = {list_openai_models()}")
-
- try:
- # loading JIT archive
- model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
- state_dict = None
- except RuntimeError:
- # loading saved state dict
- if jit:
- warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
- jit = False
- state_dict = torch.load(model_path, map_location="cpu")
-
- if not jit:
- try:
- model = build_model_from_openai_state_dict(state_dict or model.state_dict(), model_cfg, enable_fusion, fusion_type).to(device)
- except KeyError:
- sd = {k[7:]: v for k, v in state_dict["state_dict"].items()}
- model = build_model_from_openai_state_dict(sd, model_cfg, enable_fusion, fusion_type).to(device)
-
- if str(device) == "cpu":
- model.float()
- return model
-
- # patch the device names
- device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
- device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
-
- def patch_device(module):
- try:
- graphs = [module.graph] if hasattr(module, "graph") else []
- except RuntimeError:
- graphs = []
-
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("prim::Constant"):
- if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
- node.copyAttributes(device_node)
-
- model.apply(patch_device)
- patch_device(model.encode_audio)
- patch_device(model.encode_text)
-
- # patch dtype to float32 on CPU
- if str(device) == "cpu":
- float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
- float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
- float_node = float_input.node()
-
- def patch_float(module):
- try:
- graphs = [module.graph] if hasattr(module, "graph") else []
- except RuntimeError:
- graphs = []
-
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("aten::to"):
- inputs = list(node.inputs())
- for i in [1, 2]: # dtype can be the second or third argument to aten::to()
- if inputs[i].node()["value"] == 5:
- inputs[i].node().copyAttributes(float_node)
-
- model.apply(patch_float)
- patch_float(model.encode_audio)
- patch_float(model.encode_text)
- model.float()
-
- model.audio_branch.audio_length = model.audio_cfg.audio_length
- return model
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/contperceptual.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/contperceptual.py
deleted file mode 100644
index 3e3018da79c5c24d85af1687f6f0875530dcc7c6..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/contperceptual.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import sys
-
-sys.path.insert(0, '.') # nopep8
-from ldm.modules.losses_audio.vqperceptual import *
-
-
-class LPAPSWithDiscriminator(nn.Module):
- def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0,
- disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
- perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
- disc_loss="hinge"):
-
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- self.kl_weight = kl_weight
- self.pixel_weight = pixelloss_weight
-        self.perceptual_loss = LPAPS().eval()  # LPIPS is for ordinary images, while LPAPS is for mel-spectrograms
- self.perceptual_weight = perceptual_weight
- # output log variance
- self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init)
-
- self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm,
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- if disc_loss == "hinge":
- self.disc_loss = hinge_d_loss
- elif disc_loss == "vanilla":
- self.disc_loss = vanilla_d_loss
- else:
- raise ValueError(f"Unknown GAN loss '{disc_loss}'.")
- print(f"LPAPSWithDiscriminator running with {disc_loss} loss.")
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
-
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
-
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(self, inputs, reconstructions, posteriors, optimizer_idx,
- global_step, last_layer=None, cond=None, split="train", weights=None):
- rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
- # print(f"p_loss {p_loss}")
- rec_loss = rec_loss + self.perceptual_weight * p_loss
- else:
- p_loss = torch.tensor([0.0])
-
- nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar
- weighted_nll_loss = nll_loss
- if weights is not None:
- weighted_nll_loss = weights*nll_loss
- weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0]
- nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- kl_loss = posteriors.kl()
- kl_loss = torch.sum(kl_loss) / kl_loss.shape[0]
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
- g_loss = -torch.mean(logits_fake)
-
- try:
- d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss
-
- log = {"{}/total_loss".format(split): loss.clone().detach().mean(),
- "{}/logvar".format(split): self.logvar.detach(),
- "{}/kl_loss".format(split): kl_loss.detach().mean(),
- "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean()
- }
- return d_loss, log
-
-
diff --git a/spaces/AgentVerse/agentVerse/agentverse_command/__init__.py b/spaces/AgentVerse/agentVerse/agentverse_command/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/scroller.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/scroller.js
deleted file mode 100644
index 5f6bd2f00d882e55735af0a8592bfb6a9a694b0e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/scroller.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import Scroller from './input/scroller/Scroller.js';
-export default Scroller;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-components.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-components.js
deleted file mode 100644
index f059f3eb560f8debddacfb5db161c0080274dcfa..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-components.js
+++ /dev/null
@@ -1,39 +0,0 @@
-import Audio from './audio/Audio.js';
-import Ball from './ball/Ball.js';
-import Bars from './bars/Bars.js';
-import Box from './box/Box.js';
-import Clock from './clock/Clock.js';
-import Cube from './cube/Cube.js';
-import Custom from './custom/Custom.js';
-import Dots from './dots/Dots.js';
-import Facebook from './facebook/Facebook.js';
-import Grid from './grid/Grid.js';
-import Los from './los/Los.js';
-import Orbit from './orbit/Orbit.js';
-import Oval from './oval/Oval.js';
-import Pie from './pie/Pie.js';
-import Puff from './puff/Puff.js';
-import Radio from './radio/Radio.js';
-import Rings from './rings/Rings.js';
-import Spinner from './spinner/Spinner.js';
-
-export {
- Audio,
- Ball,
- Bars,
- Box,
- Clock,
- Cube,
- Custom,
- Dots,
- Facebook,
- Grid,
- Los,
- Orbit,
- Oval,
- Pie,
- Puff,
- Radio,
- Rings,
- Spinner
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.d.ts
deleted file mode 100644
index 202d5f243587f166dbc733d2da38313ce3aa7607..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.d.ts
+++ /dev/null
@@ -1,65 +0,0 @@
-// import * as Phaser from 'phaser';
-import Sizer from '../sizer/Sizer';
-import OpenCloseTransition from '../../../plugins/behaviors/openclosetransition/OpenCloseTransition';
-
-export default Folder;
-
-declare namespace Folder {
-
- interface IConfig extends Sizer.IConfig {
- background?: Phaser.GameObjects.GameObject,
-
- title: Phaser.GameObjects.GameObject,
-
- child: Phaser.GameObjects.GameObject,
- customChildOrigin?: boolean,
-
- toggleByTarget?: Phaser.GameObjects.GameObject,
- toggleClickConfig?: {
- mode?: 0 | 1 | 'pointerdown' | 'pointerup' | 'press' | 'release',
- clickInterval?: number,
- threshold?: number,
- },
-
- align?: {
- title?: Sizer.AlignTypes,
- child?: Sizer.AlignTypes,
- },
-
- expand?: {
- title?: boolean,
- child?: boolean,
- },
-
- transition?: {
- duration?: number,
- expandCallback?: OpenCloseTransition.TransitCallbackType,
- collapseCallback?: OpenCloseTransition.TransitCallbackType,
- },
-
- reLayoutTarget?: Phaser.GameObjects.GameObject,
-
- onExpandStart?: (folder: this) => void,
- onExpandComplete?: (folder: this) => void,
- onCollapseStart?: (folder: this) => void,
- onCollapseComplete?: (folder: this) => void,
- }
-}
-
-declare class Folder extends Sizer {
- constructor(
- scene: Phaser.Scene,
- config?: Folder.IConfig
- );
-
- setTransitionDuration(duration?: number): this;
- transitionDuration: number;
-
- setExpandCallback(callback?: OpenCloseTransition.TransitCallbackType): this;
- setCollapseCallback(callback?: OpenCloseTransition.TransitCallbackType): this;
-
- expand(duration?: number): this;
- collapse(duration?: number): this;
- toggle(duration?: number): this;
- readonly expanded: boolean;
-}
\ No newline at end of file
diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/english.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/english.py
deleted file mode 100644
index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000
--- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/english.py
+++ /dev/null
@@ -1,188 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-
-# Regular expression matching whitespace:
-
-
-import re
-import inflect
-from unidecode import unidecode
-import eng_to_ipa as ipa
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-
-# List of (ipa, lazy ipa) pairs:
-_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('æ', 'e'),
- ('ɑ', 'a'),
- ('ɔ', 'o'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ɛ', 'e'),
- ('ɪ', 'i'),
- ('ʊ', 'u'),
- ('ʒ', 'ʥ'),
- ('ʤ', 'ʥ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, lazy ipa2) pairs:
-_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ʒ', 'ʑ'),
- ('ʤ', 'dʑ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, ipa2) pairs
-_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ʤ', 'dʒ'),
- ('ʧ', 'tʃ')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def collapse_whitespace(text):
- return re.sub(r'\s+', ' ', text)
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
-
-
-def mark_dark_l(text):
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text)
-
-
-def english_to_ipa(text):
- text = unidecode(text).lower()
- text = expand_abbreviations(text)
- text = normalize_numbers(text)
- phonemes = ipa.convert(text)
- phonemes = collapse_whitespace(phonemes)
- return phonemes
-
-
-def english_to_lazy_ipa(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def english_to_ipa2(text):
- text = english_to_ipa(text)
- text = mark_dark_l(text)
- for regex, replacement in _ipa_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text.replace('...', '…')
-
-
-def english_to_lazy_ipa2(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa2:
- text = re.sub(regex, replacement, text)
- return text
diff --git a/spaces/Alpaca233/SadTalker/src/facerender/animate.py b/spaces/Alpaca233/SadTalker/src/facerender/animate.py
deleted file mode 100644
index 781f5a3318a086049cc6b74393073ddda7001d5e..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/facerender/animate.py
+++ /dev/null
@@ -1,257 +0,0 @@
-import os
-import cv2
-import yaml
-import numpy as np
-import warnings
-from skimage import img_as_ubyte
-import safetensors
-import safetensors.torch
-warnings.filterwarnings('ignore')
-
-
-import imageio
-import torch
-import torchvision
-
-
-from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector
-from src.facerender.modules.mapping import MappingNet
-from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator
-from src.facerender.modules.make_animation import make_animation
-
-from pydub import AudioSegment
-from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list
-from src.utils.paste_pic import paste_pic
-from src.utils.videoio import save_video_with_watermark
-
-try:
- import webui # in webui
- in_webui = True
-except:
- in_webui = False
-
-class AnimateFromCoeff():
-
- def __init__(self, sadtalker_path, device):
-
- with open(sadtalker_path['facerender_yaml']) as f:
- config = yaml.safe_load(f)
-
- generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'],
- **config['model_params']['common_params'])
- kp_extractor = KPDetector(**config['model_params']['kp_detector_params'],
- **config['model_params']['common_params'])
- he_estimator = HEEstimator(**config['model_params']['he_estimator_params'],
- **config['model_params']['common_params'])
- mapping = MappingNet(**config['model_params']['mapping_params'])
-
- generator.to(device)
- kp_extractor.to(device)
- he_estimator.to(device)
- mapping.to(device)
- for param in generator.parameters():
- param.requires_grad = False
- for param in kp_extractor.parameters():
- param.requires_grad = False
- for param in he_estimator.parameters():
- param.requires_grad = False
- for param in mapping.parameters():
- param.requires_grad = False
-
- if sadtalker_path is not None:
- if 'checkpoint' in sadtalker_path: # use safe tensor
- self.load_cpk_facevid2vid_safetensor(sadtalker_path['checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=None)
- else:
- self.load_cpk_facevid2vid(sadtalker_path['free_view_checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- if sadtalker_path['mappingnet_checkpoint'] is not None:
- self.load_cpk_mapping(sadtalker_path['mappingnet_checkpoint'], mapping=mapping)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- self.kp_extractor = kp_extractor
- self.generator = generator
- self.he_estimator = he_estimator
- self.mapping = mapping
-
- self.kp_extractor.eval()
- self.generator.eval()
- self.he_estimator.eval()
- self.mapping.eval()
-
- self.device = device
-
- def load_cpk_facevid2vid_safetensor(self, checkpoint_path, generator=None,
- kp_detector=None, he_estimator=None,
- device="cpu"):
-
- checkpoint = safetensors.torch.load_file(checkpoint_path)
-
- if generator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'generator' in k:
- x_generator[k.replace('generator.', '')] = v
- generator.load_state_dict(x_generator)
- if kp_detector is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'kp_extractor' in k:
- x_generator[k.replace('kp_extractor.', '')] = v
- kp_detector.load_state_dict(x_generator)
- if he_estimator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'he_estimator' in k:
- x_generator[k.replace('he_estimator.', '')] = v
- he_estimator.load_state_dict(x_generator)
-
- return None
-
- def load_cpk_facevid2vid(self, checkpoint_path, generator=None, discriminator=None,
- kp_detector=None, he_estimator=None, optimizer_generator=None,
- optimizer_discriminator=None, optimizer_kp_detector=None,
- optimizer_he_estimator=None, device="cpu"):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if generator is not None:
- generator.load_state_dict(checkpoint['generator'])
- if kp_detector is not None:
- kp_detector.load_state_dict(checkpoint['kp_detector'])
- if he_estimator is not None:
- he_estimator.load_state_dict(checkpoint['he_estimator'])
- if discriminator is not None:
- try:
- discriminator.load_state_dict(checkpoint['discriminator'])
- except:
-                print ('No discriminator in the state-dict. Discriminator will be randomly initialized')
- if optimizer_generator is not None:
- optimizer_generator.load_state_dict(checkpoint['optimizer_generator'])
- if optimizer_discriminator is not None:
- try:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
- except RuntimeError as e:
-                print ('No discriminator optimizer in the state-dict. Optimizer will not be initialized')
- if optimizer_kp_detector is not None:
- optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector'])
- if optimizer_he_estimator is not None:
- optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator'])
-
- return checkpoint['epoch']
-
- def load_cpk_mapping(self, checkpoint_path, mapping=None, discriminator=None,
- optimizer_mapping=None, optimizer_discriminator=None, device='cpu'):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if mapping is not None:
- mapping.load_state_dict(checkpoint['mapping'])
- if discriminator is not None:
- discriminator.load_state_dict(checkpoint['discriminator'])
- if optimizer_mapping is not None:
- optimizer_mapping.load_state_dict(checkpoint['optimizer_mapping'])
- if optimizer_discriminator is not None:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
-
- return checkpoint['epoch']
-
- def generate(self, x, video_save_dir, pic_path, crop_info, enhancer=None, background_enhancer=None, preprocess='crop', img_size=256):
-
- source_image=x['source_image'].type(torch.FloatTensor)
- source_semantics=x['source_semantics'].type(torch.FloatTensor)
- target_semantics=x['target_semantics_list'].type(torch.FloatTensor)
- source_image=source_image.to(self.device)
- source_semantics=source_semantics.to(self.device)
- target_semantics=target_semantics.to(self.device)
- if 'yaw_c_seq' in x:
- yaw_c_seq = x['yaw_c_seq'].type(torch.FloatTensor)
- yaw_c_seq = x['yaw_c_seq'].to(self.device)
- else:
- yaw_c_seq = None
- if 'pitch_c_seq' in x:
- pitch_c_seq = x['pitch_c_seq'].type(torch.FloatTensor)
- pitch_c_seq = x['pitch_c_seq'].to(self.device)
- else:
- pitch_c_seq = None
- if 'roll_c_seq' in x:
- roll_c_seq = x['roll_c_seq'].type(torch.FloatTensor)
- roll_c_seq = x['roll_c_seq'].to(self.device)
- else:
- roll_c_seq = None
-
- frame_num = x['frame_num']
-
- predictions_video = make_animation(source_image, source_semantics, target_semantics,
- self.generator, self.kp_extractor, self.he_estimator, self.mapping,
- yaw_c_seq, pitch_c_seq, roll_c_seq, use_exp = True)
-
- predictions_video = predictions_video.reshape((-1,)+predictions_video.shape[2:])
- predictions_video = predictions_video[:frame_num]
-
- video = []
- for idx in range(predictions_video.shape[0]):
- image = predictions_video[idx]
- image = np.transpose(image.data.cpu().numpy(), [1, 2, 0]).astype(np.float32)
- video.append(image)
- result = img_as_ubyte(video)
-
- ### the generated video is 256x256, so we keep the aspect ratio,
- original_size = crop_info[0]
- if original_size:
- result = [ cv2.resize(result_i,(img_size, int(img_size * original_size[1]/original_size[0]) )) for result_i in result ]
-
- video_name = x['video_name'] + '.mp4'
- path = os.path.join(video_save_dir, 'temp_'+video_name)
-
- imageio.mimsave(path, result, fps=float(25))
-
- av_path = os.path.join(video_save_dir, video_name)
- return_path = av_path
-
- audio_path = x['audio_path']
- audio_name = os.path.splitext(os.path.split(audio_path)[-1])[0]
- new_audio_path = os.path.join(video_save_dir, audio_name+'.wav')
- start_time = 0
- # cog will not keep the .mp3 filename
- sound = AudioSegment.from_file(audio_path)
- frames = frame_num
- end_time = start_time + frames*1/25*1000
- word1=sound.set_frame_rate(16000)
- word = word1[start_time:end_time]
- word.export(new_audio_path, format="wav")
-
- save_video_with_watermark(path, new_audio_path, av_path, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name}')
-
- if 'full' in preprocess.lower():
- # only add watermark to the full image.
- video_name_full = x['video_name'] + '_full.mp4'
- full_video_path = os.path.join(video_save_dir, video_name_full)
- return_path = full_video_path
- paste_pic(path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop= True if 'ext' in preprocess.lower() else False)
- print(f'The generated video is named {video_save_dir}/{video_name_full}')
- else:
- full_video_path = av_path
-
- #### paste back then enhancers
- if enhancer:
- video_name_enhancer = x['video_name'] + '_enhanced.mp4'
- enhanced_path = os.path.join(video_save_dir, 'temp_'+video_name_enhancer)
- av_path_enhancer = os.path.join(video_save_dir, video_name_enhancer)
- return_path = av_path_enhancer
-
- try:
- enhanced_images_gen_with_len = enhancer_generator_with_len(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
- except:
- enhanced_images_gen_with_len = enhancer_list(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
-
- save_video_with_watermark(enhanced_path, new_audio_path, av_path_enhancer, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name_enhancer}')
- os.remove(enhanced_path)
-
- os.remove(path)
- os.remove(new_audio_path)
-
- return return_path
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
deleted file mode 100644
index 8d09602d860554f847f2936fe2198deb871c7382..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-# Text-to-image
-
-The Stable Diffusion model was created by researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [Runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionPipeline`] is capable of generating photorealistic images given any text input. It is trained on 512x512 images from a subset of the LAION-5B dataset, and it uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Stable Diffusion is built on latent diffusion, which was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.
-
-The abstract from the paper is:
-
-*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.*
-
-
-
-Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
-
-If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!
-
-
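-A minimal text-to-image call with this pipeline looks roughly like the sketch below; the checkpoint id, `float16` dtype, and CUDA device are illustrative assumptions rather than requirements.
-
-```py
-import torch
-from diffusers import StableDiffusionPipeline
-
-# Any compatible Stable Diffusion checkpoint can be used; v1-5 is shown as an example.
-pipe = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
-)
-pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU
-
-# Calling the pipeline with a prompt returns a list of PIL images.
-image = pipe("a photo of an astronaut riding a horse on mars").images[0]
-image.save("astronaut.png")
-```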
-
-## StableDiffusionPipeline
-
-[[autodoc]] StableDiffusionPipeline
- - all
- - __call__
- - enable_attention_slicing
- - disable_attention_slicing
- - enable_vae_slicing
- - disable_vae_slicing
- - enable_xformers_memory_efficient_attention
- - disable_xformers_memory_efficient_attention
- - enable_vae_tiling
- - disable_vae_tiling
- - load_textual_inversion
- - from_single_file
- - load_lora_weights
- - save_lora_weights
-
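-The memory-related methods listed above are simple toggles on the pipeline object. As a rough sketch of how they are typically used for lower-VRAM inference (the checkpoint id is again only an assumption):
-
-```py
-from diffusers import StableDiffusionPipeline
-
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-
-# Trade some speed for a smaller peak memory footprint.
-pipe.enable_attention_slicing()
-pipe.enable_vae_slicing()
-
-# Restore the default behavior when memory is not a constraint.
-pipe.disable_attention_slicing()
-pipe.disable_vae_slicing()
-```
-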
-## StableDiffusionPipelineOutput
-
-[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
-
-## FlaxStableDiffusionPipeline
-
-[[autodoc]] FlaxStableDiffusionPipeline
- - all
- - __call__
-
-## FlaxStableDiffusionPipelineOutput
-
-[[autodoc]] pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_controlnet_reference.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_controlnet_reference.py
deleted file mode 100644
index f52da6f5a193e4a3b311a11778174fa3417105e3..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_controlnet_reference.py
+++ /dev/null
@@ -1,834 +0,0 @@
-# Inspired by: https://github.com/Mikubill/sd-webui-controlnet/discussions/1236 and https://github.com/Mikubill/sd-webui-controlnet/discussions/1280
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import numpy as np
-import PIL.Image
-import torch
-
-from diffusers import StableDiffusionControlNetPipeline
-from diffusers.models import ControlNetModel
-from diffusers.models.attention import BasicTransformerBlock
-from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, CrossAttnUpBlock2D, DownBlock2D, UpBlock2D
-from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
-from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.utils import is_compiled_module, logging, randn_tensor
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import cv2
- >>> import torch
- >>> import numpy as np
- >>> from PIL import Image
-        >>> from diffusers import ControlNetModel, UniPCMultistepScheduler
- >>> from diffusers.utils import load_image
-
- >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
-
- >>> # get canny image
- >>> image = cv2.Canny(np.array(input_image), 100, 200)
- >>> image = image[:, :, None]
- >>> image = np.concatenate([image, image, image], axis=2)
- >>> canny_image = Image.fromarray(image)
-
- >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
- >>> pipe = StableDiffusionControlNetReferencePipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5",
- controlnet=controlnet,
- safety_checker=None,
- torch_dtype=torch.float16
- ).to('cuda:0')
-
-        >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
-
- >>> result_img = pipe(ref_image=input_image,
- prompt="1girl",
- image=canny_image,
- num_inference_steps=20,
- reference_attn=True,
- reference_adain=True).images[0]
-
- >>> result_img.show()
- ```
-"""
-
-
-def torch_dfs(model: torch.nn.Module):
- result = [model]
- for child in model.children():
- result += torch_dfs(child)
- return result
-
-
-class StableDiffusionControlNetReferencePipeline(StableDiffusionControlNetPipeline):
- def prepare_ref_latents(self, refimage, batch_size, dtype, device, generator, do_classifier_free_guidance):
- refimage = refimage.to(device=device, dtype=dtype)
-
- # encode the mask image into latents space so we can concatenate it to the latents
- if isinstance(generator, list):
- ref_image_latents = [
- self.vae.encode(refimage[i : i + 1]).latent_dist.sample(generator=generator[i])
- for i in range(batch_size)
- ]
- ref_image_latents = torch.cat(ref_image_latents, dim=0)
- else:
- ref_image_latents = self.vae.encode(refimage).latent_dist.sample(generator=generator)
- ref_image_latents = self.vae.config.scaling_factor * ref_image_latents
-
- # duplicate ref_image_latents for each generation per prompt, using mps friendly method
- if ref_image_latents.shape[0] < batch_size:
- if not batch_size % ref_image_latents.shape[0] == 0:
- raise ValueError(
- "The passed images and the required batch size don't match. Images are supposed to be duplicated"
- f" to a total batch size of {batch_size}, but {ref_image_latents.shape[0]} images were passed."
- " Make sure the number of images that you pass is divisible by the total requested batch size."
- )
- ref_image_latents = ref_image_latents.repeat(batch_size // ref_image_latents.shape[0], 1, 1, 1)
-
- ref_image_latents = torch.cat([ref_image_latents] * 2) if do_classifier_free_guidance else ref_image_latents
-
- # align device to prevent device errors when concatenating it with the latent model input
- ref_image_latents = ref_image_latents.to(device=device, dtype=dtype)
- return ref_image_latents
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- image: Union[
- torch.FloatTensor,
- PIL.Image.Image,
- np.ndarray,
- List[torch.FloatTensor],
- List[PIL.Image.Image],
- List[np.ndarray],
- ] = None,
- ref_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- controlnet_conditioning_scale: Union[float, List[float]] = 1.0,
- guess_mode: bool = False,
- attention_auto_machine_weight: float = 1.0,
- gn_auto_machine_weight: float = 1.0,
- style_fidelity: float = 0.5,
- reference_attn: bool = True,
- reference_adain: bool = True,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,
- `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
- The ControlNet input condition. ControlNet uses this input condition to generate guidance for the UNet. If
- the type is specified as `torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can
- also be accepted as an image. The dimensions of the output image default to `image`'s dimensions. If
- height and/or width are passed, `image` is resized according to them. If multiple ControlNets are
- specified in init, images must be passed as a list such that each element of the list can be correctly
- batched for input to a single controlnet.
- ref_image (`torch.FloatTensor`, `PIL.Image.Image`):
- The Reference Control input condition. Reference Control uses this input condition to generate guidance for the UNet. If
- the type is specified as `torch.FloatTensor`, it is passed to Reference Control as is. `PIL.Image.Image` can
- also be accepted as an image.
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
- controlnet_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0):
- The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
- to the residual in the original unet. If multiple ControlNets are specified in init, you can set the
- corresponding scale as a list.
- guess_mode (`bool`, *optional*, defaults to `False`):
- In this mode, the ControlNet encoder tries its best to recognize the content of the input image even if
- you remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- attention_auto_machine_weight (`float`):
- Weight threshold for using the reference query as the self-attention context.
- If `attention_auto_machine_weight=1.0`, the reference query is used for every self-attention layer.
- gn_auto_machine_weight (`float`):
- Weight threshold for applying reference AdaIN. If `gn_auto_machine_weight=2.0`, all reference AdaIN plugins are used.
- style_fidelity (`float`):
- Style fidelity of `ref_uncond_xt`. If `style_fidelity=1.0`, the reference (control) dominates; if
- `style_fidelity=0.0`, the prompt dominates; values in between balance the two.
- reference_attn (`bool`):
- Whether to use the reference query as the self-attention context.
- reference_adain (`bool`):
- Whether to use reference AdaIN.
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- assert reference_attn or reference_adain, "`reference_attn` or `reference_adain` must be True."
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- image,
- callback_steps,
- negative_prompt,
- prompt_embeds,
- negative_prompt_embeds,
- controlnet_conditioning_scale,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- controlnet = self.controlnet._orig_mod if is_compiled_module(self.controlnet) else self.controlnet
-
- if isinstance(controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float):
- controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(controlnet.nets)
-
- global_pool_conditions = (
- controlnet.config.global_pool_conditions
- if isinstance(controlnet, ControlNetModel)
- else controlnet.nets[0].config.global_pool_conditions
- )
- guess_mode = guess_mode or global_pool_conditions
-
- # 3. Encode input prompt
- text_encoder_lora_scale = (
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
- )
- prompt_embeds = self._encode_prompt(
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- lora_scale=text_encoder_lora_scale,
- )
-
- # 4. Prepare image
- if isinstance(controlnet, ControlNetModel):
- image = self.prepare_image(
- image=image,
- width=width,
- height=height,
- batch_size=batch_size * num_images_per_prompt,
- num_images_per_prompt=num_images_per_prompt,
- device=device,
- dtype=controlnet.dtype,
- do_classifier_free_guidance=do_classifier_free_guidance,
- guess_mode=guess_mode,
- )
- height, width = image.shape[-2:]
- elif isinstance(controlnet, MultiControlNetModel):
- images = []
-
- for image_ in image:
- image_ = self.prepare_image(
- image=image_,
- width=width,
- height=height,
- batch_size=batch_size * num_images_per_prompt,
- num_images_per_prompt=num_images_per_prompt,
- device=device,
- dtype=controlnet.dtype,
- do_classifier_free_guidance=do_classifier_free_guidance,
- guess_mode=guess_mode,
- )
-
- images.append(image_)
-
- image = images
- height, width = image[0].shape[-2:]
- else:
- assert False
-
- # 5. Preprocess reference image
- ref_image = self.prepare_image(
- image=ref_image,
- width=width,
- height=height,
- batch_size=batch_size * num_images_per_prompt,
- num_images_per_prompt=num_images_per_prompt,
- device=device,
- dtype=prompt_embeds.dtype,
- )
-
- # 6. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 7. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # 8. Prepare reference latent variables
- ref_image_latents = self.prepare_ref_latents(
- ref_image,
- batch_size * num_images_per_prompt,
- prompt_embeds.dtype,
- device,
- generator,
- do_classifier_free_guidance,
- )
-
- # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 10. Modify self attention and group norm
- MODE = "write"
- uc_mask = (
- torch.Tensor([1] * batch_size * num_images_per_prompt + [0] * batch_size * num_images_per_prompt)
- .type_as(ref_image_latents)
- .bool()
- )
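- # NOTE: `MODE` toggles the patched forwards defined below between two phases per
- # timestep: "write" caches reference features / group-norm statistics during the
- # extra reference pass, and "read" injects them into the main UNet pass.
- # `uc_mask` marks the unconditional half of the classifier-free-guidance batch so
- # that `style_fidelity` can blend prompt-driven and reference-driven outputs.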
-
- def hacked_basic_transformer_inner_forward(
- self,
- hidden_states: torch.FloatTensor,
- attention_mask: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- timestep: Optional[torch.LongTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- class_labels: Optional[torch.LongTensor] = None,
- ):
- if self.use_ada_layer_norm:
- norm_hidden_states = self.norm1(hidden_states, timestep)
- elif self.use_ada_layer_norm_zero:
- norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1(
- hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype
- )
- else:
- norm_hidden_states = self.norm1(hidden_states)
-
- # 1. Self-Attention
- cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {}
- if self.only_cross_attention:
- attn_output = self.attn1(
- norm_hidden_states,
- encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
- attention_mask=attention_mask,
- **cross_attention_kwargs,
- )
- else:
- if MODE == "write":
- self.bank.append(norm_hidden_states.detach().clone())
- attn_output = self.attn1(
- norm_hidden_states,
- encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
- attention_mask=attention_mask,
- **cross_attention_kwargs,
- )
- if MODE == "read":
- if attention_auto_machine_weight > self.attn_weight:
- attn_output_uc = self.attn1(
- norm_hidden_states,
- encoder_hidden_states=torch.cat([norm_hidden_states] + self.bank, dim=1),
- # attention_mask=attention_mask,
- **cross_attention_kwargs,
- )
- attn_output_c = attn_output_uc.clone()
- if do_classifier_free_guidance and style_fidelity > 0:
- attn_output_c[uc_mask] = self.attn1(
- norm_hidden_states[uc_mask],
- encoder_hidden_states=norm_hidden_states[uc_mask],
- **cross_attention_kwargs,
- )
- attn_output = style_fidelity * attn_output_c + (1.0 - style_fidelity) * attn_output_uc
- self.bank.clear()
- else:
- attn_output = self.attn1(
- norm_hidden_states,
- encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
- attention_mask=attention_mask,
- **cross_attention_kwargs,
- )
- if self.use_ada_layer_norm_zero:
- attn_output = gate_msa.unsqueeze(1) * attn_output
- hidden_states = attn_output + hidden_states
-
- if self.attn2 is not None:
- norm_hidden_states = (
- self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states)
- )
-
- # 2. Cross-Attention
- attn_output = self.attn2(
- norm_hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- attention_mask=encoder_attention_mask,
- **cross_attention_kwargs,
- )
- hidden_states = attn_output + hidden_states
-
- # 3. Feed-forward
- norm_hidden_states = self.norm3(hidden_states)
-
- if self.use_ada_layer_norm_zero:
- norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None]
-
- ff_output = self.ff(norm_hidden_states)
-
- if self.use_ada_layer_norm_zero:
- ff_output = gate_mlp.unsqueeze(1) * ff_output
-
- hidden_states = ff_output + hidden_states
-
- return hidden_states
-
- def hacked_mid_forward(self, *args, **kwargs):
- eps = 1e-6
- x = self.original_forward(*args, **kwargs)
- if MODE == "write":
- if gn_auto_machine_weight >= self.gn_weight:
- var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0)
- self.mean_bank.append(mean)
- self.var_bank.append(var)
- if MODE == "read":
- if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
- var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0)
- std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
- mean_acc = sum(self.mean_bank) / float(len(self.mean_bank))
- var_acc = sum(self.var_bank) / float(len(self.var_bank))
- std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
- x_uc = (((x - mean) / std) * std_acc) + mean_acc
- x_c = x_uc.clone()
- if do_classifier_free_guidance and style_fidelity > 0:
- x_c[uc_mask] = x[uc_mask]
- x = style_fidelity * x_c + (1.0 - style_fidelity) * x_uc
- self.mean_bank = []
- self.var_bank = []
- return x
-
- def hack_CrossAttnDownBlock2D_forward(
- self,
- hidden_states: torch.FloatTensor,
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- ):
- eps = 1e-6
-
- # TODO(Patrick, William) - attention mask is not used
- output_states = ()
-
- for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)):
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- attention_mask=attention_mask,
- encoder_attention_mask=encoder_attention_mask,
- return_dict=False,
- )[0]
- if MODE == "write":
- if gn_auto_machine_weight >= self.gn_weight:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- self.mean_bank.append([mean])
- self.var_bank.append([var])
- if MODE == "read":
- if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
- mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
- var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
- std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
- hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
- hidden_states_c = hidden_states_uc.clone()
- if do_classifier_free_guidance and style_fidelity > 0:
- hidden_states_c[uc_mask] = hidden_states[uc_mask]
- hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
-
- output_states = output_states + (hidden_states,)
-
- if MODE == "read":
- self.mean_bank = []
- self.var_bank = []
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states = output_states + (hidden_states,)
-
- return hidden_states, output_states
-
- def hacked_DownBlock2D_forward(self, hidden_states, temb=None):
- eps = 1e-6
-
- output_states = ()
-
- for i, resnet in enumerate(self.resnets):
- hidden_states = resnet(hidden_states, temb)
-
- if MODE == "write":
- if gn_auto_machine_weight >= self.gn_weight:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- self.mean_bank.append([mean])
- self.var_bank.append([var])
- if MODE == "read":
- if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
- mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
- var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
- std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
- hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
- hidden_states_c = hidden_states_uc.clone()
- if do_classifier_free_guidance and style_fidelity > 0:
- hidden_states_c[uc_mask] = hidden_states[uc_mask]
- hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
-
- output_states = output_states + (hidden_states,)
-
- if MODE == "read":
- self.mean_bank = []
- self.var_bank = []
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states = output_states + (hidden_states,)
-
- return hidden_states, output_states
-
- def hacked_CrossAttnUpBlock2D_forward(
- self,
- hidden_states: torch.FloatTensor,
- res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
- temb: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- upsample_size: Optional[int] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- ):
- eps = 1e-6
- # TODO(Patrick, William) - attention mask is not used
- for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)):
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- cross_attention_kwargs=cross_attention_kwargs,
- attention_mask=attention_mask,
- encoder_attention_mask=encoder_attention_mask,
- return_dict=False,
- )[0]
-
- if MODE == "write":
- if gn_auto_machine_weight >= self.gn_weight:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- self.mean_bank.append([mean])
- self.var_bank.append([var])
- if MODE == "read":
- if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
- mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
- var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
- std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
- hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
- hidden_states_c = hidden_states_uc.clone()
- if do_classifier_free_guidance and style_fidelity > 0:
- hidden_states_c[uc_mask] = hidden_states[uc_mask]
- hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
-
- if MODE == "read":
- self.mean_bank = []
- self.var_bank = []
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
- def hacked_UpBlock2D_forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
- eps = 1e-6
- for i, resnet in enumerate(self.resnets):
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
- hidden_states = resnet(hidden_states, temb)
-
- if MODE == "write":
- if gn_auto_machine_weight >= self.gn_weight:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- self.mean_bank.append([mean])
- self.var_bank.append([var])
- if MODE == "read":
- if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
- var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
- std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
- mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
- var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
- std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
- hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
- hidden_states_c = hidden_states_uc.clone()
- if do_classifier_free_guidance and style_fidelity > 0:
- hidden_states_c[uc_mask] = hidden_states[uc_mask]
- hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
-
- if MODE == "read":
- self.mean_bank = []
- self.var_bank = []
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
- if reference_attn:
- attn_modules = [module for module in torch_dfs(self.unet) if isinstance(module, BasicTransformerBlock)]
- attn_modules = sorted(attn_modules, key=lambda x: -x.norm1.normalized_shape[0])
-
- for i, module in enumerate(attn_modules):
- module._original_inner_forward = module.forward
- module.forward = hacked_basic_transformer_inner_forward.__get__(module, BasicTransformerBlock)
- module.bank = []
- module.attn_weight = float(i) / float(len(attn_modules))
-
- if reference_adain:
- gn_modules = [self.unet.mid_block]
- self.unet.mid_block.gn_weight = 0
-
- down_blocks = self.unet.down_blocks
- for w, module in enumerate(down_blocks):
- module.gn_weight = 1.0 - float(w) / float(len(down_blocks))
- gn_modules.append(module)
-
- up_blocks = self.unet.up_blocks
- for w, module in enumerate(up_blocks):
- module.gn_weight = float(w) / float(len(up_blocks))
- gn_modules.append(module)
-
- for i, module in enumerate(gn_modules):
- if getattr(module, "original_forward", None) is None:
- module.original_forward = module.forward
- if i == 0:
- # mid_block
- module.forward = hacked_mid_forward.__get__(module, torch.nn.Module)
- elif isinstance(module, CrossAttnDownBlock2D):
- module.forward = hack_CrossAttnDownBlock2D_forward.__get__(module, CrossAttnDownBlock2D)
- elif isinstance(module, DownBlock2D):
- module.forward = hacked_DownBlock2D_forward.__get__(module, DownBlock2D)
- elif isinstance(module, CrossAttnUpBlock2D):
- module.forward = hacked_CrossAttnUpBlock2D_forward.__get__(module, CrossAttnUpBlock2D)
- elif isinstance(module, UpBlock2D):
- module.forward = hacked_UpBlock2D_forward.__get__(module, UpBlock2D)
- module.mean_bank = []
- module.var_bank = []
- module.gn_weight *= 2
-
- # 11. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # controlnet(s) inference
- if guess_mode and do_classifier_free_guidance:
- # Infer ControlNet only for the conditional batch.
- control_model_input = latents
- control_model_input = self.scheduler.scale_model_input(control_model_input, t)
- controlnet_prompt_embeds = prompt_embeds.chunk(2)[1]
- else:
- control_model_input = latent_model_input
- controlnet_prompt_embeds = prompt_embeds
-
- down_block_res_samples, mid_block_res_sample = self.controlnet(
- control_model_input,
- t,
- encoder_hidden_states=controlnet_prompt_embeds,
- controlnet_cond=image,
- conditioning_scale=controlnet_conditioning_scale,
- guess_mode=guess_mode,
- return_dict=False,
- )
-
- if guess_mode and do_classifier_free_guidance:
- # Inferred ControlNet only for the conditional batch.
- # To apply the output of ControlNet to both the unconditional and conditional batches,
- # add 0 to the unconditional batch to keep it unchanged.
- down_block_res_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_block_res_samples]
- mid_block_res_sample = torch.cat([torch.zeros_like(mid_block_res_sample), mid_block_res_sample])
-
- # ref only part
- noise = randn_tensor(
- ref_image_latents.shape, generator=generator, device=device, dtype=ref_image_latents.dtype
- )
- ref_xt = self.scheduler.add_noise(
- ref_image_latents,
- noise,
- t.reshape(
- 1,
- ),
- )
- ref_xt = self.scheduler.scale_model_input(ref_xt, t)
-
- MODE = "write"
- self.unet(
- ref_xt,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- return_dict=False,
- )
-
- # predict the noise residual
- MODE = "read"
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- down_block_additional_residuals=down_block_res_samples,
- mid_block_additional_residual=mid_block_res_sample,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # If we do sequential model offloading, let's offload unet and controlnet
- # manually for max memory savings
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.unet.to("cpu")
- self.controlnet.to("cpu")
- torch.cuda.empty_cache()
-
- if not output_type == "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
- else:
- image = latents
- has_nsfw_concept = None
-
- if has_nsfw_concept is None:
- do_denormalize = [True] * image.shape[0]
- else:
- do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
-
- image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
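
The group-norm "read" branches in the patched forwards above all reduce to the same AdaIN-style operation: re-normalize the current feature map with the mean and standard deviation cached from the reference pass. A minimal, self-contained sketch of that statistic matching (function and variable names here are illustrative, not part of the pipeline):

```py
import torch

def adain_match(x: torch.Tensor, ref: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Re-normalize x with the per-channel spatial statistics of ref (dims 2, 3 = H, W)."""
    var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0)
    ref_var, ref_mean = torch.var_mean(ref, dim=(2, 3), keepdim=True, correction=0)
    std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
    ref_std = torch.maximum(ref_var, torch.zeros_like(ref_var) + eps) ** 0.5
    return ((x - mean) / std) * ref_std + ref_mean

x = torch.randn(2, 320, 32, 32)    # current UNet features
ref = torch.randn(2, 320, 32, 32)  # features cached during the "write" pass
print(adain_match(x, ref).shape)   # torch.Size([2, 320, 32, 32])
```
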
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/__init__.py
deleted file mode 100644
index 4997a2e4056bb291c557deef65957fc873ae9aa1..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from ...utils import (
- OptionalDependencyNotAvailable,
- is_torch_available,
- is_transformers_available,
-)
-
-
-try:
- if not (is_transformers_available() and is_torch_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import *
-else:
- from .pipeline_kandinsky2_2 import KandinskyV22Pipeline
- from .pipeline_kandinsky2_2_combined import (
- KandinskyV22CombinedPipeline,
- KandinskyV22Img2ImgCombinedPipeline,
- KandinskyV22InpaintCombinedPipeline,
- )
- from .pipeline_kandinsky2_2_controlnet import KandinskyV22ControlnetPipeline
- from .pipeline_kandinsky2_2_controlnet_img2img import KandinskyV22ControlnetImg2ImgPipeline
- from .pipeline_kandinsky2_2_img2img import KandinskyV22Img2ImgPipeline
- from .pipeline_kandinsky2_2_inpainting import KandinskyV22InpaintPipeline
- from .pipeline_kandinsky2_2_prior import KandinskyV22PriorPipeline
- from .pipeline_kandinsky2_2_prior_emb2emb import KandinskyV22PriorEmb2EmbPipeline
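
The `__init__` above follows diffusers' standard optional-dependency guard: probe for the required backends, and only expose the real pipeline classes when both `torch` and `transformers` are importable, otherwise fall back to dummy placeholder objects. A stripped-down illustration of the same pattern using only the standard library (names are illustrative, not the library's internals):

```py
import importlib.util

class OptionalDependencyNotAvailable(ImportError):
    """Raised when a required optional backend is missing."""

def _is_available(package: str) -> bool:
    # True if the package can be imported in this environment.
    return importlib.util.find_spec(package) is not None

try:
    if not (_is_available("transformers") and _is_available("torch")):
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    # diffusers substitutes dummy objects that raise a helpful error on use.
    KandinskyV22Pipeline = None
else:
    from diffusers import KandinskyV22Pipeline  # real import once backends exist
```
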
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py
deleted file mode 100644
index bb3922e77fd1c59a91180d3dc1d67faedf3a1e0c..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# Copyright 2022 The Music Spectrogram Diffusion Authors.
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-from typing import Any, Callable, List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ...models import T5FilmDecoder
-from ...schedulers import DDPMScheduler
-from ...utils import is_onnx_available, logging, randn_tensor
-
-
-if is_onnx_available():
- from ..onnx_utils import OnnxRuntimeModel
-
-from ..pipeline_utils import AudioPipelineOutput, DiffusionPipeline
-from .continous_encoder import SpectrogramContEncoder
-from .notes_encoder import SpectrogramNotesEncoder
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-TARGET_FEATURE_LENGTH = 256
-
-
-class SpectrogramDiffusionPipeline(DiffusionPipeline):
- r"""
- Pipeline for MIDI-conditioned audio generation via spectrogram diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- notes_encoder ([`SpectrogramNotesEncoder`]):
- Encoder for the tokenized MIDI note inputs.
- continuous_encoder ([`SpectrogramContEncoder`]):
- Encoder for the continuous spectrogram context from the previously generated segment.
- decoder ([`T5FilmDecoder`]):
- A [`T5FilmDecoder`] to denoise the encoded audio latents.
- scheduler ([`DDPMScheduler`]):
- A scheduler to be used in combination with `decoder` to denoise the encoded audio latents.
- melgan ([`OnnxRuntimeModel`]):
- Optional vocoder that converts the generated mel spectrograms to audio (requires ONNX Runtime).
- """
- _optional_components = ["melgan"]
-
- def __init__(
- self,
- notes_encoder: SpectrogramNotesEncoder,
- continuous_encoder: SpectrogramContEncoder,
- decoder: T5FilmDecoder,
- scheduler: DDPMScheduler,
- melgan: OnnxRuntimeModel if is_onnx_available() else Any,
- ) -> None:
- super().__init__()
-
- # From MELGAN
- self.min_value = math.log(1e-5) # Matches MelGAN training.
- self.max_value = 4.0 # Largest value for most examples
- self.n_dims = 128
-
- self.register_modules(
- notes_encoder=notes_encoder,
- continuous_encoder=continuous_encoder,
- decoder=decoder,
- scheduler=scheduler,
- melgan=melgan,
- )
-
- def scale_features(self, features, output_range=(-1.0, 1.0), clip=False):
- """Linearly scale features to network outputs range."""
- min_out, max_out = output_range
- if clip:
- features = torch.clip(features, self.min_value, self.max_value)
- # Scale to [0, 1].
- zero_one = (features - self.min_value) / (self.max_value - self.min_value)
- # Scale to [min_out, max_out].
- return zero_one * (max_out - min_out) + min_out
-
- def scale_to_features(self, outputs, input_range=(-1.0, 1.0), clip=False):
- """Invert by linearly scaling network outputs to features range."""
- min_out, max_out = input_range
- outputs = torch.clip(outputs, min_out, max_out) if clip else outputs
- # Scale to [0, 1].
- zero_one = (outputs - min_out) / (max_out - min_out)
- # Scale to [self.min_value, self.max_value].
- return zero_one * (self.max_value - self.min_value) + self.min_value
-
- def encode(self, input_tokens, continuous_inputs, continuous_mask):
- tokens_mask = input_tokens > 0
- tokens_encoded, tokens_mask = self.notes_encoder(
- encoder_input_tokens=input_tokens, encoder_inputs_mask=tokens_mask
- )
-
- continuous_encoded, continuous_mask = self.continuous_encoder(
- encoder_inputs=continuous_inputs, encoder_inputs_mask=continuous_mask
- )
-
- return [(tokens_encoded, tokens_mask), (continuous_encoded, continuous_mask)]
-
- def decode(self, encodings_and_masks, input_tokens, noise_time):
- timesteps = noise_time
- if not torch.is_tensor(timesteps):
- timesteps = torch.tensor([timesteps], dtype=torch.long, device=input_tokens.device)
- elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0:
- timesteps = timesteps[None].to(input_tokens.device)
-
- # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
- timesteps = timesteps * torch.ones(input_tokens.shape[0], dtype=timesteps.dtype, device=timesteps.device)
-
- logits = self.decoder(
- encodings_and_masks=encodings_and_masks, decoder_input_tokens=input_tokens, decoder_noise_time=timesteps
- )
- return logits
-
- @torch.no_grad()
- def __call__(
- self,
- input_tokens: List[List[int]],
- generator: Optional[torch.Generator] = None,
- num_inference_steps: int = 100,
- return_dict: bool = True,
- output_type: str = "numpy",
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- ) -> Union[AudioPipelineOutput, Tuple]:
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
- r"""
- The call function to the pipeline for generation.
-
- Args:
- input_tokens (`List[List[int]]`):
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- num_inference_steps (`int`, *optional*, defaults to 100):
- The number of denoising steps. More denoising steps usually lead to a higher quality audio at the
- expense of slower inference.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.AudioPipelineOutput`] instead of a plain tuple.
- output_type (`str`, *optional*, defaults to `"numpy"`):
- The output format of the generated audio.
- callback (`Callable`, *optional*):
- A function that calls every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
-
- Example:
-
- ```py
- >>> from diffusers import SpectrogramDiffusionPipeline, MidiProcessor
-
- >>> pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion")
- >>> pipe = pipe.to("cuda")
- >>> processor = MidiProcessor()
-
- >>> # Download MIDI from: wget http://www.piano-midi.de/midis/beethoven/beethoven_hammerklavier_2.mid
- >>> output = pipe(processor("beethoven_hammerklavier_2.mid"))
-
- >>> audio = output.audios[0]
- ```
-
- Returns:
- [`pipelines.AudioPipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`pipelines.AudioPipelineOutput`] is returned, otherwise a `tuple` is
- returned where the first element is a list with the generated audio.
- """
-
- pred_mel = np.zeros([1, TARGET_FEATURE_LENGTH, self.n_dims], dtype=np.float32)
- full_pred_mel = np.zeros([1, 0, self.n_dims], np.float32)
- ones = torch.ones((1, TARGET_FEATURE_LENGTH), dtype=bool, device=self.device)
-
- for i, encoder_input_tokens in enumerate(input_tokens):
- if i == 0:
- encoder_continuous_inputs = torch.from_numpy(pred_mel[:1].copy()).to(
- device=self.device, dtype=self.decoder.dtype
- )
- # The first chunk has no previous context.
- encoder_continuous_mask = torch.zeros((1, TARGET_FEATURE_LENGTH), dtype=bool, device=self.device)
- else:
- # The full song pipeline does not feed in a context feature, so the mask
- # will be all 0s after the feature converter. Because we know we're
- # feeding in a full context chunk from the previous prediction, set it
- # to all 1s.
- encoder_continuous_mask = ones
-
- encoder_continuous_inputs = self.scale_features(
- encoder_continuous_inputs, output_range=[-1.0, 1.0], clip=True
- )
-
- encodings_and_masks = self.encode(
- input_tokens=torch.IntTensor([encoder_input_tokens]).to(device=self.device),
- continuous_inputs=encoder_continuous_inputs,
- continuous_mask=encoder_continuous_mask,
- )
-
- # Sample gaussian noise with the same shape as encoder_continuous_inputs to begin the denoising loop
- x = randn_tensor(
- shape=encoder_continuous_inputs.shape,
- generator=generator,
- device=self.device,
- dtype=self.decoder.dtype,
- )
-
- # set step values
- self.scheduler.set_timesteps(num_inference_steps)
-
- # Denoising diffusion loop
- for j, t in enumerate(self.progress_bar(self.scheduler.timesteps)):
- output = self.decode(
- encodings_and_masks=encodings_and_masks,
- input_tokens=x,
- noise_time=t / self.scheduler.config.num_train_timesteps, # rescale to [0, 1)
- )
-
- # Compute previous output: x_t -> x_t-1
- x = self.scheduler.step(output, t, x, generator=generator).prev_sample
-
- mel = self.scale_to_features(x, input_range=[-1.0, 1.0])
- encoder_continuous_inputs = mel[:1]
- pred_mel = mel.cpu().float().numpy()
-
- full_pred_mel = np.concatenate([full_pred_mel, pred_mel[:1]], axis=1)
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, full_pred_mel)
-
- logger.info("Generated segment %d", i)
-
- if output_type == "numpy" and not is_onnx_available():
- raise ValueError(
- "Cannot return output in 'numpy' format if ONNX is not available. Make sure to have ONNX installed or set 'output_type' to 'mel'."
- )
- elif output_type == "numpy" and self.melgan is None:
- raise ValueError(
- "Cannot return output in 'numpy' format if melgan component is not defined. Make sure to define `self.melgan` or set 'output_type' to 'mel'."
- )
-
- if output_type == "numpy":
- output = self.melgan(input_features=full_pred_mel.astype(np.float32))
- else:
- output = full_pred_mel
-
- if not return_dict:
- return (output,)
-
- return AudioPipelineOutput(audios=output)
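
The `scale_features` / `scale_to_features` pair above is just an affine map between the MelGAN feature range `[log(1e-5), 4.0]` and the network's `[-1, 1]` output range, applied once on the way in and inverted once on the way out. A quick numeric check of that round trip (standalone sketch with the constants copied from the pipeline's `__init__`; optional clipping omitted):

```py
import math
import torch

min_value, max_value = math.log(1e-5), 4.0  # MelGAN feature range used by the pipeline

def scale_features(features):
    # [min_value, max_value] -> [-1, 1]
    zero_one = (features - min_value) / (max_value - min_value)
    return zero_one * 2.0 - 1.0

def scale_to_features(outputs):
    # [-1, 1] -> [min_value, max_value]
    zero_one = (outputs + 1.0) / 2.0
    return zero_one * (max_value - min_value) + min_value

mel = torch.rand(1, 256, 128) * (max_value - min_value) + min_value
roundtrip = scale_to_features(scale_features(mel))
print(torch.allclose(mel, roundtrip, atol=1e-4))  # True
```
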
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py
deleted file mode 100644
index 99edce7ef8575ea8b945905eb4bc176c264fb2d6..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py
+++ /dev/null
@@ -1,645 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Any, Callable, Dict, List, Optional, Union
-
-import numpy as np
-import torch
-from transformers import CLIPTextModel, CLIPTokenizer
-
-from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
-from ...models import AutoencoderKL, UNet3DConditionModel
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline
-from . import TextToVideoSDPipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import torch
- >>> from diffusers import TextToVideoSDPipeline
- >>> from diffusers.utils import export_to_video
-
- >>> pipe = TextToVideoSDPipeline.from_pretrained(
- ... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
- ... )
- >>> pipe.enable_model_cpu_offload()
-
- >>> prompt = "Spiderman is surfing"
- >>> video_frames = pipe(prompt).frames
- >>> video_path = export_to_video(video_frames)
- >>> video_path
- ```
-"""
-
-
-def tensor2vid(video: torch.Tensor, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -> List[np.ndarray]:
- # This code is copied from https://github.com/modelscope/modelscope/blob/1509fdb973e5871f37148a4b5e5964cafd43e64d/modelscope/pipelines/multi_modal/text_to_video_synthesis_pipeline.py#L78
- # reshape to ncfhw
- mean = torch.tensor(mean, device=video.device).reshape(1, -1, 1, 1, 1)
- std = torch.tensor(std, device=video.device).reshape(1, -1, 1, 1, 1)
- # unnormalize back to [0,1]
- video = video.mul_(std).add_(mean)
- video.clamp_(0, 1)
- # prepare the final outputs
- i, c, f, h, w = video.shape
- images = video.permute(2, 3, 0, 4, 1).reshape(
- f, h, i * w, c
- ) # 1st (frames, h, batch_size, w, c) 2nd (frames, h, batch_size * w, c)
- images = images.unbind(dim=0) # prepare a list of individual (consecutive) frames
- images = [(image.cpu().numpy() * 255).astype("uint8") for image in images] # f h w c
- return images
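
As a sanity check on the permutation in `tensor2vid`, the standalone sketch below pushes a dummy batch through the same reshape and confirms the result is `f` frames, each of shape `(h, batch_size * w, c)` (sizes are illustrative):

```py
import torch

video = torch.rand(2, 3, 5, 8, 8)  # (batch, channels, frames, height, width), already in [0, 1]
i, c, f, h, w = video.shape
# Same permute/reshape as tensor2vid: frames become the leading dimension,
# batch items are tiled side by side along the width axis.
images = video.permute(2, 3, 0, 4, 1).reshape(f, h, i * w, c).unbind(dim=0)
frames = [(img.numpy() * 255).astype("uint8") for img in images]
print(len(frames), frames[0].shape)  # 5 (8, 16, 3)
```
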
-
-
-class TextToVideoSDPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin):
- r"""
- Pipeline for text-to-video generation.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- tokenizer (`CLIPTokenizer`):
- A [`~transformers.CLIPTokenizer`] to tokenize text.
- unet ([`UNet3DConditionModel`]):
- A [`UNet3DConditionModel`] to denoise the encoded video latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet3DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
- def enable_vae_tiling(self):
- r"""
- Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
- compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
- processing larger images.
- """
- self.vae.enable_tiling()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
- def disable_vae_tiling(self):
- r"""
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_tiling()
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
- time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs.
- Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
- iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- hook = None
- for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt=None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
- # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = text_inputs.attention_mask.to(device)
- else:
- attention_mask = None
-
- prompt_embeds = self.text_encoder(
- text_input_ids.to(device),
- attention_mask=attention_mask,
- )
- prompt_embeds = prompt_embeds[0]
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = uncond_input.attention_mask.to(device)
- else:
- attention_mask = None
-
- negative_prompt_embeds = self.text_encoder(
- uncond_input.input_ids.to(device),
- attention_mask=attention_mask,
- )
- negative_prompt_embeds = negative_prompt_embeds[0]
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
-
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- def decode_latents(self, latents):
- latents = 1 / self.vae.config.scaling_factor * latents
-
- batch_size, channels, num_frames, height, width = latents.shape
- latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width)
-
- image = self.vae.decode(latents).sample
- video = (
- image[None, :]
- .reshape(
- (
- batch_size,
- num_frames,
- -1,
- )
- + image.shape[2:]
- )
- .permute(0, 2, 1, 3, 4)
- )
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- video = video.float()
- return video
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
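
The kwarg filtering above is a small reusable pattern: the pipeline inspects the scheduler's `step()` signature and only forwards `eta` and `generator` when they are actually accepted. A minimal standalone sketch of the same idea, using a toy stand-in rather than a real diffusers scheduler:

```python
import inspect

def step(model_output, timestep, sample, generator=None):
    """Toy stand-in for a scheduler's step(); only its signature matters here."""
    return sample

# Forward a kwarg only if the callable actually declares it.
accepts_eta = "eta" in set(inspect.signature(step).parameters.keys())
accepts_generator = "generator" in set(inspect.signature(step).parameters.keys())

extra_step_kwargs = {}
if accepts_eta:
    extra_step_kwargs["eta"] = 0.0
if accepts_generator:
    extra_step_kwargs["generator"] = None

print(extra_step_kwargs)  # {'generator': None}: eta is dropped because step() does not accept it
```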
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
- def check_inputs(
- self,
- prompt,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- def prepare_latents(
- self, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None
- ):
- shape = (
- batch_size,
- num_channels_latents,
- num_frames,
- height // self.vae_scale_factor,
- width // self.vae_scale_factor,
- )
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_frames: int = 16,
- num_inference_steps: int = 50,
- guidance_scale: float = 9.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "np",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- ):
- r"""
- The call function to the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The height in pixels of the generated video.
- width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The width in pixels of the generated video.
- num_frames (`int`, *optional*, defaults to 16):
-                The number of video frames that are generated. Defaults to 16 frames which at 8 frames per second
- amounts to 2 seconds of video.
- num_inference_steps (`int`, *optional*, defaults to 50):
-                The number of denoising steps. More denoising steps usually lead to higher quality videos at the
- expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 9.0):
- A higher guidance scale value encourages the model to generate images closely linked to the text
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide what to not include in image generation. If not defined, you need to
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
- `(batch_size, num_channel, num_frames, height, width)`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
- provided, text embeddings are generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
- not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- output_type (`str`, *optional*, defaults to `"np"`):
-                The output format of the generated video. Choose between `torch.FloatTensor` and `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] instead
- of a plain tuple.
- callback (`Callable`, *optional*):
-                A function that is called every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
- [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
-
- Examples:
-
- Returns:
- [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] is
- returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.
- """
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- num_images_per_prompt = 1
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_encoder_lora_scale = (
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
- )
- prompt_embeds = self._encode_prompt(
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- lora_scale=text_encoder_lora_scale,
- )
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- num_frames,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 7. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # reshape latents
- bsz, channel, frames, width, height = latents.shape
- latents = latents.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height)
- noise_pred = noise_pred.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # reshape latents back
- latents = latents[None, :].reshape(bsz, frames, channel, width, height).permute(0, 2, 1, 3, 4)
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- if output_type == "latent":
- return TextToVideoSDPipelineOutput(frames=latents)
-
- video_tensor = self.decode_latents(latents)
-
- if output_type == "pt":
- video = video_tensor
- else:
- video = tensor2vid(video_tensor)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (video,)
-
- return TextToVideoSDPipelineOutput(frames=video)
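
For reference, the classifier-free guidance step inside the denoising loop above can be exercised in isolation. The sketch below uses dummy tensors in place of the real text encoder and UNet outputs (the 77/768 embedding shape is an assumption borrowed from CLIP-style encoders, not read from this pipeline):

```python
import torch

# Stand-ins for one denoising step: batch of 2 prompts, 4 latent channels, 16 frames, 8x8 latents.
latents = torch.randn(2, 4, 16, 8, 8)
prompt_embeds = torch.randn(2, 77, 768)           # conditional text embeddings (dummy)
negative_prompt_embeds = torch.randn(2, 77, 768)  # unconditional ("") embeddings (dummy)

# Classifier-free guidance: concatenate [uncond, cond] so the UNet runs once on a doubled batch.
latent_model_input = torch.cat([latents] * 2)
encoder_hidden_states = torch.cat([negative_prompt_embeds, prompt_embeds])

# Pretend UNet output with the same shape as the latent input; the real pipeline calls self.unet(...).
noise_pred = torch.randn_like(latent_model_input)

# Split the doubled batch back apart and blend the two predictions.
guidance_scale = 9.0
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
assert noise_pred.shape == latents.shape
```

The same concat-then-chunk trick is why `_encode_prompt` above returns the negative and positive embeddings stacked into a single batch: one forward pass through the UNet serves both branches of the guidance formula.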
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/torch_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/torch_utils.py
deleted file mode 100644
index 99ea4d8cf1d0b04b8f43d8d7a331247822374bcf..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/torch_utils.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-PyTorch utilities: Utilities related to PyTorch
-"""
-from typing import List, Optional, Tuple, Union
-
-from . import logging
-from .import_utils import is_torch_available, is_torch_version
-
-
-if is_torch_available():
- import torch
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-try:
- from torch._dynamo import allow_in_graph as maybe_allow_in_graph
-except (ImportError, ModuleNotFoundError):
-
- def maybe_allow_in_graph(cls):
- return cls
-
-
-def randn_tensor(
- shape: Union[Tuple, List],
- generator: Optional[Union[List["torch.Generator"], "torch.Generator"]] = None,
- device: Optional["torch.device"] = None,
- dtype: Optional["torch.dtype"] = None,
- layout: Optional["torch.layout"] = None,
-):
- """A helper function to create random tensors on the desired `device` with the desired `dtype`. When
-    passing a list of generators, you can seed each batch item individually. If CPU generators are passed, the tensor
- is always created on the CPU.
- """
- # device on which tensor is created defaults to device
- rand_device = device
- batch_size = shape[0]
-
- layout = layout or torch.strided
- device = device or torch.device("cpu")
-
- if generator is not None:
- gen_device_type = generator.device.type if not isinstance(generator, list) else generator[0].device.type
- if gen_device_type != device.type and gen_device_type == "cpu":
- rand_device = "cpu"
- if device != "mps":
- logger.info(
- f"The passed generator was created on 'cpu' even though a tensor on {device} was expected."
- f" Tensors will be created on 'cpu' and then moved to {device}. Note that one can probably"
-                    f" slightly speed up this function by passing a generator that was created on the {device} device."
- )
- elif gen_device_type != device.type and gen_device_type == "cuda":
- raise ValueError(f"Cannot generate a {device} tensor from a generator of type {gen_device_type}.")
-
- # make sure generator list of length 1 is treated like a non-list
- if isinstance(generator, list) and len(generator) == 1:
- generator = generator[0]
-
- if isinstance(generator, list):
- shape = (1,) + shape[1:]
- latents = [
- torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype, layout=layout)
- for i in range(batch_size)
- ]
- latents = torch.cat(latents, dim=0).to(device)
- else:
- latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device)
-
- return latents
-
-
-def is_compiled_module(module):
- """Check whether the module was compiled with torch.compile()"""
- if is_torch_version("<", "2.0.0") or not hasattr(torch, "_dynamo"):
- return False
- return isinstance(module, torch._dynamo.eval_frame.OptimizedModule)
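
A short usage sketch for `randn_tensor` as defined above, showing why a list of generators is useful: each batch item gets its own seed and is reproducible independently of the rest of the batch. The import path is an assumption about this copy of the tree.

```python
import torch

from diffusers.utils.torch_utils import randn_tensor  # path assumed; the function is defined above

shape = (2, 4, 16, 8, 8)  # (batch, channels, frames, height, width), as used by prepare_latents

# One generator per batch item: item i depends only on seed i.
gens = [torch.Generator("cpu").manual_seed(i) for i in range(shape[0])]
latents = randn_tensor(shape, generator=gens, device=torch.device("cpu"), dtype=torch.float32)

# Re-creating the generators with the same seeds reproduces the batch exactly.
gens_again = [torch.Generator("cpu").manual_seed(i) for i in range(shape[0])]
latents_again = randn_tensor(shape, generator=gens_again, device=torch.device("cpu"), dtype=torch.float32)
assert torch.equal(latents, latents_again)
```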
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_config.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_config.py
deleted file mode 100644
index 246dd3bf9e537f341bfdae04d83dea400d3cafb9..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_config.py
+++ /dev/null
@@ -1,288 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import tempfile
-import unittest
-
-from diffusers import (
- DDIMScheduler,
- DDPMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- PNDMScheduler,
- logging,
-)
-from diffusers.configuration_utils import ConfigMixin, register_to_config
-from diffusers.utils.testing_utils import CaptureLogger
-
-
-class SampleObject(ConfigMixin):
- config_name = "config.json"
-
- @register_to_config
- def __init__(
- self,
- a=2,
- b=5,
- c=(2, 5),
- d="for diffusion",
- e=[1, 3],
- ):
- pass
-
-
-class SampleObject2(ConfigMixin):
- config_name = "config.json"
-
- @register_to_config
- def __init__(
- self,
- a=2,
- b=5,
- c=(2, 5),
- d="for diffusion",
- f=[1, 3],
- ):
- pass
-
-
-class SampleObject3(ConfigMixin):
- config_name = "config.json"
-
- @register_to_config
- def __init__(
- self,
- a=2,
- b=5,
- c=(2, 5),
- d="for diffusion",
- e=[1, 3],
- f=[1, 3],
- ):
- pass
-
-
-class SampleObject4(ConfigMixin):
- config_name = "config.json"
-
- @register_to_config
- def __init__(
- self,
- a=2,
- b=5,
- c=(2, 5),
- d="for diffusion",
- e=[1, 5],
- f=[5, 4],
- ):
- pass
-
-
-class ConfigTester(unittest.TestCase):
- def test_load_not_from_mixin(self):
- with self.assertRaises(ValueError):
- ConfigMixin.load_config("dummy_path")
-
- def test_register_to_config(self):
- obj = SampleObject()
- config = obj.config
- assert config["a"] == 2
- assert config["b"] == 5
- assert config["c"] == (2, 5)
- assert config["d"] == "for diffusion"
- assert config["e"] == [1, 3]
-
- # init ignore private arguments
- obj = SampleObject(_name_or_path="lalala")
- config = obj.config
- assert config["a"] == 2
- assert config["b"] == 5
- assert config["c"] == (2, 5)
- assert config["d"] == "for diffusion"
- assert config["e"] == [1, 3]
-
- # can override default
- obj = SampleObject(c=6)
- config = obj.config
- assert config["a"] == 2
- assert config["b"] == 5
- assert config["c"] == 6
- assert config["d"] == "for diffusion"
- assert config["e"] == [1, 3]
-
- # can use positional arguments.
- obj = SampleObject(1, c=6)
- config = obj.config
- assert config["a"] == 1
- assert config["b"] == 5
- assert config["c"] == 6
- assert config["d"] == "for diffusion"
- assert config["e"] == [1, 3]
-
- def test_save_load(self):
- obj = SampleObject()
- config = obj.config
-
- assert config["a"] == 2
- assert config["b"] == 5
- assert config["c"] == (2, 5)
- assert config["d"] == "for diffusion"
- assert config["e"] == [1, 3]
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- obj.save_config(tmpdirname)
- new_obj = SampleObject.from_config(SampleObject.load_config(tmpdirname))
- new_config = new_obj.config
-
- # unfreeze configs
- config = dict(config)
- new_config = dict(new_config)
-
- assert config.pop("c") == (2, 5) # instantiated as tuple
- assert new_config.pop("c") == [2, 5] # saved & loaded as list because of json
- config.pop("_use_default_values")
- assert config == new_config
-
- def test_load_ddim_from_pndm(self):
- logger = logging.get_logger("diffusers.configuration_utils")
- # 30 for warning
- logger.setLevel(30)
-
- with CaptureLogger(logger) as cap_logger:
- ddim = DDIMScheduler.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler"
- )
-
- assert ddim.__class__ == DDIMScheduler
- # no warning should be thrown
- assert cap_logger.out == ""
-
- def test_load_euler_from_pndm(self):
- logger = logging.get_logger("diffusers.configuration_utils")
- # 30 for warning
- logger.setLevel(30)
-
- with CaptureLogger(logger) as cap_logger:
- euler = EulerDiscreteScheduler.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler"
- )
-
- assert euler.__class__ == EulerDiscreteScheduler
- # no warning should be thrown
- assert cap_logger.out == ""
-
- def test_load_euler_ancestral_from_pndm(self):
- logger = logging.get_logger("diffusers.configuration_utils")
- # 30 for warning
- logger.setLevel(30)
-
- with CaptureLogger(logger) as cap_logger:
- euler = EulerAncestralDiscreteScheduler.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler"
- )
-
- assert euler.__class__ == EulerAncestralDiscreteScheduler
- # no warning should be thrown
- assert cap_logger.out == ""
-
- def test_load_pndm(self):
- logger = logging.get_logger("diffusers.configuration_utils")
- # 30 for warning
- logger.setLevel(30)
-
- with CaptureLogger(logger) as cap_logger:
- pndm = PNDMScheduler.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler"
- )
-
- assert pndm.__class__ == PNDMScheduler
- # no warning should be thrown
- assert cap_logger.out == ""
-
- def test_overwrite_config_on_load(self):
- logger = logging.get_logger("diffusers.configuration_utils")
- # 30 for warning
- logger.setLevel(30)
-
- with CaptureLogger(logger) as cap_logger:
- ddpm = DDPMScheduler.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch",
- subfolder="scheduler",
- prediction_type="sample",
- beta_end=8,
- )
-
- with CaptureLogger(logger) as cap_logger_2:
- ddpm_2 = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256", beta_start=88)
-
- assert ddpm.__class__ == DDPMScheduler
- assert ddpm.config.prediction_type == "sample"
- assert ddpm.config.beta_end == 8
- assert ddpm_2.config.beta_start == 88
-
- # no warning should be thrown
- assert cap_logger.out == ""
- assert cap_logger_2.out == ""
-
- def test_load_dpmsolver(self):
- logger = logging.get_logger("diffusers.configuration_utils")
- # 30 for warning
- logger.setLevel(30)
-
- with CaptureLogger(logger) as cap_logger:
- dpm = DPMSolverMultistepScheduler.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler"
- )
-
- assert dpm.__class__ == DPMSolverMultistepScheduler
- # no warning should be thrown
- assert cap_logger.out == ""
-
- def test_use_default_values(self):
- # let's first save a config that should be in the form
- # a=2,
- # b=5,
- # c=(2, 5),
- # d="for diffusion",
- # e=[1, 3],
-
- config = SampleObject()
-
- config_dict = {k: v for k, v in config.config.items() if not k.startswith("_")}
-
- # make sure that default config has all keys in `_use_default_values`
- assert set(config_dict.keys()) == set(config.config._use_default_values)
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- config.save_config(tmpdirname)
-
- # now loading it with SampleObject2 should put f into `_use_default_values`
- config = SampleObject2.from_config(tmpdirname)
-
- assert "f" in config._use_default_values
- assert config.f == [1, 3]
-
-        # now loading the config, should **NOT** use [1, 3] for `f`, but the default [5, 4] value
- # **BECAUSE** it is part of `config._use_default_values`
- new_config = SampleObject4.from_config(config.config)
- assert new_config.f == [5, 4]
-
- config.config._use_default_values.pop()
- new_config_2 = SampleObject4.from_config(config.config)
- assert new_config_2.f == [1, 3]
-
- # Nevertheless "e" should still be correctly loaded to [1, 3] from SampleObject2 instead of defaulting to [1, 5]
- assert new_config_2.e == [1, 3]
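
The heart of these tests is the `ConfigMixin` save/load round trip: `@register_to_config` captures the `__init__` arguments, `save_config` serializes them to `config.json`, and `load_config`/`from_config` rebuild an equivalent object, with tuples coming back as lists because of the JSON serialization. A minimal sketch of that cycle, reusing the `SampleObject` class defined at the top of this file:

```python
import tempfile

obj = SampleObject(b=7)  # override one default; c stays the tuple (2, 5)

with tempfile.TemporaryDirectory() as tmpdir:
    obj.save_config(tmpdir)  # writes tmpdir/config.json
    loaded = SampleObject.from_config(SampleObject.load_config(tmpdir))

assert loaded.config["b"] == 7       # the override survives the round trip
assert loaded.config["c"] == [2, 5]  # the tuple default comes back as a list after JSON serialization
```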
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py
deleted file mode 100644
index baa4a5affc9b3ead0080d993b14f0d00392c2de5..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = 'mask_rcnn_r50_fpg_crop640_50e_coco.py'
-
-model = dict(
- neck=dict(out_channels=128, inter_channels=128),
- rpn_head=dict(in_channels=128),
- roi_head=dict(
- bbox_roi_extractor=dict(out_channels=128),
- bbox_head=dict(in_channels=128),
- mask_roi_extractor=dict(out_channels=128),
- mask_head=dict(in_channels=128)))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py
deleted file mode 100644
index 6e1c5d0cadfb9fb3a4f8645e28a8e67fc499e900..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
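
Both config files above are thin overrides: `_base_` points at a complete base config, and the local `model = dict(...)` is deep-merged on top of it, so only the listed keys change. The sketch below is a simplified illustration of that merge semantics in plain Python (hypothetical base values; MMCV's `Config` implements the real merge, including extras such as `_delete_`):

```python
def merge(base, override):
    """Recursively merge `override` into `base`, dict by dict (simplified)."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# Hypothetical slice of the inherited base config.
base_model = {
    "neck": {"type": "FPG", "out_channels": 256, "inter_channels": 256},
    "rpn_head": {"type": "RPNHead", "in_channels": 256},
}

# The override from mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py above.
child = {
    "neck": {"out_channels": 128, "inter_channels": 128},
    "rpn_head": {"in_channels": 128},
}

merged = merge(base_model, child)
assert merged["neck"] == {"type": "FPG", "out_channels": 128, "inter_channels": 128}
assert merged["rpn_head"] == {"type": "RPNHead", "in_channels": 128}
```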
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/README.md
deleted file mode 100644
index 66f3dc286f066c50ef54e98de036ef0f5056e246..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/README.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Pyramid Scene Parsing Network
-
-## Introduction
-
-
-
-```latex
-@inproceedings{zhao2017pspnet,
- title={Pyramid Scene Parsing Network},
- author={Zhao, Hengshuang and Shi, Jianping and Qi, Xiaojuan and Wang, Xiaogang and Jia, Jiaya},
- booktitle={CVPR},
- year={2017}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | --------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| PSPNet | R-50-D8 | 512x1024 | 40000 | 6.1 | 4.07 | 77.85 | 79.18 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338.log.json) |
-| PSPNet | R-101-D8 | 512x1024 | 40000 | 9.6 | 2.68 | 78.34 | 79.74 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_40k_cityscapes/pspnet_r101-d8_512x1024_40k_cityscapes_20200604_232751-467e7cf4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_40k_cityscapes/pspnet_r101-d8_512x1024_40k_cityscapes_20200604_232751.log.json) |
-| PSPNet | R-50-D8 | 769x769 | 40000 | 6.9 | 1.76 | 78.26 | 79.88 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_40k_cityscapes/pspnet_r50-d8_769x769_40k_cityscapes_20200606_112725-86638686.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_40k_cityscapes/pspnet_r50-d8_769x769_40k_cityscapes_20200606_112725.log.json) |
-| PSPNet | R-101-D8 | 769x769 | 40000 | 10.9 | 1.15 | 79.08 | 80.28 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_40k_cityscapes/pspnet_r101-d8_769x769_40k_cityscapes_20200606_112753-61c6f5be.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_40k_cityscapes/pspnet_r101-d8_769x769_40k_cityscapes_20200606_112753.log.json) |
-| PSPNet | R-18-D8 | 512x1024 | 80000 | 1.7 | 15.71 | 74.87 | 76.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes/pspnet_r18-d8_512x1024_80k_cityscapes_20201225_021458-09ffa746.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes/pspnet_r18-d8_512x1024_80k_cityscapes-20201225_021458.log.json) |
-| PSPNet | R-50-D8 | 512x1024 | 80000 | - | - | 78.55 | 79.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_80k_cityscapes/pspnet_r50-d8_512x1024_80k_cityscapes_20200606_112131-2376f12b.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_80k_cityscapes/pspnet_r50-d8_512x1024_80k_cityscapes_20200606_112131.log.json) |
-| PSPNet | R-101-D8 | 512x1024 | 80000 | - | - | 79.76 | 81.01 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_80k_cityscapes/pspnet_r101-d8_512x1024_80k_cityscapes_20200606_112211-e1e1100f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_80k_cityscapes/pspnet_r101-d8_512x1024_80k_cityscapes_20200606_112211.log.json) |
-| PSPNet | R-18-D8 | 769x769 | 80000 | 1.9 | 6.20 | 75.90 | 77.86 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_769x769_80k_cityscapes/pspnet_r18-d8_769x769_80k_cityscapes_20201225_021458-3deefc62.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_769x769_80k_cityscapes/pspnet_r18-d8_769x769_80k_cityscapes-20201225_021458.log.json) |
-| PSPNet | R-50-D8 | 769x769 | 80000 | - | - | 79.59 | 80.69 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_80k_cityscapes/pspnet_r50-d8_769x769_80k_cityscapes_20200606_210121-5ccf03dd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_80k_cityscapes/pspnet_r50-d8_769x769_80k_cityscapes_20200606_210121.log.json) |
-| PSPNet | R-101-D8 | 769x769 | 80000 | - | - | 79.77 | 81.06 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_80k_cityscapes/pspnet_r101-d8_769x769_80k_cityscapes_20200606_225055-dba412fa.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_80k_cityscapes/pspnet_r101-d8_769x769_80k_cityscapes_20200606_225055.log.json) |
-| PSPNet | R-18b-D8 | 512x1024 | 80000 | 1.5 | 16.28 | 74.23 | 75.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_512x1024_80k_cityscapes/pspnet_r18b-d8_512x1024_80k_cityscapes_20201226_063116-26928a60.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_512x1024_80k_cityscapes/pspnet_r18b-d8_512x1024_80k_cityscapes-20201226_063116.log.json) |
-| PSPNet | R-50b-D8 | 512x1024 | 80000 | 6.0 | 4.30 | 78.22 | 79.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_512x1024_80k_cityscapes/pspnet_r50b-d8_512x1024_80k_cityscapes_20201225_094315-6344287a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_512x1024_80k_cityscapes/pspnet_r50b-d8_512x1024_80k_cityscapes-20201225_094315.log.json) |
-| PSPNet | R-101b-D8 | 512x1024 | 80000 | 9.5 | 2.76 | 79.69 | 80.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes/pspnet_r101b-d8_512x1024_80k_cityscapes_20201226_170012-3a4d38ab.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes/pspnet_r101b-d8_512x1024_80k_cityscapes-20201226_170012.log.json) |
-| PSPNet | R-18b-D8 | 769x769 | 80000 | 1.7 | 6.41 | 74.92 | 76.90 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes/pspnet_r18b-d8_769x769_80k_cityscapes_20201226_080942-bf98d186.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes/pspnet_r18b-d8_769x769_80k_cityscapes-20201226_080942.log.json) |
-| PSPNet | R-50b-D8 | 769x769 | 80000 | 6.8 | 1.88 | 78.50 | 79.96 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_769x769_80k_cityscapes/pspnet_r50b-d8_769x769_80k_cityscapes_20201225_094316-4c643cf6.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_769x769_80k_cityscapes/pspnet_r50b-d8_769x769_80k_cityscapes-20201225_094316.log.json) |
-| PSPNet | R-101b-D8 | 769x769 | 80000 | 10.8 | 1.17 | 78.87 | 80.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes/pspnet_r101b-d8_769x769_80k_cityscapes_20201226_171823-f0e7c293.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes/pspnet_r101b-d8_769x769_80k_cityscapes-20201226_171823.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| PSPNet | R-50-D8 | 512x512 | 80000 | 8.5 | 23.53 | 41.13 | 41.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_80k_ade20k/pspnet_r50-d8_512x512_80k_ade20k_20200615_014128-15a8b914.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_80k_ade20k/pspnet_r50-d8_512x512_80k_ade20k_20200615_014128.log.json) |
-| PSPNet | R-101-D8 | 512x512 | 80000 | 12 | 15.30 | 43.57 | 44.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_80k_ade20k/pspnet_r101-d8_512x512_80k_ade20k_20200614_031423-b6e782f0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_80k_ade20k/pspnet_r101-d8_512x512_80k_ade20k_20200614_031423.log.json) |
-| PSPNet | R-50-D8 | 512x512 | 160000 | - | - | 42.48 | 43.44 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_160k_ade20k/pspnet_r50-d8_512x512_160k_ade20k_20200615_184358-1890b0bd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_160k_ade20k/pspnet_r50-d8_512x512_160k_ade20k_20200615_184358.log.json) |
-| PSPNet | R-101-D8 | 512x512 | 160000 | - | - | 44.39 | 45.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_160k_ade20k/pspnet_r101-d8_512x512_160k_ade20k_20200615_100650-967c316f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_160k_ade20k/pspnet_r101-d8_512x512_160k_ade20k_20200615_100650.log.json) |
-
-### Pascal VOC 2012 + Aug
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| PSPNet | R-50-D8 | 512x512 | 20000 | 6.1 | 23.59 | 76.78 | 77.61 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_20k_voc12aug/pspnet_r50-d8_512x512_20k_voc12aug_20200617_101958-ed5dfbd9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_20k_voc12aug/pspnet_r50-d8_512x512_20k_voc12aug_20200617_101958.log.json) |
-| PSPNet | R-101-D8 | 512x512 | 20000 | 9.6 | 15.02 | 78.47 | 79.25 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_20k_voc12aug/pspnet_r101-d8_512x512_20k_voc12aug_20200617_102003-4aef3c9a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_20k_voc12aug/pspnet_r101-d8_512x512_20k_voc12aug_20200617_102003.log.json) |
-| PSPNet | R-50-D8 | 512x512 | 40000 | - | - | 77.29 | 78.48 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_40k_voc12aug/pspnet_r50-d8_512x512_40k_voc12aug_20200613_161222-ae9c1b8c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_40k_voc12aug/pspnet_r50-d8_512x512_40k_voc12aug_20200613_161222.log.json) |
-| PSPNet | R-101-D8 | 512x512 | 40000 | - | - | 78.52 | 79.57 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_40k_voc12aug/pspnet_r101-d8_512x512_40k_voc12aug_20200613_161222-bc933b18.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_40k_voc12aug/pspnet_r101-d8_512x512_40k_voc12aug_20200613_161222.log.json) |
-
-### Pascal Context
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| PSPNet | R-101-D8 | 480x480 | 40000 | 8.8 | 9.68 | 46.60 | 47.78 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context/pspnet_r101-d8_480x480_40k_pascal_context_20200911_211210-bf0f5d7c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context/pspnet_r101-d8_480x480_40k_pascal_context-20200911_211210.log.json) |
-| PSPNet | R-101-D8 | 480x480 | 80000 | - | - | 46.03 | 47.15 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_80k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context/pspnet_r101-d8_480x480_80k_pascal_context_20200911_190530-c86d6233.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context/pspnet_r101-d8_480x480_80k_pascal_context-20200911_190530.log.json) |
-
-### Pascal Context 59
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| PSPNet | R-101-D8 | 480x480 | 40000 | - | - | 52.02 | 53.54 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context_59/pspnet_r101-d8_480x480_40k_pascal_context_59_20210416_114524-86d44cd4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context_59/pspnet_r101-d8_480x480_40k_pascal_context_59-20210416_114524.log.json) |
-| PSPNet | R-101-D8 | 480x480 | 80000 | - | - | 52.47 | 53.99 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_80k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context_59/pspnet_r101-d8_480x480_80k_pascal_context_59_20210416_114418-fa6caaa2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context_59/pspnet_r101-d8_480x480_80k_pascal_context_59-20210416_114418.log.json) |
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddpm.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddpm.py
deleted file mode 100644
index f71a44af48c8cba8e97849b7e6813b3e6f9fe83c..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddpm.py
+++ /dev/null
@@ -1,1797 +0,0 @@
-"""
-wild mixture of
-https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
-https://github.com/CompVis/taming-transformers
--- merci
-"""
-
-import torch
-import torch.nn as nn
-import numpy as np
-import pytorch_lightning as pl
-from torch.optim.lr_scheduler import LambdaLR
-from einops import rearrange, repeat
-from contextlib import contextmanager, nullcontext
-from functools import partial
-import itertools
-from tqdm import tqdm
-from torchvision.utils import make_grid
-from pytorch_lightning.utilities.distributed import rank_zero_only
-from omegaconf import ListConfig
-
-from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
-from ldm.modules.ema import LitEma
-from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
-from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL
-from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from ldm.models.diffusion.ddim import DDIMSampler
-
-
-__conditioning_keys__ = {'concat': 'c_concat',
- 'crossattn': 'c_crossattn',
- 'adm': 'y'}
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-def uniform_on_device(r1, r2, shape, device):
- return (r1 - r2) * torch.rand(*shape, device=device) + r2
-
-
-class DDPM(pl.LightningModule):
- # classic DDPM with Gaussian diffusion, in image space
- def __init__(self,
- unet_config,
- timesteps=1000,
- beta_schedule="linear",
- loss_type="l2",
- ckpt_path=None,
- ignore_keys=[],
- load_only_unet=False,
- monitor="val/loss",
- use_ema=True,
- first_stage_key="image",
- image_size=256,
- channels=3,
- log_every_t=100,
- clip_denoised=True,
- linear_start=1e-4,
- linear_end=2e-2,
- cosine_s=8e-3,
- given_betas=None,
- original_elbo_weight=0.,
- v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
- l_simple_weight=1.,
- conditioning_key=None,
- parameterization="eps", # all assuming fixed variance schedules
- scheduler_config=None,
- use_positional_encodings=False,
- learn_logvar=False,
- logvar_init=0.,
- make_it_fit=False,
- ucg_training=None,
- reset_ema=False,
- reset_num_ema_updates=False,
- ):
- super().__init__()
- assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"'
- self.parameterization = parameterization
- print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
- self.cond_stage_model = None
- self.clip_denoised = clip_denoised
- self.log_every_t = log_every_t
- self.first_stage_key = first_stage_key
- self.image_size = image_size # try conv?
- self.channels = channels
- self.use_positional_encodings = use_positional_encodings
- self.model = DiffusionWrapper(unet_config, conditioning_key)
- count_params(self.model, verbose=True)
- self.use_ema = use_ema
- if self.use_ema:
- self.model_ema = LitEma(self.model)
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
-
- self.use_scheduler = scheduler_config is not None
- if self.use_scheduler:
- self.scheduler_config = scheduler_config
-
- self.v_posterior = v_posterior
- self.original_elbo_weight = original_elbo_weight
- self.l_simple_weight = l_simple_weight
-
- if monitor is not None:
- self.monitor = monitor
- self.make_it_fit = make_it_fit
- if reset_ema: assert exists(ckpt_path)
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)
- if reset_ema:
- assert self.use_ema
- print(f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,
- linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
-
- self.loss_type = loss_type
-
- self.learn_logvar = learn_logvar
- logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))
- if self.learn_logvar:
-            self.logvar = nn.Parameter(logvar, requires_grad=True)
- else:
- self.register_buffer('logvar', logvar)
-
- self.ucg_training = ucg_training or dict()
- if self.ucg_training:
- self.ucg_prng = np.random.RandomState()
-
- def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if exists(given_betas):
- betas = given_betas
- else:
- betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,
- cosine_s=cosine_s)
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.linear_start = linear_start
- self.linear_end = linear_end
- assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (
- 1. - alphas_cumprod) + self.v_posterior * betas
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- if self.parameterization == "eps":
- lvlb_weights = self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))
- elif self.parameterization == "x0":
- lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))
- elif self.parameterization == "v":
- lvlb_weights = torch.ones_like(self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)))
- else:
- raise NotImplementedError("mu not supported")
- lvlb_weights[0] = lvlb_weights[1]
- self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)
- assert not torch.isnan(self.lvlb_weights).all()
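
`register_schedule` above turns a beta schedule into every closed-form quantity the diffusion math needs. The standalone sketch below reproduces the core arithmetic for a toy 5-step schedule (a plain linspace of betas is used for illustration; `make_beta_schedule`'s "linear" option actually interpolates between the square roots of the endpoints and squares the result):

```python
import numpy as np

timesteps = 5
betas = np.linspace(1e-4, 2e-2, timesteps)  # toy schedule between the default linear_start/linear_end

alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)
alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1])

# Forward-process coefficients: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
sqrt_alphas_cumprod = np.sqrt(alphas_cumprod)
sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - alphas_cumprod)

# Posterior variance of q(x_{t-1} | x_t, x_0), with v_posterior = 0 as in the default
posterior_variance = betas * (1.0 - alphas_cumprod_prev) / (1.0 - alphas_cumprod)

print(posterior_variance)  # the first entry is 0, which is why the log above is clipped to 1e-20
```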
-
- @contextmanager
- def ema_scope(self, context=None):
- if self.use_ema:
- self.model_ema.store(self.model.parameters())
- self.model_ema.copy_to(self.model)
- if context is not None:
- print(f"{context}: Switched to EMA weights")
- try:
- yield None
- finally:
- if self.use_ema:
- self.model_ema.restore(self.model.parameters())
- if context is not None:
- print(f"{context}: Restored training weights")
-
- @torch.no_grad()
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- if self.make_it_fit:
- n_params = len([name for name, _ in
- itertools.chain(self.named_parameters(),
- self.named_buffers())])
- for name, param in tqdm(
- itertools.chain(self.named_parameters(),
- self.named_buffers()),
- desc="Fitting old weights to new weights",
- total=n_params
- ):
- if not name in sd:
- continue
- old_shape = sd[name].shape
- new_shape = param.shape
- assert len(old_shape) == len(new_shape)
- if len(new_shape) > 2:
- # we only modify first two axes
- assert new_shape[2:] == old_shape[2:]
- # assumes first axis corresponds to output dim
- if not new_shape == old_shape:
- new_param = param.clone()
- old_param = sd[name]
- if len(new_shape) == 1:
- for i in range(new_param.shape[0]):
- new_param[i] = old_param[i % old_shape[0]]
- elif len(new_shape) >= 2:
- for i in range(new_param.shape[0]):
- for j in range(new_param.shape[1]):
- new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]]
-
- n_used_old = torch.ones(old_shape[1])
- for j in range(new_param.shape[1]):
- n_used_old[j % old_shape[1]] += 1
- n_used_new = torch.zeros(new_shape[1])
- for j in range(new_param.shape[1]):
- n_used_new[j] = n_used_old[j % old_shape[1]]
-
- n_used_new = n_used_new[None, :]
- while len(n_used_new.shape) < len(new_shape):
- n_used_new = n_used_new.unsqueeze(-1)
- new_param /= n_used_new
-
- sd[name] = new_param
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys:\n {missing}")
- if len(unexpected) > 0:
- print(f"\nUnexpected Keys:\n {unexpected}")
-
- def q_mean_variance(self, x_start, t):
- """
- Get the distribution q(x_t | x_0).
- :param x_start: the [N x C x ...] tensor of noiseless inputs.
- :param t: the number of diffusion steps (minus 1). Here, 0 means one step.
- :return: A tuple (mean, variance, log_variance), all of x_start's shape.
- """
- mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)
- variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def predict_start_from_z_and_v(self, x_t, t, v):
- # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v
- )
-
- def predict_eps_from_z_and_v(self, x_t, t, v):
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t
- )
-
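- # The two helpers above invert the v-parameterization v = sqrt(a_t) * eps - sqrt(1 - a_t) * x0
- # (with a_t = alphas_cumprod[t]; see get_v below). Substituting x_t = sqrt(a_t) * x0 + sqrt(1 - a_t) * eps
- # gives the identities implemented here:
- #   x0  = sqrt(a_t) * x_t - sqrt(1 - a_t) * v
- #   eps = sqrt(a_t) * v + sqrt(1 - a_t) * x_t
-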
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, clip_denoised: bool):
- model_out = self.model(x, t)
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
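- # In other words, each reverse step draws x_{t-1} = model_mean + exp(0.5 * model_log_variance) * z
- # with z ~ N(0, I), except at t == 0 where nonzero_mask zeroes the noise term and the posterior
- # mean is returned directly.
-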
- @torch.no_grad()
- def p_sample_loop(self, shape, return_intermediates=False):
- device = self.betas.device
- b = shape[0]
- img = torch.randn(shape, device=device)
- intermediates = [img]
- for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
- img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
- clip_denoised=self.clip_denoised)
- if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
- intermediates.append(img)
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, batch_size=16, return_intermediates=False):
- image_size = self.image_size
- channels = self.channels
- return self.p_sample_loop((batch_size, channels, image_size, image_size),
- return_intermediates=return_intermediates)
-
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
-
- def get_v(self, x, noise, t):
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x
- )
-
- def get_loss(self, pred, target, mean=True):
- if self.loss_type == 'l1':
- loss = (target - pred).abs()
- if mean:
- loss = loss.mean()
- elif self.loss_type == 'l2':
- if mean:
- loss = torch.nn.functional.mse_loss(target, pred)
- else:
- loss = torch.nn.functional.mse_loss(target, pred, reduction='none')
- else:
- raise NotImplementedError("unknown loss type '{loss_type}'")
-
- return loss
-
- def p_losses(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_out = self.model(x_noisy, t)
-
- loss_dict = {}
- if self.parameterization == "eps":
- target = noise
- elif self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
-
- loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
-
- log_prefix = 'train' if self.training else 'val'
-
- loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})
- loss_simple = loss.mean() * self.l_simple_weight
-
- loss_vlb = (self.lvlb_weights[t] * loss).mean()
- loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})
-
- loss = loss_simple + self.original_elbo_weight * loss_vlb
-
- loss_dict.update({f'{log_prefix}/loss': loss})
-
- return loss, loss_dict
-
- def forward(self, x, *args, **kwargs):
- # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size
- # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- return self.p_losses(x, t, *args, **kwargs)
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = rearrange(x, 'b h w c -> b c h w')
- x = x.to(memory_format=torch.contiguous_format).float()
- return x
-
- def shared_step(self, batch):
- x = self.get_input(batch, self.first_stage_key)
- loss, loss_dict = self(x)
- return loss, loss_dict
-
- def training_step(self, batch, batch_idx):
- for k in self.ucg_training:
- p = self.ucg_training[k]["p"]
- val = self.ucg_training[k]["val"]
- if val is None:
- val = ""
- for i in range(len(batch[k])):
- if self.ucg_prng.choice(2, p=[1 - p, p]):
- batch[k][i] = val
-
- loss, loss_dict = self.shared_step(batch)
-
- self.log_dict(loss_dict, prog_bar=True,
- logger=True, on_step=True, on_epoch=True)
-
- self.log("global_step", self.global_step,
- prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- if self.use_scheduler:
- lr = self.optimizers().param_groups[0]['lr']
- self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- return loss
-
- @torch.no_grad()
- def validation_step(self, batch, batch_idx):
- _, loss_dict_no_ema = self.shared_step(batch)
- with self.ema_scope():
- _, loss_dict_ema = self.shared_step(batch)
- loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
- self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
- self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
-
- def on_train_batch_end(self, *args, **kwargs):
- if self.use_ema:
- self.model_ema(self.model)
-
- def _get_rows_from_list(self, samples):
- n_imgs_per_row = len(samples)
- denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):
- log = dict()
- x = self.get_input(batch, self.first_stage_key)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- x = x.to(self.device)[:N]
- log["inputs"] = x
-
- # get diffusion row
- diffusion_row = list()
- x_start = x[:n_row]
-
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(x_start)
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- diffusion_row.append(x_noisy)
-
- log["diffusion_row"] = self._get_rows_from_list(diffusion_row)
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
- samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)
-
- log["samples"] = samples
- log["denoise_row"] = self._get_rows_from_list(denoise_row)
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.learn_logvar:
- params = params + [self.logvar]
- opt = torch.optim.AdamW(params, lr=lr)
- return opt
-
-
-class LatentDiffusion(DDPM):
- """main class"""
-
- def __init__(self,
- first_stage_config,
- cond_stage_config,
- num_timesteps_cond=None,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- force_null_conditioning=False,
- *args, **kwargs):
- self.force_null_conditioning = force_null_conditioning
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs['timesteps']
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = 'concat' if concat_mode else 'crossattn'
- if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning:
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- reset_ema = kwargs.pop("reset_ema", False)
- reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
- except Exception:
- self.num_downs = 0
- if not scale_by_std:
- self.scale_factor = scale_factor
- else:
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
- self.instantiate_first_stage(first_stage_config)
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
- self.bbox_tokenizer = None
-
- self.restarted_from_ckpt = False
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys)
- self.restarted_from_ckpt = True
- if reset_ema:
- assert self.use_ema
- print(
- f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- def make_cond_schedule(self, ):
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
- self.cond_ids[:self.num_timesteps_cond] = ids
-
- @rank_zero_only
- @torch.no_grad()
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
- # only for very first batch
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
- # set rescale weight to 1./std of encodings
- print("### USING STD-RESCALING ###")
- x = super().get_input(batch, self.first_stage_key)
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- del self.scale_factor
- self.register_buffer('scale_factor', 1. / z.flatten().std())
- print(f"setting self.scale_factor to {self.scale_factor}")
- print("### USING STD-RESCALING ###")
-
- def register_schedule(self,
- given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config):
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != '__is_first_stage__'
- assert config != '__is_unconditional__'
- model = instantiate_from_config(config)
- self.cond_stage_model = model
-
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
- denoise_row = []
- for zd in tqdm(samples, desc=desc):
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
- force_not_quantize=force_no_decoder_quantization))
- n_imgs_per_row = len(denoise_row)
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- def get_first_stage_encoding(self, encoder_posterior):
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
- return self.scale_factor * z
-
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
- def meshgrid(self, h, w):
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
-
- arr = torch.cat([y, x], dim=-1)
- return arr
-
- def delta_border(self, h, w):
- """
- :param h: height
- :param w: width
- :return: normalized distance to image border,
- with min distance = 0 at the border and max distance = 0.5 at the image center
- """
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
- arr = self.meshgrid(h, w) / lower_right_corner
- dist_left_up = torch.min(arr, dim=-1, keepdim=True)[0]
- dist_right_down = torch.min(1 - arr, dim=-1, keepdim=True)[0]
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
- return edge_dist
-
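- # Equivalently, for pixel (i, j) this returns min(i / (h - 1), j / (w - 1), 1 - i / (h - 1), 1 - j / (w - 1)).
- # Quick sketch with illustrative sizes: self.delta_border(5, 5) has shape (5, 5), is 0 along the
- # borders and 0.5 at the center pixel (2, 2).
-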
- def get_weighting(self, h, w, Ly, Lx, device):
- weighting = self.delta_border(h, w)
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
- self.split_input_params["clip_max_weight"], )
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
-
- if self.split_input_params["tie_braker"]:
- L_weighting = self.delta_border(Ly, Lx)
- L_weighting = torch.clip(L_weighting,
- self.split_input_params["clip_min_tie_weight"],
- self.split_input_params["clip_max_tie_weight"])
-
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
- weighting = weighting * L_weighting
- return weighting
-
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
- """
- :param x: img of size (bs, c, h, w)
- :return: fold, unfold, normalization, weighting for splitting x into overlapping crops of size kernel_size and re-assembling them
- """
- bs, nc, h, w = x.shape
-
- # number of crops in image
- Ly = (h - kernel_size[0]) // stride[0] + 1
- Lx = (w - kernel_size[1]) // stride[1] + 1
-
- if uf == 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
-
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
-
- elif uf > 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[1] * uf),
- dilation=1, padding=0,
- stride=(stride[0] * uf, stride[1] * uf))
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
-
- elif df > 1 and uf == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[1] // df),
- dilation=1, padding=0,
- stride=(stride[0] // df, stride[1] // df))
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
-
- else:
- raise NotImplementedError
-
- return fold, unfold, normalization, weighting
-
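- # Minimal standalone sketch (illustrative sizes only) of the overlap bookkeeping above: fold()
- # sums overlapping crops, so dividing by a folded weighting (the role of `normalization`)
- # undoes the overlap. With uniform weights the idea reduces to:
- #   x = torch.randn(1, 1, 8, 8)
- #   unfold = torch.nn.Unfold(kernel_size=(4, 4), stride=(2, 2))
- #   fold = torch.nn.Fold(output_size=(8, 8), kernel_size=(4, 4), stride=(2, 2))
- #   crops = unfold(x)                        # (1, 16, 9): 3 x 3 overlapping 4x4 crops
- #   norm = fold(unfold(torch.ones_like(x)))  # per-pixel overlap count
- #   assert torch.allclose(fold(crops) / norm, x)
-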
- @torch.no_grad()
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
- cond_key=None, return_original_cond=False, bs=None, return_x=False):
- x = super().get_input(batch, k)
- if bs is not None:
- x = x[:bs]
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
-
- if self.model.conditioning_key is not None and not self.force_null_conditioning:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ['caption', 'coordinates_bbox', "txt"]:
- xc = batch[cond_key]
- elif cond_key in ['class_label', 'cls']:
- xc = batch
- else:
- xc = super().get_input(batch, cond_key).to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
- if bs is not None:
- c = c[:bs]
-
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- ckey = __conditioning_keys__[self.model.conditioning_key]
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {'pos_x': pos_x, 'pos_y': pos_y}
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_x:
- out.extend([x])
- if return_original_cond:
- out.append(xc)
- return out
-
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
- return self.first_stage_model.decode(z)
-
- @torch.no_grad()
- def encode_first_stage(self, x):
- return self.first_stage_model.encode(x)
-
- def shared_step(self, batch, **kwargs):
- x, c = self.get_input(batch, self.first_stage_key)
- loss = self(x, c)
- return loss
-
- def forward(self, x, c, *args, **kwargs):
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- if self.model.conditioning_key is not None:
- assert c is not None
- if self.cond_stage_trainable:
- c = self.get_learned_conditioning(c)
- if self.shorten_cond_schedule: # TODO: drop this option
- tc = self.cond_ids[t].to(self.device)
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
- return self.p_losses(x, c, t, *args, **kwargs)
-
- def apply_model(self, x_noisy, t, cond, return_ids=False):
- if isinstance(cond, dict):
- # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond]
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
- cond = {key: cond}
-
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
-
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
-
- def _prior_bpd(self, x_start):
- """
- Get the prior KL term for the variational lower-bound, measured in
- bits-per-dim.
- This term can't be optimized, as it only depends on the encoder.
- :param x_start: the [N x C x ...] tensor of inputs.
- :return: a batch of [N] KL values (in bits), one per batch element.
- """
- batch_size = x_start.shape[0]
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
- return mean_flat(kl_prior) / np.log(2.0)
-
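- # With mean2 = 0 and logvar2 = 0 this is the standard closed form per dimension,
- #   KL(N(mu, sigma^2) || N(0, 1)) = 0.5 * (mu^2 + sigma^2 - 1 - log(sigma^2)),
- # and dividing by log(2) converts nats to bits, hence "bits per dim".
-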
- def p_losses(self, x_start, cond, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_output = self.apply_model(x_noisy, t, cond)
-
- loss_dict = {}
- prefix = 'train' if self.training else 'val'
-
- if self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "eps":
- target = noise
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError()
-
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
-
- logvar_t = self.logvar[t].to(self.device)
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
- if self.learn_logvar:
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
- loss_dict.update({'logvar': self.logvar.data.mean()})
-
- loss = self.l_simple_weight * loss.mean()
-
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
- loss += (self.original_elbo_weight * loss_vlb)
- loss_dict.update({f'{prefix}/loss': loss})
-
- return loss, loss_dict
-
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
- return_x0=False, score_corrector=None, corrector_kwargs=None):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
- log_every_t=None):
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
- total=timesteps) if verbose else reversed(
- range(0, timesteps))
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised, return_x0=True,
- temperature=temperature[i], noise_dropout=noise_dropout,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(self, cond, shape, return_intermediates=False,
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, start_T=None,
- log_every_t=None):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
- range(0, timesteps))
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:] == mask.shape[2:] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised)
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
- verbose=True, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, shape=None, **kwargs):
- if shape is None:
- shape = (batch_size, self.channels, self.image_size, self.image_size)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
- return self.p_sample_loop(cond,
- shape,
- return_intermediates=return_intermediates, x_T=x_T,
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
- mask=mask, x0=x0)
-
- @torch.no_grad()
- def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):
- if ddim:
- ddim_sampler = DDIMSampler(self)
- shape = (self.channels, self.image_size, self.image_size)
- samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size,
- shape, cond, verbose=False, **kwargs)
-
- else:
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
- return_intermediates=True, **kwargs)
-
- return samples, intermediates
-
- @torch.no_grad()
- def get_unconditional_conditioning(self, batch_size, null_label=None):
- if null_label is not None:
- xc = null_label
- if isinstance(xc, ListConfig):
- xc = list(xc)
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- if hasattr(xc, "to"):
- xc = xc.to(self.device)
- c = self.get_learned_conditioning(xc)
- else:
- if self.cond_stage_key in ["class_label", "cls"]:
- xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device)
- return self.get_learned_conditioning(xc)
- else:
- raise NotImplementedError("todo")
- if isinstance(c, list): # in case the encoder gives us a list
- for i in range(len(c)):
- c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device)
- else:
- c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device)
- return c
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
- return_first_stage_outputs=True,
- force_c_encode=True,
- return_original_cond=True,
- bs=N)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', "cls"]:
- try:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- except KeyError:
- # probably no "human_label" in batch
- pass
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
- self.first_stage_model, IdentityFirstStage):
- # also display when quantizing x0 while sampling
- with ema_scope("Plotting Quantized Denoised"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- quantize_denoised=True)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
- # quantize_denoised=True)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_x0_quantized"] = x_samples
-
- if unconditional_guidance_scale > 1.0:
- uc = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- if self.model.conditioning_key == "crossattn-adm":
- uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if inpaint:
- # make a simple center square
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
- mask = torch.ones(N, h, w).to(self.device)
- # zeros will be filled in
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
- mask = mask[:, None, ...]
- with ema_scope("Plotting Inpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_inpainting"] = x_samples
- log["mask"] = mask
-
- # outpaint
- mask = 1. - mask
- with ema_scope("Plotting Outpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_outpainting"] = x_samples
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.cond_stage_trainable:
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
- params = params + list(self.cond_stage_model.parameters())
- if self.learn_logvar:
- print('Diffusion model optimizing logvar')
- params.append(self.logvar)
- opt = torch.optim.AdamW(params, lr=lr)
- if self.use_scheduler:
- assert 'target' in self.scheduler_config
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [opt], scheduler
- return opt
-
- @torch.no_grad()
- def to_rgb(self, x):
- x = x.float()
- if not hasattr(self, "colorize"):
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
- x = nn.functional.conv2d(x, weight=self.colorize)
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
- return x
-
-
-class DiffusionWrapper(pl.LightningModule):
- def __init__(self, diff_model_config, conditioning_key):
- super().__init__()
- self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False)
- self.diffusion_model = instantiate_from_config(diff_model_config)
- self.conditioning_key = conditioning_key
- assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm']
-
- def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None):
- if self.conditioning_key is None:
- out = self.diffusion_model(x, t)
- elif self.conditioning_key == 'concat':
- xc = torch.cat([x] + c_concat, dim=1)
- out = self.diffusion_model(xc, t)
- elif self.conditioning_key == 'crossattn':
- if not self.sequential_cross_attn:
- cc = torch.cat(c_crossattn, 1)
- else:
- cc = c_crossattn
- out = self.diffusion_model(x, t, context=cc)
- elif self.conditioning_key == 'hybrid':
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc)
- elif self.conditioning_key == 'hybrid-adm':
- assert c_adm is not None
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'crossattn-adm':
- assert c_adm is not None
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(x, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'adm':
- cc = c_crossattn[0]
- out = self.diffusion_model(x, t, y=cc)
- else:
- raise NotImplementedError()
-
- return out
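-
- # Usage sketch (the variable names below are illustrative): apply_model packs conditionings into
- # this keyword format, e.g. for conditioning_key == 'crossattn'
- #   model_out = wrapper(x_noisy, t, c_crossattn=[text_emb])
- # and for 'hybrid' (channel concat plus cross-attention)
- #   model_out = wrapper(x_noisy, t, c_concat=[concat_latents], c_crossattn=[text_emb])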
-
-
-class LatentUpscaleDiffusion(LatentDiffusion):
- def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs):
- super().__init__(*args, **kwargs)
- # assumes that neither the cond_stage nor the low_scale_model contain trainable params
- assert not self.cond_stage_trainable
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
- self.noise_level_key = noise_level_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False):
- if not log_mode:
- z, c = super().get_input(batch, k, force_c_encode=True, bs=bs)
- else:
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
- x_low = batch[self.low_scale_key][:bs]
- x_low = rearrange(x_low, 'b h w c -> b c h w')
- x_low = x_low.to(memory_format=torch.contiguous_format).float()
- zx, noise_level = self.low_scale_model(x_low)
- if self.noise_level_key is not None:
- # get noise level from batch instead, e.g. when extracting a custom noise level for bsr
- raise NotImplementedError('TODO')
-
- all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level}
- if log_mode:
- # TODO: maybe disable if too expensive
- x_low_rec = self.low_scale_model.decode(zx)
- return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True,
- unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N,
- log_mode=True)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- log["x_lr"] = x_low
- log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- # TODO explore better "unconditional" choices for the other keys
- # maybe guide away from empty text label and highest noise level and maximally degraded zx?
- uc = dict()
- for k in c:
- if k == "c_crossattn":
- assert isinstance(c[k], list) and len(c[k]) == 1
- uc[k] = [uc_tmp]
- elif k == "c_adm": # todo: only run with text-based guidance?
- assert isinstance(c[k], torch.Tensor)
- #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level
- uc[k] = c[k]
- elif isinstance(c[k], list):
- uc[k] = [c[k][i] for i in range(len(c[k]))]
- else:
- uc[k] = c[k]
-
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- return log
-
-
-class LatentFinetuneDiffusion(LatentDiffusion):
- """
- Base class for different finetunes, such as inpainting or depth2image.
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys: tuple,
- finetune_keys=("model.diffusion_model.input_blocks.0.0.weight",
- "model_ema.diffusion_modelinput_blocks00weight"
- ),
- keep_finetune_dims=4,
- # if model was trained without concat mode before and we would like to keep these channels
- c_concat_log_start=None, # to log reconstruction of c_concat codes
- c_concat_log_end=None,
- *args, **kwargs
- ):
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", list())
- super().__init__(*args, **kwargs)
- self.finetune_keys = finetune_keys
- self.concat_keys = concat_keys
- self.keep_dims = keep_finetune_dims
- self.c_concat_log_start = c_concat_log_start
- self.c_concat_log_end = c_concat_log_end
- if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint'
- if exists(ckpt_path):
- self.init_from_ckpt(ckpt_path, ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
-
- # make it explicit, finetune by including extra input channels
- if exists(self.finetune_keys) and k in self.finetune_keys:
- new_entry = None
- for name, param in self.named_parameters():
- if name in self.finetune_keys:
- print(
- f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only")
- new_entry = torch.zeros_like(param) # zero init
- assert exists(new_entry), 'did not find matching parameter to modify'
- new_entry[:, :self.keep_dims, ...] = sd[k]
- sd[k] = new_entry
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True)
- c_cat, c = c["c_concat"][0], c["c_crossattn"][0]
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if not (self.c_concat_log_start is None and self.c_concat_log_end is None):
- log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end])
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- uc_cat = c_cat
- uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc_full,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- return log
-
-
-class LatentInpaintDiffusion(LatentFinetuneDiffusion):
- """
- can either run as pure inpainting model (only concat mode) or with mixed conditionings,
- e.g. mask as concat and text via cross-attn.
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys=("mask", "masked_image"),
- masked_image_key="masked_image",
- *args, **kwargs
- ):
- super().__init__(concat_keys, *args, **kwargs)
- self.masked_image_key = masked_image_key
- assert self.masked_image_key in concat_keys
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- c_cat = list()
- for ck in self.concat_keys:
- cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- bchw = z.shape
- if ck != self.masked_image_key:
- cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])
- else:
- cc = self.get_first_stage_encoding(self.encode_first_stage(cc))
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
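- # Sketch of the resulting conditioning, with illustrative sizes assuming the common 8x-downsampling,
- # 4-channel autoencoder and a single-channel mask: the mask is resized to the latent resolution
- # (b, 1, h/8, w/8), the masked image is encoded to (b, 4, h/8, w/8), and their channel-wise concat
- # (b, 5, h/8, w/8) is passed as "c_concat" alongside the cross-attention conditioning.
-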
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs)
- log["masked_image"] = rearrange(args[0]["masked_image"],
- 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- return log
-
-
-class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion):
- """
- condition on monocular depth estimation
- """
-
- def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.depth_model = instantiate_from_config(depth_stage_config)
- self.depth_stage_key = concat_keys[0]
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- c_cat = list()
- for ck in self.concat_keys:
- cc = batch[ck]
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- cc = self.depth_model(cc)
- cc = torch.nn.functional.interpolate(
- cc,
- size=z.shape[2:],
- mode="bicubic",
- align_corners=False,
- )
-
- depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3],
- keepdim=True)
- cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1.
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- depth = self.depth_model(args[0][self.depth_stage_key])
- depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \
- torch.amax(depth, dim=[1, 2, 3], keepdim=True)
- log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1.
- return log
-
-
-class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion):
- """
- condition on low-res image (and optionally on some spatial noise augmentation)
- """
- def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None,
- low_scale_config=None, low_scale_key=None, *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.reshuffle_patch_size = reshuffle_patch_size
- self.low_scale_model = None
- if low_scale_config is not None:
- print("Initializing a low-scale model")
- assert exists(low_scale_key)
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- # optionally make spatial noise_level here
- c_cat = list()
- noise_level = None
- for ck in self.concat_keys:
- cc = batch[ck]
- cc = rearrange(cc, 'b h w c -> b c h w')
- if exists(self.reshuffle_patch_size):
- assert isinstance(self.reshuffle_patch_size, int)
- cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w',
- p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size)
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- if exists(self.low_scale_model) and ck == self.low_scale_key:
- cc, noise_level = self.low_scale_model(cc)
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- if exists(noise_level):
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level}
- else:
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w')
- return log
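
The depth-conditioned variant above builds its `c_concat` by resizing the predicted depth map to the latent resolution and min/max-normalizing each sample into [-1, 1] (the upscale variant instead concatenates the rearranged low-res image, optionally with a noise level). A minimal, self-contained sketch of that normalization step, assuming only torch; the function name, shapes, and latent grid size are illustrative, not part of the original module:

```python
import torch
import torch.nn.functional as F

def normalize_cond_map(cc: torch.Tensor, latent_hw, eps: float = 0.001) -> torch.Tensor:
    """Resize a B x C x H x W conditioning map to the latent spatial size and
    min/max-rescale each sample to [-1, 1], as the depth branch above does."""
    cc = F.interpolate(cc, size=latent_hw, mode="bicubic", align_corners=False)
    cc_min = torch.amin(cc, dim=[1, 2, 3], keepdim=True)
    cc_max = torch.amax(cc, dim=[1, 2, 3], keepdim=True)
    return 2.0 * (cc - cc_min) / (cc_max - cc_min + eps) - 1.0

if __name__ == "__main__":
    depth = torch.rand(2, 1, 384, 384)          # stand-in for a predicted depth map
    cond = normalize_cond_map(depth, (48, 48))  # latent grid size chosen for illustration
    print(cond.shape, float(cond.min()), float(cond.max()))
```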
diff --git a/spaces/Arthur678/vits-uma-genshin-honkai/README.md b/spaces/Arthur678/vits-uma-genshin-honkai/README.md
deleted file mode 100644
index 1c0aa069bfd980b6b45bb2bf62ff74bd9b0b61c2..0000000000000000000000000000000000000000
--- a/spaces/Arthur678/vits-uma-genshin-honkai/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-license: apache-2.0
-title: ' vits-uma-genshin-honkai'
-sdk: gradio
-sdk_version: 3.7
-emoji: 🐨
-colorTo: yellow
-pinned: false
-app_file: app.py
-duplicated_from: ikechan8370/vits-uma-genshin-honkai
----
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/ansitowin32_test.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/ansitowin32_test.py
deleted file mode 100644
index 91ca551f97b4576c680711e826a1855fb944c872..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/ansitowin32_test.py
+++ /dev/null
@@ -1,294 +0,0 @@
-# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
-from io import StringIO, TextIOWrapper
-from unittest import TestCase, main
-try:
- from contextlib import ExitStack
-except ImportError:
- # python 2
- from contextlib2 import ExitStack
-
-try:
- from unittest.mock import MagicMock, Mock, patch
-except ImportError:
- from mock import MagicMock, Mock, patch
-
-from ..ansitowin32 import AnsiToWin32, StreamWrapper
-from ..win32 import ENABLE_VIRTUAL_TERMINAL_PROCESSING
-from .utils import osname
-
-
-class StreamWrapperTest(TestCase):
-
- def testIsAProxy(self):
- mockStream = Mock()
- wrapper = StreamWrapper(mockStream, None)
- self.assertTrue( wrapper.random_attr is mockStream.random_attr )
-
- def testDelegatesWrite(self):
- mockStream = Mock()
- mockConverter = Mock()
- wrapper = StreamWrapper(mockStream, mockConverter)
- wrapper.write('hello')
-        self.assertEqual(mockConverter.write.call_args, (('hello',), {}))
-
- def testDelegatesContext(self):
- mockConverter = Mock()
- s = StringIO()
- with StreamWrapper(s, mockConverter) as fp:
- fp.write(u'hello')
- self.assertTrue(s.closed)
-
- def testProxyNoContextManager(self):
- mockStream = MagicMock()
- mockStream.__enter__.side_effect = AttributeError()
- mockConverter = Mock()
- with self.assertRaises(AttributeError) as excinfo:
- with StreamWrapper(mockStream, mockConverter) as wrapper:
- wrapper.write('hello')
-
- def test_closed_shouldnt_raise_on_closed_stream(self):
- stream = StringIO()
- stream.close()
- wrapper = StreamWrapper(stream, None)
- self.assertEqual(wrapper.closed, True)
-
- def test_closed_shouldnt_raise_on_detached_stream(self):
- stream = TextIOWrapper(StringIO())
- stream.detach()
- wrapper = StreamWrapper(stream, None)
- self.assertEqual(wrapper.closed, True)
-
-class AnsiToWin32Test(TestCase):
-
- def testInit(self):
- mockStdout = Mock()
- auto = Mock()
- stream = AnsiToWin32(mockStdout, autoreset=auto)
- self.assertEqual(stream.wrapped, mockStdout)
- self.assertEqual(stream.autoreset, auto)
-
- @patch('colorama.ansitowin32.winterm', None)
- @patch('colorama.ansitowin32.winapi_test', lambda *_: True)
- def testStripIsTrueOnWindows(self):
- with osname('nt'):
- mockStdout = Mock()
- stream = AnsiToWin32(mockStdout)
- self.assertTrue(stream.strip)
-
- def testStripIsFalseOffWindows(self):
- with osname('posix'):
- mockStdout = Mock(closed=False)
- stream = AnsiToWin32(mockStdout)
- self.assertFalse(stream.strip)
-
- def testWriteStripsAnsi(self):
- mockStdout = Mock()
- stream = AnsiToWin32(mockStdout)
- stream.wrapped = Mock()
- stream.write_and_convert = Mock()
- stream.strip = True
-
- stream.write('abc')
-
- self.assertFalse(stream.wrapped.write.called)
- self.assertEqual(stream.write_and_convert.call_args, (('abc',), {}))
-
- def testWriteDoesNotStripAnsi(self):
- mockStdout = Mock()
- stream = AnsiToWin32(mockStdout)
- stream.wrapped = Mock()
- stream.write_and_convert = Mock()
- stream.strip = False
- stream.convert = False
-
- stream.write('abc')
-
- self.assertFalse(stream.write_and_convert.called)
- self.assertEqual(stream.wrapped.write.call_args, (('abc',), {}))
-
- def assert_autoresets(self, convert, autoreset=True):
- stream = AnsiToWin32(Mock())
- stream.convert = convert
- stream.reset_all = Mock()
- stream.autoreset = autoreset
- stream.winterm = Mock()
-
- stream.write('abc')
-
- self.assertEqual(stream.reset_all.called, autoreset)
-
- def testWriteAutoresets(self):
- self.assert_autoresets(convert=True)
- self.assert_autoresets(convert=False)
- self.assert_autoresets(convert=True, autoreset=False)
- self.assert_autoresets(convert=False, autoreset=False)
-
- def testWriteAndConvertWritesPlainText(self):
- stream = AnsiToWin32(Mock())
- stream.write_and_convert( 'abc' )
- self.assertEqual( stream.wrapped.write.call_args, (('abc',), {}) )
-
- def testWriteAndConvertStripsAllValidAnsi(self):
- stream = AnsiToWin32(Mock())
- stream.call_win32 = Mock()
- data = [
- 'abc\033[mdef',
- 'abc\033[0mdef',
- 'abc\033[2mdef',
- 'abc\033[02mdef',
- 'abc\033[002mdef',
- 'abc\033[40mdef',
- 'abc\033[040mdef',
- 'abc\033[0;1mdef',
- 'abc\033[40;50mdef',
- 'abc\033[50;30;40mdef',
- 'abc\033[Adef',
- 'abc\033[0Gdef',
- 'abc\033[1;20;128Hdef',
- ]
- for datum in data:
- stream.wrapped.write.reset_mock()
- stream.write_and_convert( datum )
- self.assertEqual(
- [args[0] for args in stream.wrapped.write.call_args_list],
- [ ('abc',), ('def',) ]
- )
-
- def testWriteAndConvertSkipsEmptySnippets(self):
- stream = AnsiToWin32(Mock())
- stream.call_win32 = Mock()
- stream.write_and_convert( '\033[40m\033[41m' )
- self.assertFalse( stream.wrapped.write.called )
-
- def testWriteAndConvertCallsWin32WithParamsAndCommand(self):
- stream = AnsiToWin32(Mock())
- stream.convert = True
- stream.call_win32 = Mock()
- stream.extract_params = Mock(return_value='params')
- data = {
- 'abc\033[adef': ('a', 'params'),
- 'abc\033[;;bdef': ('b', 'params'),
- 'abc\033[0cdef': ('c', 'params'),
- 'abc\033[;;0;;Gdef': ('G', 'params'),
- 'abc\033[1;20;128Hdef': ('H', 'params'),
- }
- for datum, expected in data.items():
- stream.call_win32.reset_mock()
- stream.write_and_convert( datum )
- self.assertEqual( stream.call_win32.call_args[0], expected )
-
- def test_reset_all_shouldnt_raise_on_closed_orig_stdout(self):
- stream = StringIO()
- converter = AnsiToWin32(stream)
- stream.close()
-
- converter.reset_all()
-
- def test_wrap_shouldnt_raise_on_closed_orig_stdout(self):
- stream = StringIO()
- stream.close()
- with \
- patch("colorama.ansitowin32.os.name", "nt"), \
- patch("colorama.ansitowin32.winapi_test", lambda: True):
- converter = AnsiToWin32(stream)
- self.assertTrue(converter.strip)
- self.assertFalse(converter.convert)
-
- def test_wrap_shouldnt_raise_on_missing_closed_attr(self):
- with \
- patch("colorama.ansitowin32.os.name", "nt"), \
- patch("colorama.ansitowin32.winapi_test", lambda: True):
- converter = AnsiToWin32(object())
- self.assertTrue(converter.strip)
- self.assertFalse(converter.convert)
-
- def testExtractParams(self):
- stream = AnsiToWin32(Mock())
- data = {
- '': (0,),
- ';;': (0,),
- '2': (2,),
- ';;002;;': (2,),
- '0;1': (0, 1),
- ';;003;;456;;': (3, 456),
- '11;22;33;44;55': (11, 22, 33, 44, 55),
- }
- for datum, expected in data.items():
- self.assertEqual(stream.extract_params('m', datum), expected)
-
- def testCallWin32UsesLookup(self):
- listener = Mock()
- stream = AnsiToWin32(listener)
- stream.win32_calls = {
- 1: (lambda *_, **__: listener(11),),
- 2: (lambda *_, **__: listener(22),),
- 3: (lambda *_, **__: listener(33),),
- }
- stream.call_win32('m', (3, 1, 99, 2))
- self.assertEqual(
- [a[0][0] for a in listener.call_args_list],
- [33, 11, 22] )
-
- def test_osc_codes(self):
- mockStdout = Mock()
- stream = AnsiToWin32(mockStdout, convert=True)
- with patch('colorama.ansitowin32.winterm') as winterm:
- data = [
- '\033]0\x07', # missing arguments
- '\033]0;foo\x08', # wrong OSC command
- '\033]0;colorama_test_title\x07', # should work
- '\033]1;colorama_test_title\x07', # wrong set command
- '\033]2;colorama_test_title\x07', # should work
- '\033]' + ';' * 64 + '\x08', # see issue #247
- ]
- for code in data:
- stream.write(code)
- self.assertEqual(winterm.set_title.call_count, 2)
-
- def test_native_windows_ansi(self):
- with ExitStack() as stack:
- def p(a, b):
- stack.enter_context(patch(a, b, create=True))
- # Pretend to be on Windows
- p("colorama.ansitowin32.os.name", "nt")
- p("colorama.ansitowin32.winapi_test", lambda: True)
- p("colorama.win32.winapi_test", lambda: True)
- p("colorama.winterm.win32.windll", "non-None")
- p("colorama.winterm.get_osfhandle", lambda _: 1234)
-
- # Pretend that our mock stream has native ANSI support
- p(
- "colorama.winterm.win32.GetConsoleMode",
- lambda _: ENABLE_VIRTUAL_TERMINAL_PROCESSING,
- )
- SetConsoleMode = Mock()
- p("colorama.winterm.win32.SetConsoleMode", SetConsoleMode)
-
- stdout = Mock()
- stdout.closed = False
- stdout.isatty.return_value = True
- stdout.fileno.return_value = 1
-
- # Our fake console says it has native vt support, so AnsiToWin32 should
- # enable that support and do nothing else.
- stream = AnsiToWin32(stdout)
- SetConsoleMode.assert_called_with(1234, ENABLE_VIRTUAL_TERMINAL_PROCESSING)
- self.assertFalse(stream.strip)
- self.assertFalse(stream.convert)
- self.assertFalse(stream.should_wrap())
-
- # Now let's pretend we're on an old Windows console, that doesn't have
- # native ANSI support.
- p("colorama.winterm.win32.GetConsoleMode", lambda _: 0)
- SetConsoleMode = Mock()
- p("colorama.winterm.win32.SetConsoleMode", SetConsoleMode)
-
- stream = AnsiToWin32(stdout)
- SetConsoleMode.assert_called_with(1234, ENABLE_VIRTUAL_TERMINAL_PROCESSING)
- self.assertTrue(stream.strip)
- self.assertTrue(stream.convert)
- self.assertTrue(stream.should_wrap())
-
-
-if __name__ == '__main__':
- main()
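
Several of these tests pin down how CSI parameter strings such as `'1;20;128'` become integer tuples: empty fields are ignored and an all-empty string falls back to `(0,)`. A simplified stand-alone sketch that reproduces the expectations in `testExtractParams` for the `'m'` command (colorama's real `extract_params` handles further command-specific cases):

```python
def extract_params_sketch(paramstring: str):
    """Parse the numeric parameters of a CSI sequence, e.g. ';;003;;456;;' -> (3, 456)."""
    params = tuple(int(p) for p in paramstring.split(';') if p != '')
    return params if params else (0,)

# The same cases exercised by testExtractParams above:
assert extract_params_sketch('') == (0,)
assert extract_params_sketch(';;') == (0,)
assert extract_params_sketch(';;002;;') == (2,)
assert extract_params_sketch('0;1') == (0, 1)
assert extract_params_sketch('11;22;33;44;55') == (11, 22, 33, 44, 55)
```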
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/repr.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/repr.py
deleted file mode 100644
index f284bcafa6ab2e1c9ae51be54107836e68cfb0d3..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/repr.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import inspect
-from functools import partial
-from typing import (
- Any,
- Callable,
- Iterable,
- List,
- Optional,
- Tuple,
- Type,
- TypeVar,
- Union,
- overload,
-)
-
-T = TypeVar("T")
-
-
-Result = Iterable[Union[Any, Tuple[Any], Tuple[str, Any], Tuple[str, Any, Any]]]
-RichReprResult = Result
-
-
-class ReprError(Exception):
- """An error occurred when attempting to build a repr."""
-
-
-@overload
-def auto(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def auto(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def auto(
- cls: Optional[Type[T]] = None, *, angular: Optional[bool] = None
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- """Class decorator to create __repr__ from __rich_repr__"""
-
- def do_replace(cls: Type[T], angular: Optional[bool] = None) -> Type[T]:
- def auto_repr(self: T) -> str:
- """Create repr string from __rich_repr__"""
- repr_str: List[str] = []
- append = repr_str.append
-
- angular: bool = getattr(self.__rich_repr__, "angular", False) # type: ignore[attr-defined]
- for arg in self.__rich_repr__(): # type: ignore[attr-defined]
- if isinstance(arg, tuple):
- if len(arg) == 1:
- append(repr(arg[0]))
- else:
- key, value, *default = arg
- if key is None:
- append(repr(value))
- else:
- if default and default[0] == value:
- continue
- append(f"{key}={value!r}")
- else:
- append(repr(arg))
- if angular:
- return f"<{self.__class__.__name__} {' '.join(repr_str)}>"
- else:
- return f"{self.__class__.__name__}({', '.join(repr_str)})"
-
- def auto_rich_repr(self: Type[T]) -> Result:
- """Auto generate __rich_rep__ from signature of __init__"""
- try:
- signature = inspect.signature(self.__init__)
- for name, param in signature.parameters.items():
- if param.kind == param.POSITIONAL_ONLY:
- yield getattr(self, name)
- elif param.kind in (
- param.POSITIONAL_OR_KEYWORD,
- param.KEYWORD_ONLY,
- ):
- if param.default == param.empty:
- yield getattr(self, param.name)
- else:
- yield param.name, getattr(self, param.name), param.default
- except Exception as error:
- raise ReprError(
- f"Failed to auto generate __rich_repr__; {error}"
- ) from None
-
- if not hasattr(cls, "__rich_repr__"):
- auto_rich_repr.__doc__ = "Build a rich repr"
- cls.__rich_repr__ = auto_rich_repr # type: ignore[attr-defined]
-
- auto_repr.__doc__ = "Return repr(self)"
- cls.__repr__ = auto_repr # type: ignore[assignment]
- if angular is not None:
- cls.__rich_repr__.angular = angular # type: ignore[attr-defined]
- return cls
-
- if cls is None:
- return partial(do_replace, angular=angular)
- else:
- return do_replace(cls, angular=angular)
-
-
-@overload
-def rich_repr(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def rich_repr(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def rich_repr(
- cls: Optional[Type[T]] = None, *, angular: bool = False
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- if cls is None:
- return auto(angular=angular)
- else:
- return auto(cls)
-
-
-if __name__ == "__main__":
-
- @auto
- class Foo:
- def __rich_repr__(self) -> Result:
- yield "foo"
- yield "bar", {"shopping": ["eggs", "ham", "pineapple"]}
- yield "buy", "hand sanitizer"
-
- foo = Foo()
- from pip._vendor.rich.console import Console
-
- console = Console()
-
- console.rule("Standard repr")
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
-
- console.rule("Angular repr")
- Foo.__rich_repr__.angular = True # type: ignore[attr-defined]
-
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
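
The decorator above derives `__repr__` from `__rich_repr__`, where each yielded item is a bare value, a `(key, value)` pair, or a `(key, value, default)` triple that is dropped when the value equals its default. A short usage sketch against the public `rich` package (the file above is pip's vendored copy of the same module); the `Job` class and its fields are illustrative:

```python
import rich.repr

@rich.repr.auto
class Job:
    def __init__(self, name: str, retries: int = 3, verbose: bool = False) -> None:
        self.name = name
        self.retries = retries
        self.verbose = verbose

    def __rich_repr__(self):
        yield self.name                   # bare value -> positional in the repr
        yield "retries", self.retries, 3  # omitted while equal to the default
        yield "verbose", self.verbose

print(repr(Job("sync")))             # Job('sync', verbose=False)
print(repr(Job("sync", retries=5)))  # Job('sync', retries=5, verbose=False)
```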
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py
deleted file mode 100644
index 729c2dd5217528d7b3f9220cc2c7981f95c6f6e1..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py
+++ /dev/null
@@ -1,572 +0,0 @@
-"""distutils._msvccompiler
-
-Contains MSVCCompiler, an implementation of the abstract CCompiler class
-for Microsoft Visual Studio 2015.
-
-The module is compatible with VS 2015 and later. You can find legacy support
-for older versions in distutils.msvc9compiler and distutils.msvccompiler.
-"""
-
-# Written by Perry Stoll
-# hacked by Robin Becker and Thomas Heller to do a better job of
-# finding DevStudio (through the registry)
-# ported to VS 2005 and VS 2008 by Christian Heimes
-# ported to VS 2015 by Steve Dower
-
-import os
-import subprocess
-import contextlib
-import warnings
-import unittest.mock as mock
-
-with contextlib.suppress(ImportError):
- import winreg
-
-from distutils.errors import (
- DistutilsExecError,
- DistutilsPlatformError,
- CompileError,
- LibError,
- LinkError,
-)
-from distutils.ccompiler import CCompiler, gen_lib_options
-from distutils import log
-from distutils.util import get_platform
-
-from itertools import count
-
-
-def _find_vc2015():
- try:
- key = winreg.OpenKeyEx(
- winreg.HKEY_LOCAL_MACHINE,
- r"Software\Microsoft\VisualStudio\SxS\VC7",
- access=winreg.KEY_READ | winreg.KEY_WOW64_32KEY,
- )
- except OSError:
- log.debug("Visual C++ is not registered")
- return None, None
-
- best_version = 0
- best_dir = None
- with key:
- for i in count():
- try:
- v, vc_dir, vt = winreg.EnumValue(key, i)
- except OSError:
- break
- if v and vt == winreg.REG_SZ and os.path.isdir(vc_dir):
- try:
- version = int(float(v))
- except (ValueError, TypeError):
- continue
- if version >= 14 and version > best_version:
- best_version, best_dir = version, vc_dir
- return best_version, best_dir
-
-
-def _find_vc2017():
- """Returns "15, path" based on the result of invoking vswhere.exe
- If no install is found, returns "None, None"
-
- The version is returned to avoid unnecessarily changing the function
- result. It may be ignored when the path is not None.
-
- If vswhere.exe is not available, by definition, VS 2017 is not
- installed.
- """
- root = os.environ.get("ProgramFiles(x86)") or os.environ.get("ProgramFiles")
- if not root:
- return None, None
-
- try:
- path = subprocess.check_output(
- [
- os.path.join(
- root, "Microsoft Visual Studio", "Installer", "vswhere.exe"
- ),
- "-latest",
- "-prerelease",
- "-requires",
- "Microsoft.VisualStudio.Component.VC.Tools.x86.x64",
- "-property",
- "installationPath",
- "-products",
- "*",
- ],
- encoding="mbcs",
- errors="strict",
- ).strip()
- except (subprocess.CalledProcessError, OSError, UnicodeDecodeError):
- return None, None
-
- path = os.path.join(path, "VC", "Auxiliary", "Build")
- if os.path.isdir(path):
- return 15, path
-
- return None, None
-
-
-PLAT_SPEC_TO_RUNTIME = {
- 'x86': 'x86',
- 'x86_amd64': 'x64',
- 'x86_arm': 'arm',
- 'x86_arm64': 'arm64',
-}
-
-
-def _find_vcvarsall(plat_spec):
- # bpo-38597: Removed vcruntime return value
- _, best_dir = _find_vc2017()
-
- if not best_dir:
- best_version, best_dir = _find_vc2015()
-
- if not best_dir:
- log.debug("No suitable Visual C++ version found")
- return None, None
-
- vcvarsall = os.path.join(best_dir, "vcvarsall.bat")
- if not os.path.isfile(vcvarsall):
- log.debug("%s cannot be found", vcvarsall)
- return None, None
-
- return vcvarsall, None
-
-
-def _get_vc_env(plat_spec):
- if os.getenv("DISTUTILS_USE_SDK"):
- return {key.lower(): value for key, value in os.environ.items()}
-
- vcvarsall, _ = _find_vcvarsall(plat_spec)
- if not vcvarsall:
- raise DistutilsPlatformError("Unable to find vcvarsall.bat")
-
- try:
- out = subprocess.check_output(
- f'cmd /u /c "{vcvarsall}" {plat_spec} && set',
- stderr=subprocess.STDOUT,
- ).decode('utf-16le', errors='replace')
- except subprocess.CalledProcessError as exc:
- log.error(exc.output)
- raise DistutilsPlatformError(f"Error executing {exc.cmd}")
-
- env = {
- key.lower(): value
- for key, _, value in (line.partition('=') for line in out.splitlines())
- if key and value
- }
-
- return env
-
-
-def _find_exe(exe, paths=None):
- """Return path to an MSVC executable program.
-
- Tries to find the program in several places: first, one of the
- MSVC program search paths from the registry; next, the directories
- in the PATH environment variable. If any of those work, return an
- absolute path that is known to exist. If none of them work, just
- return the original program name, 'exe'.
- """
- if not paths:
- paths = os.getenv('path').split(os.pathsep)
- for p in paths:
- fn = os.path.join(os.path.abspath(p), exe)
- if os.path.isfile(fn):
- return fn
- return exe
-
-
-# A map keyed by get_platform() return values to values accepted by
-# 'vcvarsall.bat'. Always cross-compile from x86 to work with the
-# lighter-weight MSVC installs that do not include native 64-bit tools.
-PLAT_TO_VCVARS = {
- 'win32': 'x86',
- 'win-amd64': 'x86_amd64',
- 'win-arm32': 'x86_arm',
- 'win-arm64': 'x86_arm64',
-}
-
-
-class MSVCCompiler(CCompiler):
- """Concrete class that implements an interface to Microsoft Visual C++,
- as defined by the CCompiler abstract class."""
-
- compiler_type = 'msvc'
-
- # Just set this so CCompiler's constructor doesn't barf. We currently
- # don't use the 'set_executables()' bureaucracy provided by CCompiler,
- # as it really isn't necessary for this sort of single-compiler class.
- # Would be nice to have a consistent interface with UnixCCompiler,
- # though, so it's worth thinking about.
- executables = {}
-
- # Private class data (need to distinguish C from C++ source for compiler)
- _c_extensions = ['.c']
- _cpp_extensions = ['.cc', '.cpp', '.cxx']
- _rc_extensions = ['.rc']
- _mc_extensions = ['.mc']
-
- # Needed for the filename generation methods provided by the
- # base class, CCompiler.
- src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions
- res_extension = '.res'
- obj_extension = '.obj'
- static_lib_extension = '.lib'
- shared_lib_extension = '.dll'
- static_lib_format = shared_lib_format = '%s%s'
- exe_extension = '.exe'
-
- def __init__(self, verbose=0, dry_run=0, force=0):
- super().__init__(verbose, dry_run, force)
- # target platform (.plat_name is consistent with 'bdist')
- self.plat_name = None
- self.initialized = False
-
- @classmethod
- def _configure(cls, vc_env):
- """
- Set class-level include/lib dirs.
- """
- cls.include_dirs = cls._parse_path(vc_env.get('include', ''))
- cls.library_dirs = cls._parse_path(vc_env.get('lib', ''))
-
- @staticmethod
- def _parse_path(val):
- return [dir.rstrip(os.sep) for dir in val.split(os.pathsep) if dir]
-
- def initialize(self, plat_name=None):
- # multi-init means we would need to check platform same each time...
- assert not self.initialized, "don't init multiple times"
- if plat_name is None:
- plat_name = get_platform()
- # sanity check for platforms to prevent obscure errors later.
- if plat_name not in PLAT_TO_VCVARS:
- raise DistutilsPlatformError(
- f"--plat-name must be one of {tuple(PLAT_TO_VCVARS)}"
- )
-
- # Get the vcvarsall.bat spec for the requested platform.
- plat_spec = PLAT_TO_VCVARS[plat_name]
-
- vc_env = _get_vc_env(plat_spec)
- if not vc_env:
- raise DistutilsPlatformError(
- "Unable to find a compatible " "Visual Studio installation."
- )
- self._configure(vc_env)
-
- self._paths = vc_env.get('path', '')
- paths = self._paths.split(os.pathsep)
- self.cc = _find_exe("cl.exe", paths)
- self.linker = _find_exe("link.exe", paths)
- self.lib = _find_exe("lib.exe", paths)
- self.rc = _find_exe("rc.exe", paths) # resource compiler
- self.mc = _find_exe("mc.exe", paths) # message compiler
-        self.mt = _find_exe("mt.exe", paths) # manifest tool
-
- self.preprocess_options = None
- # bpo-38597: Always compile with dynamic linking
- # Future releases of Python 3.x will include all past
- # versions of vcruntime*.dll for compatibility.
- self.compile_options = ['/nologo', '/O2', '/W3', '/GL', '/DNDEBUG', '/MD']
-
- self.compile_options_debug = [
- '/nologo',
- '/Od',
- '/MDd',
- '/Zi',
- '/W3',
- '/D_DEBUG',
- ]
-
- ldflags = ['/nologo', '/INCREMENTAL:NO', '/LTCG']
-
- ldflags_debug = ['/nologo', '/INCREMENTAL:NO', '/LTCG', '/DEBUG:FULL']
-
- self.ldflags_exe = [*ldflags, '/MANIFEST:EMBED,ID=1']
- self.ldflags_exe_debug = [*ldflags_debug, '/MANIFEST:EMBED,ID=1']
- self.ldflags_shared = [
- *ldflags,
- '/DLL',
- '/MANIFEST:EMBED,ID=2',
- '/MANIFESTUAC:NO',
- ]
- self.ldflags_shared_debug = [
- *ldflags_debug,
- '/DLL',
- '/MANIFEST:EMBED,ID=2',
- '/MANIFESTUAC:NO',
- ]
- self.ldflags_static = [*ldflags]
- self.ldflags_static_debug = [*ldflags_debug]
-
- self._ldflags = {
- (CCompiler.EXECUTABLE, None): self.ldflags_exe,
- (CCompiler.EXECUTABLE, False): self.ldflags_exe,
- (CCompiler.EXECUTABLE, True): self.ldflags_exe_debug,
- (CCompiler.SHARED_OBJECT, None): self.ldflags_shared,
- (CCompiler.SHARED_OBJECT, False): self.ldflags_shared,
- (CCompiler.SHARED_OBJECT, True): self.ldflags_shared_debug,
- (CCompiler.SHARED_LIBRARY, None): self.ldflags_static,
- (CCompiler.SHARED_LIBRARY, False): self.ldflags_static,
- (CCompiler.SHARED_LIBRARY, True): self.ldflags_static_debug,
- }
-
- self.initialized = True
-
- # -- Worker methods ------------------------------------------------
-
- @property
- def out_extensions(self):
- return {
- **super().out_extensions,
- **{
- ext: self.res_extension
- for ext in self._rc_extensions + self._mc_extensions
- },
- }
-
- def compile( # noqa: C901
- self,
- sources,
- output_dir=None,
- macros=None,
- include_dirs=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- depends=None,
- ):
-
- if not self.initialized:
- self.initialize()
- compile_info = self._setup_compile(
- output_dir, macros, include_dirs, sources, depends, extra_postargs
- )
- macros, objects, extra_postargs, pp_opts, build = compile_info
-
- compile_opts = extra_preargs or []
- compile_opts.append('/c')
- if debug:
- compile_opts.extend(self.compile_options_debug)
- else:
- compile_opts.extend(self.compile_options)
-
- add_cpp_opts = False
-
- for obj in objects:
- try:
- src, ext = build[obj]
- except KeyError:
- continue
- if debug:
- # pass the full pathname to MSVC in debug mode,
- # this allows the debugger to find the source file
- # without asking the user to browse for it
- src = os.path.abspath(src)
-
- if ext in self._c_extensions:
- input_opt = "/Tc" + src
- elif ext in self._cpp_extensions:
- input_opt = "/Tp" + src
- add_cpp_opts = True
- elif ext in self._rc_extensions:
- # compile .RC to .RES file
- input_opt = src
- output_opt = "/fo" + obj
- try:
- self.spawn([self.rc] + pp_opts + [output_opt, input_opt])
- except DistutilsExecError as msg:
- raise CompileError(msg)
- continue
- elif ext in self._mc_extensions:
- # Compile .MC to .RC file to .RES file.
- # * '-h dir' specifies the directory for the
- # generated include file
- # * '-r dir' specifies the target directory of the
- # generated RC file and the binary message resource
- # it includes
- #
- # For now (since there are no options to change this),
- # we use the source-directory for the include file and
- # the build directory for the RC file and message
- # resources. This works at least for win32all.
- h_dir = os.path.dirname(src)
- rc_dir = os.path.dirname(obj)
- try:
- # first compile .MC to .RC and .H file
- self.spawn([self.mc, '-h', h_dir, '-r', rc_dir, src])
- base, _ = os.path.splitext(os.path.basename(src))
- rc_file = os.path.join(rc_dir, base + '.rc')
- # then compile .RC to .RES file
- self.spawn([self.rc, "/fo" + obj, rc_file])
-
- except DistutilsExecError as msg:
- raise CompileError(msg)
- continue
- else:
- # how to handle this file?
- raise CompileError(f"Don't know how to compile {src} to {obj}")
-
- args = [self.cc] + compile_opts + pp_opts
- if add_cpp_opts:
- args.append('/EHsc')
- args.append(input_opt)
- args.append("/Fo" + obj)
- args.extend(extra_postargs)
-
- try:
- self.spawn(args)
- except DistutilsExecError as msg:
- raise CompileError(msg)
-
- return objects
-
- def create_static_lib(
- self, objects, output_libname, output_dir=None, debug=0, target_lang=None
- ):
-
- if not self.initialized:
- self.initialize()
- objects, output_dir = self._fix_object_args(objects, output_dir)
- output_filename = self.library_filename(output_libname, output_dir=output_dir)
-
- if self._need_link(objects, output_filename):
- lib_args = objects + ['/OUT:' + output_filename]
- if debug:
- pass # XXX what goes here?
- try:
- log.debug('Executing "%s" %s', self.lib, ' '.join(lib_args))
- self.spawn([self.lib] + lib_args)
- except DistutilsExecError as msg:
- raise LibError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- def link(
- self,
- target_desc,
- objects,
- output_filename,
- output_dir=None,
- libraries=None,
- library_dirs=None,
- runtime_library_dirs=None,
- export_symbols=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- build_temp=None,
- target_lang=None,
- ):
-
- if not self.initialized:
- self.initialize()
- objects, output_dir = self._fix_object_args(objects, output_dir)
- fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs)
- libraries, library_dirs, runtime_library_dirs = fixed_args
-
- if runtime_library_dirs:
- self.warn(
- "I don't know what to do with 'runtime_library_dirs': "
- + str(runtime_library_dirs)
- )
-
- lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries)
- if output_dir is not None:
- output_filename = os.path.join(output_dir, output_filename)
-
- if self._need_link(objects, output_filename):
- ldflags = self._ldflags[target_desc, debug]
-
- export_opts = ["/EXPORT:" + sym for sym in (export_symbols or [])]
-
- ld_args = (
- ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename]
- )
-
- # The MSVC linker generates .lib and .exp files, which cannot be
- # suppressed by any linker switches. The .lib files may even be
- # needed! Make sure they are generated in the temporary build
- # directory. Since they have different names for debug and release
- # builds, they can go into the same directory.
- build_temp = os.path.dirname(objects[0])
- if export_symbols is not None:
- (dll_name, dll_ext) = os.path.splitext(
- os.path.basename(output_filename)
- )
- implib_file = os.path.join(build_temp, self.library_filename(dll_name))
- ld_args.append('/IMPLIB:' + implib_file)
-
- if extra_preargs:
- ld_args[:0] = extra_preargs
- if extra_postargs:
- ld_args.extend(extra_postargs)
-
- output_dir = os.path.dirname(os.path.abspath(output_filename))
- self.mkpath(output_dir)
- try:
- log.debug('Executing "%s" %s', self.linker, ' '.join(ld_args))
- self.spawn([self.linker] + ld_args)
- except DistutilsExecError as msg:
- raise LinkError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- def spawn(self, cmd):
- env = dict(os.environ, PATH=self._paths)
- with self._fallback_spawn(cmd, env) as fallback:
- return super().spawn(cmd, env=env)
- return fallback.value
-
- @contextlib.contextmanager
- def _fallback_spawn(self, cmd, env):
- """
- Discovered in pypa/distutils#15, some tools monkeypatch the compiler,
- so the 'env' kwarg causes a TypeError. Detect this condition and
- restore the legacy, unsafe behavior.
- """
- bag = type('Bag', (), {})()
- try:
- yield bag
- except TypeError as exc:
- if "unexpected keyword argument 'env'" not in str(exc):
- raise
- else:
- return
- warnings.warn("Fallback spawn triggered. Please update distutils monkeypatch.")
- with mock.patch.dict('os.environ', env):
- bag.value = super().spawn(cmd)
-
- # -- Miscellaneous methods -----------------------------------------
-    # These are all used by the 'gen_lib_options()' function, in
- # ccompiler.py.
-
- def library_dir_option(self, dir):
- return "/LIBPATH:" + dir
-
- def runtime_library_dir_option(self, dir):
- raise DistutilsPlatformError(
- "don't know how to set runtime library search path for MSVC"
- )
-
- def library_option(self, lib):
- return self.library_filename(lib)
-
- def find_library_file(self, dirs, lib, debug=0):
- # Prefer a debugging library if found (and requested), but deal
- # with it if we don't have one.
- if debug:
- try_names = [lib + "_d", lib]
- else:
- try_names = [lib]
- for dir in dirs:
- for name in try_names:
- libfile = os.path.join(dir, self.library_filename(name))
- if os.path.isfile(libfile):
- return libfile
- else:
- # Oops, didn't find it in *any* of 'dirs'
- return None
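
The key trick in `_get_vc_env` above is to run `cmd /u /c "vcvarsall.bat" <plat_spec> && set`, decode the UTF-16-LE output, and fold the resulting `KEY=VALUE` lines into a lower-cased dict. A platform-independent sketch of just that parsing step, fed a hard-coded sample string so it runs anywhere (the paths are illustrative):

```python
def parse_env_dump(out: str) -> dict:
    """Turn the KEY=VALUE lines printed by `set` into a dict with lower-cased keys."""
    return {
        key.lower(): value
        for key, _, value in (line.partition('=') for line in out.splitlines())
        if key and value
    }

sample = "PATH=C:\\VC\\bin;C:\\Windows\r\nINCLUDE=C:\\VC\\include\r\nLIB=C:\\VC\\lib\r\n"
env = parse_env_dump(sample)
print(env["include"])  # C:\VC\include
```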
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/custom_solver.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/custom_solver.py
deleted file mode 100644
index 87f7d61ed756acf9326b7ab4097a989a9e6c7532..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/custom_solver.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Modified by Jialian Wu from https://github.com/facebookresearch/Detic/blob/main/detic/custom_solver.py
-import itertools
-from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union
-import torch
-
-from detectron2.config import CfgNode
-
-from detectron2.solver.build import maybe_add_gradient_clipping
-
-
-def build_custom_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer:
- params: List[Dict[str, Any]] = []
- memo: Set[torch.nn.parameter.Parameter] = set()
- optimizer_type = cfg.SOLVER.OPTIMIZER
-
- for key, value in model.named_parameters(recurse=True):
- if not value.requires_grad:
- continue
- # Avoid duplicating parameters
- if value in memo:
- continue
- memo.add(value)
- lr = cfg.SOLVER.BASE_LR
- weight_decay = cfg.SOLVER.WEIGHT_DECAY
-
- if cfg.SOLVER.VIT_LAYER_DECAY:
- lr = lr * get_vit_lr_decay_rate(key, cfg.SOLVER.VIT_LAYER_DECAY_RATE, cfg.MODEL.VIT_LAYERS)
-
- param = {"params": [value], "lr": lr}
- if optimizer_type != 'ADAMW':
- param['weight_decay'] = weight_decay
- params += [param]
-
- def maybe_add_full_model_gradient_clipping(optim): # optim: the optimizer class
- # detectron2 doesn't have full model gradient clipping now
- clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE
- enable = (
- cfg.SOLVER.CLIP_GRADIENTS.ENABLED
- and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model"
- and clip_norm_val > 0.0
- )
-
- class FullModelGradientClippingOptimizer(optim):
- def step(self, closure=None):
- all_params = itertools.chain(*[x["params"] for x in self.param_groups])
- torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val)
- super().step(closure=closure)
-
- return FullModelGradientClippingOptimizer if enable else optim
-
-
- if optimizer_type == 'SGD':
- optimizer = maybe_add_full_model_gradient_clipping(torch.optim.SGD)(
- params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM,
- nesterov=cfg.SOLVER.NESTEROV
- )
- elif optimizer_type == 'ADAMW':
- optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)(
- params, cfg.SOLVER.BASE_LR,
- weight_decay=cfg.SOLVER.WEIGHT_DECAY
- )
- else:
- raise NotImplementedError(f"no optimizer type {optimizer_type}")
- if not cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model":
- optimizer = maybe_add_gradient_clipping(cfg, optimizer)
- return optimizer
-
-
-def get_vit_lr_decay_rate(name, lr_decay_rate=1.0, num_layers=12):
- """
- Calculate lr decay rate for different ViT blocks.
- Args:
- name (string): parameter name.
- lr_decay_rate (float): base lr decay rate.
- num_layers (int): number of ViT blocks.
-
- Returns:
- lr decay rate for the given parameter.
- """
- layer_id = num_layers + 1
- if name.startswith("backbone"):
- if ".pos_embed" in name or ".patch_embed" in name:
- layer_id = 0
- elif ".blocks." in name and ".residual." not in name:
- layer_id = int(name[name.find(".blocks.") :].split(".")[2]) + 1
-
- return lr_decay_rate ** (num_layers + 1 - layer_id)
\ No newline at end of file
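
`get_vit_lr_decay_rate` maps the embedding layers to id 0, block *i* to id *i + 1*, and everything else (heads, non-backbone modules) to `num_layers + 1`, then scales the base learning rate by `lr_decay_rate ** (num_layers + 1 - layer_id)`. A small worked sketch that restates the rule so the snippet is self-contained; the decay rate of 0.7 and the parameter names are illustrative:

```python
def vit_lr_multiplier(name: str, lr_decay_rate: float = 0.7, num_layers: int = 12) -> float:
    layer_id = num_layers + 1                      # default: heads / non-backbone params
    if name.startswith("backbone"):
        if ".pos_embed" in name or ".patch_embed" in name:
            layer_id = 0                           # embeddings decay the most
        elif ".blocks." in name and ".residual." not in name:
            layer_id = int(name[name.find(".blocks."):].split(".")[2]) + 1
    return lr_decay_rate ** (num_layers + 1 - layer_id)

base_lr = 1e-4
for n in ["backbone.patch_embed.proj.weight",
          "backbone.blocks.0.attn.qkv.weight",
          "backbone.blocks.11.mlp.fc1.weight",
          "roi_heads.box_predictor.cls_score.weight"]:
    print(f"{n}: lr = {base_lr * vit_lr_multiplier(n):.2e}")
```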
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py
deleted file mode 100644
index 9fb3462b7f9abf6feaa499976bfed526ebd17e31..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import contextlib
-import io
-import itertools
-import json
-import logging
-import numpy as np
-import os
-import tempfile
-from collections import OrderedDict
-from typing import Optional
-from PIL import Image
-from tabulate import tabulate
-
-from detectron2.data import MetadataCatalog
-from detectron2.utils import comm
-from detectron2.utils.file_io import PathManager
-
-from .evaluator import DatasetEvaluator
-
-logger = logging.getLogger(__name__)
-
-
-class COCOPanopticEvaluator(DatasetEvaluator):
- """
- Evaluate Panoptic Quality metrics on COCO using PanopticAPI.
- It saves panoptic segmentation prediction in `output_dir`
-
- It contains a synchronize call and has to be called from all workers.
- """
-
- def __init__(self, dataset_name: str, output_dir: Optional[str] = None):
- """
- Args:
- dataset_name: name of the dataset
- output_dir: output directory to save results for evaluation.
- """
- self._metadata = MetadataCatalog.get(dataset_name)
- self._thing_contiguous_id_to_dataset_id = {
- v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
- }
- self._stuff_contiguous_id_to_dataset_id = {
- v: k for k, v in self._metadata.stuff_dataset_id_to_contiguous_id.items()
- }
-
- self._output_dir = output_dir
- if self._output_dir is not None:
- PathManager.mkdirs(self._output_dir)
-
- def reset(self):
- self._predictions = []
-
- def _convert_category_id(self, segment_info):
- isthing = segment_info.pop("isthing", None)
- if isthing is None:
- # the model produces panoptic category id directly. No more conversion needed
- return segment_info
- if isthing is True:
- segment_info["category_id"] = self._thing_contiguous_id_to_dataset_id[
- segment_info["category_id"]
- ]
- else:
- segment_info["category_id"] = self._stuff_contiguous_id_to_dataset_id[
- segment_info["category_id"]
- ]
- return segment_info
-
- def process(self, inputs, outputs):
- from panopticapi.utils import id2rgb
-
- for input, output in zip(inputs, outputs):
- panoptic_img, segments_info = output["panoptic_seg"]
- panoptic_img = panoptic_img.cpu().numpy()
- if segments_info is None:
- # If "segments_info" is None, we assume "panoptic_img" is a
- # H*W int32 image storing the panoptic_id in the format of
- # category_id * label_divisor + instance_id. We reserve -1 for
- # VOID label, and add 1 to panoptic_img since the official
- # evaluation script uses 0 for VOID label.
- label_divisor = self._metadata.label_divisor
- segments_info = []
- for panoptic_label in np.unique(panoptic_img):
- if panoptic_label == -1:
- # VOID region.
- continue
- pred_class = panoptic_label // label_divisor
- isthing = (
- pred_class in self._metadata.thing_dataset_id_to_contiguous_id.values()
- )
- segments_info.append(
- {
- "id": int(panoptic_label) + 1,
- "category_id": int(pred_class),
- "isthing": bool(isthing),
- }
- )
- # Official evaluation script uses 0 for VOID label.
- panoptic_img += 1
-
- file_name = os.path.basename(input["file_name"])
- file_name_png = os.path.splitext(file_name)[0] + ".png"
- with io.BytesIO() as out:
- Image.fromarray(id2rgb(panoptic_img)).save(out, format="PNG")
- segments_info = [self._convert_category_id(x) for x in segments_info]
- self._predictions.append(
- {
- "image_id": input["image_id"],
- "file_name": file_name_png,
- "png_string": out.getvalue(),
- "segments_info": segments_info,
- }
- )
-
- def evaluate(self):
- comm.synchronize()
-
- self._predictions = comm.gather(self._predictions)
- self._predictions = list(itertools.chain(*self._predictions))
- if not comm.is_main_process():
- return
-
- # PanopticApi requires local files
- gt_json = PathManager.get_local_path(self._metadata.panoptic_json)
- gt_folder = PathManager.get_local_path(self._metadata.panoptic_root)
-
- with tempfile.TemporaryDirectory(prefix="panoptic_eval") as pred_dir:
- logger.info("Writing all panoptic predictions to {} ...".format(pred_dir))
- for p in self._predictions:
- with open(os.path.join(pred_dir, p["file_name"]), "wb") as f:
- f.write(p.pop("png_string"))
-
- with open(gt_json, "r") as f:
- json_data = json.load(f)
- json_data["annotations"] = self._predictions
-
- output_dir = self._output_dir or pred_dir
- predictions_json = os.path.join(output_dir, "predictions.json")
- with PathManager.open(predictions_json, "w") as f:
- f.write(json.dumps(json_data))
-
- from panopticapi.evaluation import pq_compute
-
- with contextlib.redirect_stdout(io.StringIO()):
- pq_res = pq_compute(
- gt_json,
- PathManager.get_local_path(predictions_json),
- gt_folder=gt_folder,
- pred_folder=pred_dir,
- )
-
- res = {}
- res["PQ"] = 100 * pq_res["All"]["pq"]
- res["SQ"] = 100 * pq_res["All"]["sq"]
- res["RQ"] = 100 * pq_res["All"]["rq"]
- res["PQ_th"] = 100 * pq_res["Things"]["pq"]
- res["SQ_th"] = 100 * pq_res["Things"]["sq"]
- res["RQ_th"] = 100 * pq_res["Things"]["rq"]
- res["PQ_st"] = 100 * pq_res["Stuff"]["pq"]
- res["SQ_st"] = 100 * pq_res["Stuff"]["sq"]
- res["RQ_st"] = 100 * pq_res["Stuff"]["rq"]
-
- results = OrderedDict({"panoptic_seg": res})
- _print_panoptic_results(pq_res)
-
- return results
-
-
-def _print_panoptic_results(pq_res):
- headers = ["", "PQ", "SQ", "RQ", "#categories"]
- data = []
- for name in ["All", "Things", "Stuff"]:
- row = [name] + [pq_res[name][k] * 100 for k in ["pq", "sq", "rq"]] + [pq_res[name]["n"]]
- data.append(row)
- table = tabulate(
- data, headers=headers, tablefmt="pipe", floatfmt=".3f", stralign="center", numalign="center"
- )
- logger.info("Panoptic Evaluation Results:\n" + table)
-
-
-if __name__ == "__main__":
- from detectron2.utils.logger import setup_logger
-
- logger = setup_logger()
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--gt-json")
- parser.add_argument("--gt-dir")
- parser.add_argument("--pred-json")
- parser.add_argument("--pred-dir")
- args = parser.parse_args()
-
- from panopticapi.evaluation import pq_compute
-
- with contextlib.redirect_stdout(io.StringIO()):
- pq_res = pq_compute(
- args.gt_json, args.pred_json, gt_folder=args.gt_dir, pred_folder=args.pred_dir
- )
- _print_panoptic_results(pq_res)
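
When `segments_info` is `None`, the evaluator above assumes each pixel of `panoptic_img` encodes `category_id * label_divisor + instance_id`, with -1 reserved for VOID and a +1 shift applied because the official script uses 0 for VOID. A small numpy sketch of that encode/decode arithmetic; the `label_divisor` of 1000 is the usual COCO value but purely illustrative here:

```python
import numpy as np

LABEL_DIVISOR = 1000

def encode(category_id: int, instance_id: int) -> int:
    return category_id * LABEL_DIVISOR + instance_id

def decode(panoptic_label: int):
    return panoptic_label // LABEL_DIVISOR, panoptic_label % LABEL_DIVISOR

panoptic_img = np.array([[encode(17, 2), encode(17, 3)],
                         [encode(0, 0),  -1]])        # -1 marks a VOID region
for label in np.unique(panoptic_img):
    if label == -1:
        continue                                      # skip VOID, as process() does
    print(int(label), "->", decode(int(label)))
```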
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py
deleted file mode 100644
index 382048e533708dec3fabf89528564ebc2ad4c83f..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py
+++ /dev/null
@@ -1,268 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import logging
-import numpy as np
-import unittest
-from unittest import mock
-import torch
-from PIL import Image, ImageOps
-from torch.nn import functional as F
-
-from detectron2.config import get_cfg
-from detectron2.data import detection_utils
-from detectron2.data import transforms as T
-from detectron2.utils.logger import setup_logger
-
-logger = logging.getLogger(__name__)
-
-
-def polygon_allclose(poly1, poly2):
- """
- Test whether two polygons are the same.
- Both arguments are nx2 numpy arrays.
- """
- # ABCD and CDAB are the same polygon. So it's important to check after rolling
- for k in range(len(poly1)):
- rolled_poly1 = np.roll(poly1, k, axis=0)
- if np.allclose(rolled_poly1, poly2):
- return True
- return False
-
-
-class TestTransforms(unittest.TestCase):
- def setUp(self):
- setup_logger()
-
- def test_apply_rotated_boxes(self):
- np.random.seed(125)
- cfg = get_cfg()
- is_train = True
- augs = detection_utils.build_augmentation(cfg, is_train)
- image = np.random.rand(200, 300)
- image, transforms = T.apply_augmentations(augs, image)
- image_shape = image.shape[:2] # h, w
- assert image_shape == (800, 1200)
- annotation = {"bbox": [179, 97, 62, 40, -56]}
-
- boxes = np.array([annotation["bbox"]], dtype=np.float64) # boxes.shape = (1, 5)
- transformed_bbox = transforms.apply_rotated_box(boxes)[0]
-
- expected_bbox = np.array([484, 388, 248, 160, 56], dtype=np.float64)
- err_msg = "transformed_bbox = {}, expected {}".format(transformed_bbox, expected_bbox)
- assert np.allclose(transformed_bbox, expected_bbox), err_msg
-
- def test_resize_and_crop(self):
- np.random.seed(125)
- min_scale = 0.2
- max_scale = 2.0
- target_height = 1100
- target_width = 1000
- resize_aug = T.ResizeScale(min_scale, max_scale, target_height, target_width)
- fixed_size_crop_aug = T.FixedSizeCrop((target_height, target_width))
- hflip_aug = T.RandomFlip()
- augs = [resize_aug, fixed_size_crop_aug, hflip_aug]
- original_image = np.random.rand(900, 800)
- image, transforms = T.apply_augmentations(augs, original_image)
- image_shape = image.shape[:2] # h, w
- self.assertEqual((1100, 1000), image_shape)
-
- boxes = np.array(
- [[91, 46, 144, 111], [523, 251, 614, 295]],
- dtype=np.float64,
- )
- transformed_bboxs = transforms.apply_box(boxes)
- expected_bboxs = np.array(
- [
- [895.42, 33.42666667, 933.91125, 80.66],
- [554.0825, 182.39333333, 620.17125, 214.36666667],
- ],
- dtype=np.float64,
- )
- err_msg = "transformed_bbox = {}, expected {}".format(transformed_bboxs, expected_bboxs)
- self.assertTrue(np.allclose(transformed_bboxs, expected_bboxs), err_msg)
-
- polygon = np.array([[91, 46], [144, 46], [144, 111], [91, 111]])
- transformed_polygons = transforms.apply_polygons([polygon])
- expected_polygon = np.array([[934.0, 33.0], [934.0, 80.0], [896.0, 80.0], [896.0, 33.0]])
- self.assertEqual(1, len(transformed_polygons))
- err_msg = "transformed_polygon = {}, expected {}".format(
- transformed_polygons[0], expected_polygon
- )
- self.assertTrue(polygon_allclose(transformed_polygons[0], expected_polygon), err_msg)
-
- def test_apply_rotated_boxes_unequal_scaling_factor(self):
- np.random.seed(125)
- h, w = 400, 200
- newh, neww = 800, 800
- image = np.random.rand(h, w)
- augs = []
- augs.append(T.Resize(shape=(newh, neww)))
- image, transforms = T.apply_augmentations(augs, image)
- image_shape = image.shape[:2] # h, w
- assert image_shape == (newh, neww)
-
- boxes = np.array(
- [
- [150, 100, 40, 20, 0],
- [150, 100, 40, 20, 30],
- [150, 100, 40, 20, 90],
- [150, 100, 40, 20, -90],
- ],
- dtype=np.float64,
- )
- transformed_boxes = transforms.apply_rotated_box(boxes)
-
- expected_bboxes = np.array(
- [
- [600, 200, 160, 40, 0],
- [600, 200, 144.22205102, 52.91502622, 49.10660535],
- [600, 200, 80, 80, 90],
- [600, 200, 80, 80, -90],
- ],
- dtype=np.float64,
- )
- err_msg = "transformed_boxes = {}, expected {}".format(transformed_boxes, expected_bboxes)
- assert np.allclose(transformed_boxes, expected_bboxes), err_msg
-
- def test_print_augmentation(self):
- t = T.RandomCrop("relative", (100, 100))
- self.assertEqual(str(t), "RandomCrop(crop_type='relative', crop_size=(100, 100))")
-
- t0 = T.RandomFlip(prob=0.5)
- self.assertEqual(str(t0), "RandomFlip(prob=0.5)")
-
- t1 = T.RandomFlip()
- self.assertEqual(str(t1), "RandomFlip()")
-
- t = T.AugmentationList([t0, t1])
- self.assertEqual(str(t), f"AugmentationList[{t0}, {t1}]")
-
- def test_random_apply_prob_out_of_range_check(self):
- test_probabilities = {0.0: True, 0.5: True, 1.0: True, -0.01: False, 1.01: False}
-
- for given_probability, is_valid in test_probabilities.items():
- if not is_valid:
- self.assertRaises(AssertionError, T.RandomApply, None, prob=given_probability)
- else:
- T.RandomApply(T.NoOpTransform(), prob=given_probability)
-
- def test_random_apply_wrapping_aug_probability_occured_evaluation(self):
- transform_mock = mock.MagicMock(name="MockTransform", spec=T.Augmentation)
- image_mock = mock.MagicMock(name="MockImage")
- random_apply = T.RandomApply(transform_mock, prob=0.001)
-
- with mock.patch.object(random_apply, "_rand_range", return_value=0.0001):
- transform = random_apply.get_transform(image_mock)
- transform_mock.get_transform.assert_called_once_with(image_mock)
- self.assertIsNot(transform, transform_mock)
-
- def test_random_apply_wrapping_std_transform_probability_occured_evaluation(self):
- transform_mock = mock.MagicMock(name="MockTransform", spec=T.Transform)
- image_mock = mock.MagicMock(name="MockImage")
- random_apply = T.RandomApply(transform_mock, prob=0.001)
-
- with mock.patch.object(random_apply, "_rand_range", return_value=0.0001):
- transform = random_apply.get_transform(image_mock)
- self.assertIs(transform, transform_mock)
-
- def test_random_apply_probability_not_occured_evaluation(self):
- transform_mock = mock.MagicMock(name="MockTransform", spec=T.Augmentation)
- image_mock = mock.MagicMock(name="MockImage")
- random_apply = T.RandomApply(transform_mock, prob=0.001)
-
- with mock.patch.object(random_apply, "_rand_range", return_value=0.9):
- transform = random_apply.get_transform(image_mock)
- transform_mock.get_transform.assert_not_called()
- self.assertIsInstance(transform, T.NoOpTransform)
-
- def test_augmentation_input_args(self):
- input_shape = (100, 100)
- output_shape = (50, 50)
-
- # define two augmentations with different args
- class TG1(T.Augmentation):
- def get_transform(self, image, sem_seg):
- return T.ResizeTransform(
- input_shape[0], input_shape[1], output_shape[0], output_shape[1]
- )
-
- class TG2(T.Augmentation):
- def get_transform(self, image):
- assert image.shape[:2] == output_shape # check that TG1 is applied
- return T.HFlipTransform(output_shape[1])
-
- image = np.random.rand(*input_shape).astype("float32")
- sem_seg = (np.random.rand(*input_shape) < 0.5).astype("uint8")
- inputs = T.AugInput(image, sem_seg=sem_seg) # provide two args
- tfms = inputs.apply_augmentations([TG1(), TG2()])
- self.assertIsInstance(tfms[0], T.ResizeTransform)
- self.assertIsInstance(tfms[1], T.HFlipTransform)
- self.assertTrue(inputs.image.shape[:2] == output_shape)
- self.assertTrue(inputs.sem_seg.shape[:2] == output_shape)
-
- class TG3(T.Augmentation):
- def get_transform(self, image, nonexist):
- pass
-
- with self.assertRaises(AttributeError):
- inputs.apply_augmentations([TG3()])
-
- def test_augmentation_list(self):
- input_shape = (100, 100)
- image = np.random.rand(*input_shape).astype("float32")
- sem_seg = (np.random.rand(*input_shape) < 0.5).astype("uint8")
- inputs = T.AugInput(image, sem_seg=sem_seg) # provide two args
-
- augs = T.AugmentationList([T.RandomFlip(), T.Resize(20)])
- _ = T.AugmentationList([augs, T.Resize(30)])(inputs)
- # 3 in latest fvcore (flattened transformlist), 2 in older
- # self.assertEqual(len(tfms), 3)
-
- def test_color_transforms(self):
- rand_img = np.random.random((100, 100, 3)) * 255
- rand_img = rand_img.astype("uint8")
-
- # Test no-op
- noop_transform = T.ColorTransform(lambda img: img)
- self.assertTrue(np.array_equal(rand_img, noop_transform.apply_image(rand_img)))
-
- # Test a ImageOps operation
- magnitude = np.random.randint(0, 256)
- solarize_transform = T.PILColorTransform(lambda img: ImageOps.solarize(img, magnitude))
- expected_img = ImageOps.solarize(Image.fromarray(rand_img), magnitude)
- self.assertTrue(np.array_equal(expected_img, solarize_transform.apply_image(rand_img)))
-
- def test_resize_transform(self):
- input_shapes = [(100, 100), (100, 100, 1), (100, 100, 3)]
- output_shapes = [(200, 200), (200, 200, 1), (200, 200, 3)]
- for in_shape, out_shape in zip(input_shapes, output_shapes):
- in_img = np.random.randint(0, 255, size=in_shape, dtype=np.uint8)
- tfm = T.ResizeTransform(in_shape[0], in_shape[1], out_shape[0], out_shape[1])
- out_img = tfm.apply_image(in_img)
- self.assertEqual(out_img.shape, out_shape)
-
- def test_resize_shorted_edge_scriptable(self):
- def f(image):
- newh, neww = T.ResizeShortestEdge.get_output_shape(
- image.shape[-2], image.shape[-1], 80, 133
- )
- return F.interpolate(image.unsqueeze(0), size=(newh, neww))
-
- input = torch.randn(3, 10, 10)
- script_f = torch.jit.script(f)
- self.assertTrue(torch.allclose(f(input), script_f(input)))
-
- # generalize to new shapes
- input = torch.randn(3, 8, 100)
- self.assertTrue(torch.allclose(f(input), script_f(input)))
-
- def test_extent_transform(self):
- input_shapes = [(100, 100), (100, 100, 1), (100, 100, 3)]
- src_rect = (20, 20, 80, 80)
- output_shapes = [(200, 200), (200, 200, 1), (200, 200, 3)]
- for in_shape, out_shape in zip(input_shapes, output_shapes):
- in_img = np.random.randint(0, 255, size=in_shape, dtype=np.uint8)
- tfm = T.ExtentTransform(src_rect, out_shape[:2])
- out_img = tfm.apply_image(in_img)
- self.assertTrue(out_img.shape == out_shape)
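
Underneath the resize tests above is plain coordinate scaling: going from an `(h, w)` image to `(new_h, new_w)` multiplies x-coordinates by `new_w / w` and y-coordinates by `new_h / h`. A numpy-only sketch of that mapping for axis-aligned XYXY boxes (no detectron2 required; the numbers are illustrative):

```python
import numpy as np

def resize_boxes(boxes: np.ndarray, old_hw, new_hw) -> np.ndarray:
    """Scale N x 4 XYXY boxes from an (h, w) image onto a (new_h, new_w) image."""
    (h, w), (new_h, new_w) = old_hw, new_hw
    sx, sy = new_w / w, new_h / h
    return boxes * np.array([sx, sy, sx, sy], dtype=np.float64)

boxes = np.array([[91.0, 46.0, 144.0, 111.0]])
print(resize_boxes(boxes, (400, 200), (800, 800)))  # x scaled by 4, y scaled by 2
```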
diff --git a/spaces/Bart92/RVC_HF/infer/modules/uvr5/preprocess.py b/spaces/Bart92/RVC_HF/infer/modules/uvr5/preprocess.py
deleted file mode 100644
index 19f11110ea822eeb140fb885c600536290a1adff..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/modules/uvr5/preprocess.py
+++ /dev/null
@@ -1,346 +0,0 @@
-import os
-import logging
-
-logger = logging.getLogger(__name__)
-
-import librosa
-import numpy as np
-import soundfile as sf
-import torch
-
-from infer.lib.uvr5_pack.lib_v5 import nets_61968KB as Nets
-from infer.lib.uvr5_pack.lib_v5 import spec_utils
-from infer.lib.uvr5_pack.lib_v5.model_param_init import ModelParameters
-from infer.lib.uvr5_pack.lib_v5.nets_new import CascadedNet
-from infer.lib.uvr5_pack.utils import inference
-
-
-class AudioPre:
- def __init__(self, agg, model_path, device, is_half):
- self.model_path = model_path
- self.device = device
- self.data = {
- # Processing Options
- "postprocess": False,
- "tta": False,
- # Constants
- "window_size": 512,
- "agg": agg,
- "high_end_process": "mirroring",
- }
- mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v2.json")
- model = Nets.CascadedASPPNet(mp.param["bins"] * 2)
- cpk = torch.load(model_path, map_location="cpu")
- model.load_state_dict(cpk)
- model.eval()
- if is_half:
- model = model.half().to(device)
- else:
- model = model.to(device)
-
- self.mp = mp
- self.model = model
-
- def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"):
- if ins_root is None and vocal_root is None:
- return "No save root."
- name = os.path.basename(music_file)
- if ins_root is not None:
- os.makedirs(ins_root, exist_ok=True)
- if vocal_root is not None:
- os.makedirs(vocal_root, exist_ok=True)
- X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
- bands_n = len(self.mp.param["band"])
- # print(bands_n)
- for d in range(bands_n, 0, -1):
- bp = self.mp.param["band"][d]
- if d == bands_n: # high-end band
- (
- X_wave[d],
- _,
-                ) = librosa.core.load( # in theory librosa can misread some audio files; decoding via ffmpeg would be more robust, but that was deemed too much hassle and dropped
- music_file,
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
- if X_wave[d].ndim == 1:
- X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]])
- else: # lower bands
- X_wave[d] = librosa.core.resample(
- X_wave[d + 1],
- self.mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
- # Stft of wave source
- X_spec_s[d] = spec_utils.wave_to_spectrogram_mt(
- X_wave[d],
- bp["hl"],
- bp["n_fft"],
- self.mp.param["mid_side"],
- self.mp.param["mid_side_b2"],
- self.mp.param["reverse"],
- )
- # pdb.set_trace()
- if d == bands_n and self.data["high_end_process"] != "none":
- input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + (
- self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"]
- )
- input_high_end = X_spec_s[d][
- :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, :
- ]
-
- X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp)
-        aggressive_set = float(self.data["agg"] / 100)
-        aggressiveness = {
-            "value": aggressive_set,
- "split_bin": self.mp.param["band"][1]["crop_stop"],
- }
- with torch.no_grad():
- pred, X_mag, X_phase = inference(
- X_spec_m, self.device, self.model, aggressiveness, self.data
- )
- # Postprocess
- if self.data["postprocess"]:
- pred_inv = np.clip(X_mag - pred, 0, np.inf)
- pred = spec_utils.mask_silence(pred, pred_inv)
- y_spec_m = pred * X_phase
- v_spec_m = X_spec_m - y_spec_m
-
- if ins_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], y_spec_m, input_high_end, self.mp
- )
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(
- y_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
- logger.info("%s instruments done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- ins_root,
- "instrument_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- ) #
- else:
- path = os.path.join(
- ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
- if vocal_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], v_spec_m, input_high_end, self.mp
- )
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(
- v_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
- logger.info("%s vocals done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- vocal_root,
- "vocal_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- else:
- path = os.path.join(
- vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
-
-
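
Both separation classes finish by scaling the reconstructed float waveform by 32768, casting to int16, and writing it with soundfile. A minimal sketch of that write step; the clip is an extra safety guard not present above, and the file name and sample rate are illustrative:

```python
import numpy as np
import soundfile as sf

def write_int16(path: str, wav: np.ndarray, sr: int) -> None:
    """Convert a float waveform (roughly in [-1, 1]) to int16 PCM and write it."""
    pcm = np.clip(np.asarray(wav) * 32768, -32768, 32767).astype("int16")
    sf.write(path, pcm, sr)

if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 1, sr, endpoint=False)
    tone = 0.1 * np.sin(2 * np.pi * 440 * t)   # 1 s, 440 Hz test tone
    write_int16("instrument_test.wav", tone, sr)
```
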
-class AudioPreDeEcho:
- def __init__(self, agg, model_path, device, is_half):
- self.model_path = model_path
- self.device = device
- self.data = {
- # Processing Options
- "postprocess": False,
- "tta": False,
- # Constants
- "window_size": 512,
- "agg": agg,
- "high_end_process": "mirroring",
- }
- mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v3.json")
- nout = 64 if "DeReverb" in model_path else 48
- model = CascadedNet(mp.param["bins"] * 2, nout)
- cpk = torch.load(model_path, map_location="cpu")
- model.load_state_dict(cpk)
- model.eval()
- if is_half:
- model = model.half().to(device)
- else:
- model = model.to(device)
-
- self.mp = mp
- self.model = model
-
- def _path_audio_(
- self, music_file, vocal_root=None, ins_root=None, format="flac"
-    ):  # for these 3 VR models the vocal and instrumental outputs are swapped
- if ins_root is None and vocal_root is None:
- return "No save root."
- name = os.path.basename(music_file)
- if ins_root is not None:
- os.makedirs(ins_root, exist_ok=True)
- if vocal_root is not None:
- os.makedirs(vocal_root, exist_ok=True)
- X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
- bands_n = len(self.mp.param["band"])
- # print(bands_n)
- for d in range(bands_n, 0, -1):
- bp = self.mp.param["band"][d]
- if d == bands_n: # high-end band
- (
- X_wave[d],
- _,
-                ) = librosa.core.load(  # in theory librosa may misread some audio; reading via ffmpeg would be more robust, but it is too much hassle, so this is left as-is
- music_file,
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
- if X_wave[d].ndim == 1:
- X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]])
- else: # lower bands
- X_wave[d] = librosa.core.resample(
- X_wave[d + 1],
- self.mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
- # Stft of wave source
- X_spec_s[d] = spec_utils.wave_to_spectrogram_mt(
- X_wave[d],
- bp["hl"],
- bp["n_fft"],
- self.mp.param["mid_side"],
- self.mp.param["mid_side_b2"],
- self.mp.param["reverse"],
- )
- # pdb.set_trace()
- if d == bands_n and self.data["high_end_process"] != "none":
- input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + (
- self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"]
- )
- input_high_end = X_spec_s[d][
- :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, :
- ]
-
- X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp)
-        aggressive_set = float(self.data["agg"] / 100)
-        aggressiveness = {
-            "value": aggressive_set,
- "split_bin": self.mp.param["band"][1]["crop_stop"],
- }
- with torch.no_grad():
- pred, X_mag, X_phase = inference(
- X_spec_m, self.device, self.model, aggressiveness, self.data
- )
- # Postprocess
- if self.data["postprocess"]:
- pred_inv = np.clip(X_mag - pred, 0, np.inf)
- pred = spec_utils.mask_silence(pred, pred_inv)
- y_spec_m = pred * X_phase
- v_spec_m = X_spec_m - y_spec_m
-
- if ins_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], y_spec_m, input_high_end, self.mp
- )
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(
- y_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
- logger.info("%s instruments done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- ins_root,
- "instrument_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- ) #
- else:
- path = os.path.join(
- ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
- if vocal_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], v_spec_m, input_high_end, self.mp
- )
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(
- v_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
- logger.info("%s vocals done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- vocal_root,
- "vocal_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- else:
- path = os.path.join(
- vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
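For orientation, here is a minimal usage sketch of the class above (not part of the deleted file). The checkpoint path, output folders and aggressiveness value are illustrative placeholders, and the class is assumed to be in scope from this module.

```python
# Minimal sketch: run the de-echo/de-reverb model on one file.
# The model path and output directories below are placeholders.
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
pre = AudioPreDeEcho(
    agg=10,  # 0-100 UI value, scaled internally to 0-1
    model_path="assets/uvr5_weights/VR-DeEchoDeReverb.pth",  # placeholder checkpoint path
    device=device,
    is_half=False,
)
# Instrument output goes to ins_root, vocals to vocal_root; note the source comment
# that for these VR models the two outputs are effectively swapped.
pre._path_audio_(
    "input/song.wav",
    vocal_root="output/vocals",
    ins_root="output/instrumental",
    format="flac",
)
```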
diff --git a/spaces/Benson/text-generation/Examples/Descarga C.md b/spaces/Benson/text-generation/Examples/Descarga C.md
deleted file mode 100644
index e9fdbc80e085176ab72680160addf28862a260d6..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descarga C.md
+++ /dev/null
@@ -1,91 +0,0 @@
How to Download and Install C++ on Windows

C++ is a popular programming language that evolved from C and added object-oriented, generic, and functional features. It is designed for systems programming, embedded software, and large systems, with performance, efficiency, and flexibility as its goals. C++ supports object-oriented programming, which helps you modularize and maintain a program efficiently, and it also offers namespaces, operator overloading, error and exception handling, and a concepts library.
If you want to learn C++ or use it in your projects, you need a C++ compiler and an integrated development environment (IDE) installed on your computer. In this article, we show you how to download and install C++ on Windows using Visual Studio, one of the most popular IDEs for C++ development. Visual Studio provides a complete set of tools for creating, debugging, testing, and deploying C++ applications.
The first step is to download the Visual Studio installer from the Microsoft website. The installer is a lightweight application that lets you choose and install the features you need for Visual Studio.

To download the installer, go to the Visual Studio download page and select the edition of Visual Studio you want. You can choose between the Community, Professional, and Enterprise editions. For this tutorial we will use the Community edition, which is free for students, open-source contributors, and individual developers.

Double-click the downloaded bootstrapper file to run it. If you get a User Account Control prompt, choose Yes to allow it. You will be asked to accept the Microsoft License Terms and the Microsoft Privacy Statement. Choose Continue.

Step 2: Choose workloads for C++ development

The installer presents a list of workloads, which are groups of related options for specific development areas. Support for C++ is now part of optional workloads that are not installed by default.

For C++ development, select the Desktop development with C++ workload. This workload includes features such as:

- The MSVC compiler toolset
- The Windows SDK
- CMake tools
- Testing tools
- Debugging tools
- Code analysis tools
- The Standard Template Library (STL)
- The Boost library
- The Google Test framework
- The MFC library
- The ATL library

To select Desktop development with C++, check the box next to it. You can also expand the workload to see optional components you can install or deselect. For example, you can choose to install support for Linux development with C++ or the Windows 10 SDK (10.0.19041.0).

After selecting the workload and components you want, click the Install button in the lower-right corner of the installer. The installer shows you the progress and status of the installation. Depending on your Internet speed and machine configuration, this may take some time.

Step 3: Install and launch Visual Studio

When the installation completes, you will see a message that says "Installation succeeded!" You can now launch Visual Studio by clicking the Launch button in the installer or by searching for it in the Start menu.

The first time you run it, you may be asked to sign in with a Microsoft account. After signing in, you will be asked to choose a color theme and a development settings profile. You can choose between the Light, Dark, and Blue themes and between General, C#, C++, Python, and Web development settings. For this tutorial we will choose the Dark theme and the C++ development settings.

Visual Studio opens and shows a start page with several options. To create a new project, click the Create a new project button.

Step 4: Write and run a simple C++ program

To write and run a simple C++ program, you need to create a project that contains your source code files and other resources. A project also specifies how to build and run your program using various tools and settings.

To create a new project, follow these steps:

- In the Create a new project window, search for "C++" in the search box and select "Console App" from the list of templates. Click Next.
- In the Configure your new project window, enter a name for your project (such as HelloWorld) and choose a location to save it. You can also change other options such as the solution name, target platform, and language standard. Click Create.
- Visual Studio creates a new project and opens it in the main window. You will see a Solution Explorer pane on the right showing the files and folders in your project, and an editor pane showing the source code of your main.cpp file.

The main.cpp file contains a simple C++ program that prints "Hello World!" to the console.
To build and run your program, follow these steps:

- Click the Build menu and select Build Solution (or press Ctrl+Shift+B). This compiles your source code into an executable using the MSVC compiler toolset.
- Run the program, for example with Debug > Start Without Debugging (Ctrl+F5).

You should see a message that says "Hello World!" in the console window. Press any key to close it.

Conclusion

In this article, we showed you how to download and install C++ on Windows using Visual Studio, and how to create, build, and run a simple C++ program using Visual Studio's tools.

C++ is a powerful and versatile programming language that can be used for many purposes. If you want to learn more about C++, check out some of these resources:

- C++ Tutorial: a comprehensive beginner tutorial at cplusplus.com.
- MinGW: a minimalist GNU for Windows that provides GCC (GNU Compiler Collection) compilers for C and C++.
- Cygwin: a Linux-like environment for Windows that provides GCC compilers for C and C++.
- GCC: a free and open-source compiler for C and C++ that is widely used on Linux and other Unix-like systems.
- Xcode: a free IDE for macOS that supports C and C++ development using Clang.
- Eclipse: a free and open-source IDE that supports multiple languages, including C and C++, and multiple platforms, including Linux, macOS, and Windows.

What are some of the new features in C++20?

C++20 is the most recent major revision of the C++ standard, published in 2020. It introduces many new language features and improvements, such as:

- Modules: a new way to organize code into units that can be imported and exported.
- Concepts: a way to specify constraints on template parameters using predicates.
- Ranges: a library that provides views and algorithms for working with sequences of elements.
- Coroutines: a way to write asynchronous code using suspendable functions.
- Contracts: a way to express preconditions, postconditions, and assertions for functions (proposed for C++20 but ultimately deferred to a later standard).
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk Mod De Netflix.md b/spaces/Benson/text-generation/Examples/Descargar Apk Mod De Netflix.md
deleted file mode 100644
index e86b6ed682f816e158f4e07ea8cb1fc225a40960..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apk Mod De Netflix.md
+++ /dev/null
@@ -1,154 +0,0 @@
Download Netflix Mod APK: How to Watch Premium Content for Free

Netflix is one of the most popular streaming platforms in the world, offering a wide range of movies, TV shows, documentaries, and original content. However, not everyone can afford a Netflix subscription or access all of the content available in different regions. That is why some people look for ways to download Netflix mod APKs, modified versions of the official app that let users watch premium content for free. But how do you download and install a Netflix mod APK on your device? And what are the risks and benefits of using such an app? In this article we answer these questions and more, so you can enjoy unlimited streaming without breaking the bank.

What is Netflix and why is it so popular?

Netflix is an American company that provides online streaming services for various types of media, such as movies, TV shows, documentaries, anime, and original productions. Netflix was founded in 1997 as a DVD rental service and expanded into online streaming in 2007. Since then it has grown into one of the largest and most influential entertainment companies in the world, with more than 200 million subscribers in over 190 countries.
Some of the features and benefits that make Netflix so popular are:

- It offers a huge content library, with thousands of titles across different genres, languages, and categories.
- It produces high-quality original content, such as Stranger Things, The Crown, Black Mirror, The Witcher, and many more.
- It lets users download content for offline viewing on their devices.
- It works on many devices, such as smartphones, tablets, laptops, smart TVs, game consoles, and streaming devices.
- It lets users create multiple profiles within one account, each with its own settings and preferences.
- It offers several features that improve the user experience, such as subtitles, closed captions, audio descriptions, parental controls, skip intro, playback speed, and so on.

Netflix subscription plans and pricing

To access Netflix content, users must sign up for a subscription plan that suits their needs and budget. Netflix offers four plans: Basic, Standard, Premium, and Ultra. The main differences between them are the number of screens that can be used at the same time, the video quality (SD, HD, or 4K), and the availability of HDR and Dolby Vision. The table below shows the details of each plan:

| Plan | Screens | Quality | HDR/Dolby Vision | Price (USD) |
| --- | --- | --- | --- | --- |
| Basic | 1 | SD | No | $8.99/month |
| Standard | 2 | HD | No | $13.99/month |
| Premium | 4 | 4K | Yes | $17.99/month |
| Ultra | 4 | 4K+ | Yes | $19.99/month |

Note that prices may vary depending on the user's country and region. Users can also opt for a free trial for a limited time before committing to a plan.

What is a mod APK and why would you need one?

A mod APK is a modified version of an Android application package (APK) file, the format used to distribute and install apps on Android devices. A mod APK can have different features and functions than the original app, such as removing ads, unlocking premium content, adding extra options, or improving performance. A mod APK is usually created by third-party developers or hackers who modify the original app's source code.

Mod APK definition and advantages

Some of the advantages of using a mod APK are:

- It can provide access to premium content or features that are restricted or paid in the original app.
- It can improve the user experience by removing annoying ads, improving graphics, increasing speed, and so on.
- It can let users customize the app to their preferences and needs.
- It can bypass the regional restrictions and geo-blocking that may limit the availability of some content or services in certain areas.

Risks and challenges of using mod APKs

However, using a mod APK also comes with risks and challenges, such as:

- It can expose the device and the user's data to malware, viruses, spyware, or other malicious attacks that can damage the device or compromise the user's privacy and security.
- It can violate the terms and conditions of the original app and its developer, which may result in legal action or penalties.
- It can cause compatibility problems or errors with the device or other apps, affecting their functionality or performance.
- It can be outdated or unreliable, since it may not receive regular updates or support from the original developer or the modder.

Therefore, users should be careful and cautious when downloading and installing mod APKs, and only use them from trusted and reputable sources.

How to download and install a Netflix mod APK on your device

If you want to download and install a Netflix mod APK on your device, you will need to follow these steps:

Step 1: Find a reliable source for the mod APK file

The first step is to find a trustworthy site that offers the Netflix mod APK file, and check user feedback before downloading anything.

Step 2: Enable unknown sources in your device settings

The next step is to enable unknown sources in your device settings, which allows you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then Security or Privacy, and turn on the option for unknown sources. You may also need to grant your browser or file manager permission to install apps from unknown sources.

Step 3: Download and install the mod APK file

The third step is to download and install the mod APK file on your device. Click the download link or button on the source you have chosen and wait for the file to download. Once it has downloaded, open it with your file manager or browser and tap Install. You may need to accept some permissions or warnings before installing the app.

Step 4: Launch the app and enjoy unlimited streaming

The final step is to launch the app and enjoy unlimited streaming of Netflix content for free. Open the app from your app drawer or home screen, then sign in with your email or Facebook account, or create a new account if you do not have one. You can then browse the categories and genres available on Netflix or search for specific titles you want to watch. You can also adjust some settings and preferences within the app, such as video quality, language, and subtitles.

Comparison of the Netflix mod APK and the official Netflix app

Feature comparison

A Netflix mod APK promises the official catalog with premium features unlocked for free. However, it also has drawbacks and limitations compared with the official Netflix app. For example, a mod APK may not have all the features and functions the official app has, such as downloading content for offline viewing, creating multiple profiles, or getting personalized recommendations. It may also have bugs or errors that affect its performance or functionality. In addition, it may not be updated or supported regularly, which can cause compatibility problems or security risks.

Pros and cons comparison

To sum up, here are some of the pros and cons of using a Netflix mod APK versus the official Netflix app:

Netflix mod APK
- Pros: free access to premium content; no subscription fees or plans; no regional restrictions or geo-blocking; 4K quality with HDR and Dolby Vision support; no ads or pop-ups.
- Cons: potential malware or virus attacks; possible legal action or penalties; compatibility problems or bugs; missing features or functions; outdated or unreliable app.

Official Netflix app
- Pros: safe and secure app; regular updates and support; downloading content for offline viewing; multiple profiles; personalized recommendations.
- Cons: subscription fees or plans required; regional restrictions and geo-blocking enforced; video quality depends on your device and plan; ads or pop-ups may appear; limited content availability in some areas.

Conclusion and FAQs

In short, a Netflix mod APK can unlock premium content at no cost, but it comes with real legal, security, and reliability trade-offs, so weigh them carefully before using one.
If you have any questions about Netflix mod APKs, you may find the answers in the following FAQs:

Q: Is a Netflix mod APK legal?

A: No. A Netflix mod APK is not legal, since it violates the terms and conditions of the original app and its developer. Using one may result in legal action or penalties from Netflix or other authorities.

Q: Is a Netflix mod APK safe?

A: Not necessarily. A Netflix mod APK may expose your device and data to malware, viruses, spyware, or other malicious attacks that can damage the device or compromise your privacy and security. Always scan the mod APK file with antivirus software before installing it on your device.

Q: How do I update a Netflix mod APK?

A: To update a Netflix mod APK, you need to find a newer version of the mod APK file from a reliable source, then download and install it on your device. You should also uninstall the old version of the mod APK before installing the new one.

Q: How do I uninstall a Netflix mod APK?

A: To uninstall a Netflix mod APK, go to your device settings, then Apps or Applications, find and select the Netflix mod APK app, and tap Uninstall. You should also delete any residual files or folders related to the app from your device storage.

Q: Can I use a Netflix mod APK on other devices?

A: Yes, you can use a Netflix mod APK on other devices that run the Android operating system, such as smartphones, tablets, laptops, smart TVs, game consoles, and streaming devices. However, make sure the mod APK file is compatible with your device model and version before installing it.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Ekelebe De J Martins.md b/spaces/Benson/text-generation/Examples/Descargar Ekelebe De J Martins.md
deleted file mode 100644
index 9776fbf0eeba70b149a600c7989fd34e8e241c76..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Ekelebe De J Martins.md
+++ /dev/null
@@ -1,64 +0,0 @@
How to Download Music by Tommy J Pisa

If you are a fan of Indonesian pop and dangdut music, you may have heard of Tommy J Pisa, a singer who rose to fame in the 1980s and 1990s. He is known for his melodious voice and romantic songs, such as "Dibatas Kota Ini", "Surat Untuk Kekasih", and "Biarkan Aku Menangis". His music has touched the hearts of many listeners and has become part of Indonesia's musical heritage.
But how can you download Tommy J Pisa's music and enjoy it on your devices? In this article, we show you three ways to do it legally and ethically, without violating copyright law or harming the artist. We also answer some frequently asked questions about Tommy J Pisa and his music.

Option 1: Buy his albums or songs from online music stores

The simplest way to download Tommy J Pisa's music is to buy his albums or songs from online music stores, such as iTunes, Amazon, or Google Play. By doing this you support the artist financially and get high-quality MP3 files you can play on any device. You also get access to the album artwork, lyrics, and other information.

To buy Tommy J Pisa's music online, you will need a credit card or a digital wallet such as PayPal. You will also need to create an account on the online music store of your choice and download its app or software. Once you have done that, you can browse its catalog and search for Tommy J Pisa's albums or songs. You can preview the songs before buying them, then click the purchase button to complete the transaction. The songs are downloaded to your device or cloud storage and you can listen to them at any time.

Option 2: Stream his music from online platforms that allow offline listening

Another option is to stream his music from online platforms that let subscribers save songs for offline listening.
To stream Tommy J Pisa's music online, you will need an Internet connection and a subscription to the platform of your choice. Some platforms offer free trials or ad-supported plans, while others require a monthly or yearly fee. You will also need to download the platform's app or software and create an account. Once you have done that, you can search for Tommy J Pisa's music and add it to your library or a playlist. You can then listen to it online or download it for offline listening by toggling the download button. The songs are stored on your device or cloud storage and you can listen to them at any time.

Option 3: Download his music from free, legal websites that offer his songs with his permission

The third way to download Tommy J Pisa's music is from free, legal websites that offer his songs with his permission. These websites are usually run by fans or independent labels that have obtained the rights to distribute his music for free. They may also offer other content related to Tommy J Pisa, such as videos, photos, or news.

To download Tommy J Pisa's music from these websites, you will need an Internet connection and a web browser. You will also need to find these websites by searching online or following links from social media or other sources. Some examples of these websites are:

| Website | Description |
| --- | --- |
| Akurama Records | A Jakarta-based record label that has uploaded several Tommy J Pisa albums to YouTube. You can listen to them online or download them as MP3 files using a YouTube download tool. |
| Tommy J Pisa Fans Club | A website dedicated to Tommy J Pisa with a collection of his songs, videos, photos, and news. You can listen to his songs online or download them as MP3 files by clicking the download link. |

However, be careful when downloading music from these websites, as some of them may contain viruses, malware, or spyware that can damage your device or compromise your privacy. You should also respect the artist's wishes: do not share his music without his permission or use it for commercial purposes.

Conclusion

In conclusion, there are three ways to download Tommy J Pisa's music legally and ethically: buy his albums or songs from online music stores, stream his music from online platforms that allow offline listening, and download his music from free, legal websites that offer his songs with his permission. By doing so, you can enjoy his music on your devices and appreciate his talent and contribution to the Indonesian music scene.

Here are some tips on how to enjoy his music:

- Create a playlist of your favorite Tommy J Pisa songs and listen to it whenever you want.
- Share his music with your friends and family and introduce them to his style and genre.
- Watch his videos on YouTube or other platforms and see how he performs live or in the studio.
- Follow him on social media or other channels and stay up to date on his latest news and activities.
- Support him by attending his concerts or events if possible and show him your love and appreciation.

FAQs

Who is Tommy J Pisa?

Tommy J Pisa is an Indonesian singer specializing in pop and dangdut music. He was born in Jakarta on 22 December 1960. He started his career as a street singer and later joined several bands before going solo. He has released more than 20 albums and has won several awards and honors for his music.

What is dangdut?

Dangdut is a genre of Indonesian popular music that blends Malay, Indian, and Arabic influences.

What are some of Tommy J Pisa's most popular songs?

Some of Tommy J Pisa's most popular songs are:

- "Dibatas Kota Ini" (At the Edge of This City), a song about a long-distance relationship that ends in tragedy.
- "Surat Untuk Kekasih" (Letter for a Lover), a song about a man writing a letter to the lover who has left him for another man.
- "Biarkan Aku Menangis" (Let Me Cry), a song about a man expressing his sadness and regret after losing his lover.
- "Disini Dibatas Kota Ini" (Here at the Edge of This City), a sequel to "Dibatas Kota Ini" that tells the story of the lover returning to the city after years of separation.
- "Nasib Pengamen" (The Fate of Street Singers), a song that reflects Tommy J Pisa's own experience as a street singer struggling to make ends meet.

Where can I find more information about Tommy J Pisa?

You can find more information about Tommy J Pisa from the following sources:

- His official website, where you can find his biography, discography, gallery, news, and contact details.
- His Facebook page, where you can follow him and see his posts, photos, videos, and events.
- His Instagram account, where you can follow him and see his stories, photos, videos, and live streams.
- His YouTube channel, where you can subscribe and watch his videos, interviews, and live performances.
- His Wikipedia page, where you can find a summary of his life, career, awards, and discography.

How can I contact Tommy J Pisa?

If you want to contact Tommy J Pisa for any reason, such as booking him for a show, collaborating with him, or sending him fan mail, you can use the following methods:

- Email: tommyjpisa@gmail.com
- Phone: +62 812 3456 7890
- Address: Jl. Raya Bogor No. 123, Jakarta Timur, Indonesia
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/buildPrompt.ts b/spaces/BetterAPI/BetterChat_new/src/lib/buildPrompt.ts
deleted file mode 100644
index bd48390ba13ed8e68b790cfd475c32f5824d907d..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/buildPrompt.ts
+++ /dev/null
@@ -1,33 +0,0 @@
-import {
- PUBLIC_ASSISTANT_MESSAGE_TOKEN,
- PUBLIC_MAX_INPUT_TOKENS,
- PUBLIC_PREPROMPT,
- PUBLIC_SEP_TOKEN,
- PUBLIC_USER_MESSAGE_TOKEN,
-} from "$env/static/public";
-import type { Message } from "./types/Message";
-
-/**
- * Convert [{from: "assistant", content: "hi"}, {from: "user", content: "hello"}] to:
- *
- * <|assistant|>hi<|endoftext|><|prompter|>hello<|endoftext|><|assistant|>
- */
-export function buildPrompt(messages: Message[]): string {
- const prompt =
- messages
- .map(
- (m) =>
- (m.from === "user"
- ? PUBLIC_USER_MESSAGE_TOKEN + m.content
- : PUBLIC_ASSISTANT_MESSAGE_TOKEN + m.content) +
- (m.content.endsWith(PUBLIC_SEP_TOKEN) ? "" : PUBLIC_SEP_TOKEN)
- )
- .join("") + PUBLIC_ASSISTANT_MESSAGE_TOKEN;
-
- // Not super precise, but it's truncated in the model's backend anyway
- return (
- PUBLIC_PREPROMPT +
- "\n-----\n" +
- prompt.split(" ").slice(-parseInt(PUBLIC_MAX_INPUT_TOKENS)).join(" ")
- );
-}
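To make the concatenation rule in the comment above concrete, here is a small Python sketch of the same logic; the token strings and the limit are made-up placeholders standing in for the PUBLIC_* environment values, not the project's real configuration.

```python
# Illustrative re-implementation of buildPrompt() in Python (placeholder tokens).
USER_TOKEN = "<|prompter|>"         # stands in for PUBLIC_USER_MESSAGE_TOKEN
ASSISTANT_TOKEN = "<|assistant|>"   # stands in for PUBLIC_ASSISTANT_MESSAGE_TOKEN
SEP_TOKEN = "<|endoftext|>"         # stands in for PUBLIC_SEP_TOKEN
MAX_INPUT_TOKENS = 1024             # stands in for PUBLIC_MAX_INPUT_TOKENS (word-based cutoff)
PREPROMPT = "Below is a conversation."  # stands in for PUBLIC_PREPROMPT


def build_prompt(messages):
    """messages: list of {"from": "user" | "assistant", "content": str}."""
    prompt = ""
    for m in messages:
        token = USER_TOKEN if m["from"] == "user" else ASSISTANT_TOKEN
        prompt += token + m["content"]
        if not m["content"].endswith(SEP_TOKEN):
            prompt += SEP_TOKEN
    prompt += ASSISTANT_TOKEN  # the model continues from the assistant token
    # Rough truncation on whitespace-separated words, mirroring the TS version.
    return PREPROMPT + "\n-----\n" + " ".join(prompt.split(" ")[-MAX_INPUT_TOKENS:])


print(build_prompt([{"from": "user", "content": "hello"}]))
```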
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/typing_extensions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/typing_extensions.py
deleted file mode 100644
index 9cbf5b87b590c2d40fd3db2444339df85f71c611..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/typing_extensions.py
+++ /dev/null
@@ -1,2312 +0,0 @@
-import abc
-import collections
-import collections.abc
-import functools
-import inspect
-import operator
-import sys
-import types as _types
-import typing
-import warnings
-
-
-__all__ = [
- # Super-special typing primitives.
- 'Any',
- 'ClassVar',
- 'Concatenate',
- 'Final',
- 'LiteralString',
- 'ParamSpec',
- 'ParamSpecArgs',
- 'ParamSpecKwargs',
- 'Self',
- 'Type',
- 'TypeVar',
- 'TypeVarTuple',
- 'Unpack',
-
- # ABCs (from collections.abc).
- 'Awaitable',
- 'AsyncIterator',
- 'AsyncIterable',
- 'Coroutine',
- 'AsyncGenerator',
- 'AsyncContextManager',
- 'ChainMap',
-
- # Concrete collection types.
- 'ContextManager',
- 'Counter',
- 'Deque',
- 'DefaultDict',
- 'NamedTuple',
- 'OrderedDict',
- 'TypedDict',
-
- # Structural checks, a.k.a. protocols.
- 'SupportsIndex',
-
- # One-off things.
- 'Annotated',
- 'assert_never',
- 'assert_type',
- 'clear_overloads',
- 'dataclass_transform',
- 'deprecated',
- 'get_overloads',
- 'final',
- 'get_args',
- 'get_origin',
- 'get_type_hints',
- 'IntVar',
- 'is_typeddict',
- 'Literal',
- 'NewType',
- 'overload',
- 'override',
- 'Protocol',
- 'reveal_type',
- 'runtime',
- 'runtime_checkable',
- 'Text',
- 'TypeAlias',
- 'TypeGuard',
- 'TYPE_CHECKING',
- 'Never',
- 'NoReturn',
- 'Required',
- 'NotRequired',
-]
-
-# for backward compatibility
-PEP_560 = True
-GenericMeta = type
-
-# The functions below are modified copies of typing internal helpers.
-# They are needed by _ProtocolMeta and they provide support for PEP 646.
-
-_marker = object()
-
-
-def _check_generic(cls, parameters, elen=_marker):
- """Check correct count for parameters of a generic cls (internal helper).
- This gives a nice error message in case of count mismatch.
- """
- if not elen:
- raise TypeError(f"{cls} is not a generic class")
- if elen is _marker:
- if not hasattr(cls, "__parameters__") or not cls.__parameters__:
- raise TypeError(f"{cls} is not a generic class")
- elen = len(cls.__parameters__)
- alen = len(parameters)
- if alen != elen:
- if hasattr(cls, "__parameters__"):
- parameters = [p for p in cls.__parameters__ if not _is_unpack(p)]
- num_tv_tuples = sum(isinstance(p, TypeVarTuple) for p in parameters)
- if (num_tv_tuples > 0) and (alen >= elen - num_tv_tuples):
- return
- raise TypeError(f"Too {'many' if alen > elen else 'few'} parameters for {cls};"
- f" actual {alen}, expected {elen}")
-
-
-if sys.version_info >= (3, 10):
- def _should_collect_from_parameters(t):
- return isinstance(
- t, (typing._GenericAlias, _types.GenericAlias, _types.UnionType)
- )
-elif sys.version_info >= (3, 9):
- def _should_collect_from_parameters(t):
- return isinstance(t, (typing._GenericAlias, _types.GenericAlias))
-else:
- def _should_collect_from_parameters(t):
- return isinstance(t, typing._GenericAlias) and not t._special
-
-
-def _collect_type_vars(types, typevar_types=None):
- """Collect all type variable contained in types in order of
- first appearance (lexicographic order). For example::
-
- _collect_type_vars((T, List[S, T])) == (T, S)
- """
- if typevar_types is None:
- typevar_types = typing.TypeVar
- tvars = []
- for t in types:
- if (
- isinstance(t, typevar_types) and
- t not in tvars and
- not _is_unpack(t)
- ):
- tvars.append(t)
- if _should_collect_from_parameters(t):
- tvars.extend([t for t in t.__parameters__ if t not in tvars])
- return tuple(tvars)
-
-
-NoReturn = typing.NoReturn
-
-# Some unconstrained type variables. These are used by the container types.
-# (These are not for export.)
-T = typing.TypeVar('T') # Any type.
-KT = typing.TypeVar('KT') # Key type.
-VT = typing.TypeVar('VT') # Value type.
-T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers.
-T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant.
-
-
-if sys.version_info >= (3, 11):
- from typing import Any
-else:
-
- class _AnyMeta(type):
- def __instancecheck__(self, obj):
- if self is Any:
- raise TypeError("typing_extensions.Any cannot be used with isinstance()")
- return super().__instancecheck__(obj)
-
- def __repr__(self):
- if self is Any:
- return "typing_extensions.Any"
- return super().__repr__()
-
- class Any(metaclass=_AnyMeta):
- """Special type indicating an unconstrained type.
- - Any is compatible with every type.
- - Any assumed to have all methods.
- - All values assumed to be instances of Any.
- Note that all the above statements are true from the point of view of
- static type checkers. At runtime, Any should not be used with instance
- checks.
- """
- def __new__(cls, *args, **kwargs):
- if cls is Any:
- raise TypeError("Any cannot be instantiated")
- return super().__new__(cls, *args, **kwargs)
-
-
-ClassVar = typing.ClassVar
-
-# On older versions of typing there is an internal class named "Final".
-# 3.8+
-if hasattr(typing, 'Final') and sys.version_info[:2] >= (3, 7):
- Final = typing.Final
-# 3.7
-else:
- class _FinalForm(typing._SpecialForm, _root=True):
-
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- item = typing._type_check(parameters,
- f'{self._name} accepts only a single type.')
- return typing._GenericAlias(self, (item,))
-
- Final = _FinalForm('Final',
- doc="""A special typing construct to indicate that a name
- cannot be re-assigned or overridden in a subclass.
- For example:
-
- MAX_SIZE: Final = 9000
- MAX_SIZE += 1 # Error reported by type checker
-
- class Connection:
- TIMEOUT: Final[int] = 10
- class FastConnector(Connection):
- TIMEOUT = 1 # Error reported by type checker
-
- There is no runtime checking of these properties.""")
-
-if sys.version_info >= (3, 11):
- final = typing.final
-else:
- # @final exists in 3.8+, but we backport it for all versions
- # before 3.11 to keep support for the __final__ attribute.
- # See https://bugs.python.org/issue46342
- def final(f):
- """This decorator can be used to indicate to type checkers that
- the decorated method cannot be overridden, and decorated class
- cannot be subclassed. For example:
-
- class Base:
- @final
- def done(self) -> None:
- ...
- class Sub(Base):
- def done(self) -> None: # Error reported by type checker
- ...
- @final
- class Leaf:
- ...
- class Other(Leaf): # Error reported by type checker
- ...
-
- There is no runtime checking of these properties. The decorator
- sets the ``__final__`` attribute to ``True`` on the decorated object
- to allow runtime introspection.
- """
- try:
- f.__final__ = True
- except (AttributeError, TypeError):
- # Skip the attribute silently if it is not writable.
- # AttributeError happens if the object has __slots__ or a
- # read-only property, TypeError if it's a builtin class.
- pass
- return f
-
-
-def IntVar(name):
- return typing.TypeVar(name)
-
-
-# 3.8+:
-if hasattr(typing, 'Literal'):
- Literal = typing.Literal
-# 3.7:
-else:
- class _LiteralForm(typing._SpecialForm, _root=True):
-
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- return typing._GenericAlias(self, parameters)
-
- Literal = _LiteralForm('Literal',
- doc="""A type that can be used to indicate to type checkers
- that the corresponding value has a value literally equivalent
- to the provided parameter. For example:
-
- var: Literal[4] = 4
-
- The type checker understands that 'var' is literally equal to
- the value 4 and no other value.
-
- Literal[...] cannot be subclassed. There is no runtime
- checking verifying that the parameter is actually a value
- instead of a type.""")
-
-
-_overload_dummy = typing._overload_dummy # noqa
-
-
-if hasattr(typing, "get_overloads"): # 3.11+
- overload = typing.overload
- get_overloads = typing.get_overloads
- clear_overloads = typing.clear_overloads
-else:
- # {module: {qualname: {firstlineno: func}}}
- _overload_registry = collections.defaultdict(
- functools.partial(collections.defaultdict, dict)
- )
-
- def overload(func):
- """Decorator for overloaded functions/methods.
-
- In a stub file, place two or more stub definitions for the same
- function in a row, each decorated with @overload. For example:
-
- @overload
- def utf8(value: None) -> None: ...
- @overload
- def utf8(value: bytes) -> bytes: ...
- @overload
- def utf8(value: str) -> bytes: ...
-
- In a non-stub file (i.e. a regular .py file), do the same but
- follow it with an implementation. The implementation should *not*
- be decorated with @overload. For example:
-
- @overload
- def utf8(value: None) -> None: ...
- @overload
- def utf8(value: bytes) -> bytes: ...
- @overload
- def utf8(value: str) -> bytes: ...
- def utf8(value):
- # implementation goes here
-
- The overloads for a function can be retrieved at runtime using the
- get_overloads() function.
- """
- # classmethod and staticmethod
- f = getattr(func, "__func__", func)
- try:
- _overload_registry[f.__module__][f.__qualname__][
- f.__code__.co_firstlineno
- ] = func
- except AttributeError:
- # Not a normal function; ignore.
- pass
- return _overload_dummy
-
- def get_overloads(func):
- """Return all defined overloads for *func* as a sequence."""
- # classmethod and staticmethod
- f = getattr(func, "__func__", func)
- if f.__module__ not in _overload_registry:
- return []
- mod_dict = _overload_registry[f.__module__]
- if f.__qualname__ not in mod_dict:
- return []
- return list(mod_dict[f.__qualname__].values())
-
- def clear_overloads():
- """Clear all overloads in the registry."""
- _overload_registry.clear()
-
-
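As a quick illustration of the registry defined above, the sketch below (not part of the vendored file) shows overloads being registered and read back at runtime; it assumes typing_extensions is importable under that name.

```python
# Sketch: registering overload stubs and retrieving them at runtime.
from typing_extensions import overload, get_overloads


@overload
def norm(x: int) -> int: ...
@overload
def norm(x: str) -> str: ...
def norm(x):  # implementation, not decorated
    return x


# The two stub definitions above were recorded by the decorator.
print(len(get_overloads(norm)))  # 2
```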
-# This is not a real generic class. Don't use outside annotations.
-Type = typing.Type
-
-# Various ABCs mimicking those in collections.abc.
-# A few are simply re-exported for completeness.
-
-
-Awaitable = typing.Awaitable
-Coroutine = typing.Coroutine
-AsyncIterable = typing.AsyncIterable
-AsyncIterator = typing.AsyncIterator
-Deque = typing.Deque
-ContextManager = typing.ContextManager
-AsyncContextManager = typing.AsyncContextManager
-DefaultDict = typing.DefaultDict
-
-# 3.7.2+
-if hasattr(typing, 'OrderedDict'):
- OrderedDict = typing.OrderedDict
-# 3.7.0-3.7.2
-else:
- OrderedDict = typing._alias(collections.OrderedDict, (KT, VT))
-
-Counter = typing.Counter
-ChainMap = typing.ChainMap
-AsyncGenerator = typing.AsyncGenerator
-NewType = typing.NewType
-Text = typing.Text
-TYPE_CHECKING = typing.TYPE_CHECKING
-
-
-_PROTO_WHITELIST = ['Callable', 'Awaitable',
- 'Iterable', 'Iterator', 'AsyncIterable', 'AsyncIterator',
- 'Hashable', 'Sized', 'Container', 'Collection', 'Reversible',
- 'ContextManager', 'AsyncContextManager']
-
-
-def _get_protocol_attrs(cls):
- attrs = set()
- for base in cls.__mro__[:-1]: # without object
- if base.__name__ in ('Protocol', 'Generic'):
- continue
- annotations = getattr(base, '__annotations__', {})
- for attr in list(base.__dict__.keys()) + list(annotations.keys()):
- if (not attr.startswith('_abc_') and attr not in (
- '__abstractmethods__', '__annotations__', '__weakref__',
- '_is_protocol', '_is_runtime_protocol', '__dict__',
- '__args__', '__slots__',
- '__next_in_mro__', '__parameters__', '__origin__',
- '__orig_bases__', '__extra__', '__tree_hash__',
- '__doc__', '__subclasshook__', '__init__', '__new__',
- '__module__', '_MutableMapping__marker', '_gorg')):
- attrs.add(attr)
- return attrs
-
-
-def _is_callable_members_only(cls):
- return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls))
-
-
-def _maybe_adjust_parameters(cls):
- """Helper function used in Protocol.__init_subclass__ and _TypedDictMeta.__new__.
-
- The contents of this function are very similar
- to logic found in typing.Generic.__init_subclass__
- on the CPython main branch.
- """
- tvars = []
- if '__orig_bases__' in cls.__dict__:
- tvars = typing._collect_type_vars(cls.__orig_bases__)
- # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn].
- # If found, tvars must be a subset of it.
- # If not found, tvars is it.
- # Also check for and reject plain Generic,
- # and reject multiple Generic[...] and/or Protocol[...].
- gvars = None
- for base in cls.__orig_bases__:
- if (isinstance(base, typing._GenericAlias) and
- base.__origin__ in (typing.Generic, Protocol)):
- # for error messages
- the_base = base.__origin__.__name__
- if gvars is not None:
- raise TypeError(
- "Cannot inherit from Generic[...]"
- " and/or Protocol[...] multiple types.")
- gvars = base.__parameters__
- if gvars is None:
- gvars = tvars
- else:
- tvarset = set(tvars)
- gvarset = set(gvars)
- if not tvarset <= gvarset:
- s_vars = ', '.join(str(t) for t in tvars if t not in gvarset)
- s_args = ', '.join(str(g) for g in gvars)
- raise TypeError(f"Some type variables ({s_vars}) are"
- f" not listed in {the_base}[{s_args}]")
- tvars = gvars
- cls.__parameters__ = tuple(tvars)
-
-
-# 3.8+
-if hasattr(typing, 'Protocol'):
- Protocol = typing.Protocol
-# 3.7
-else:
-
- def _no_init(self, *args, **kwargs):
- if type(self)._is_protocol:
- raise TypeError('Protocols cannot be instantiated')
-
- class _ProtocolMeta(abc.ABCMeta): # noqa: B024
- # This metaclass is a bit unfortunate and exists only because of the lack
- # of __instancehook__.
- def __instancecheck__(cls, instance):
- # We need this method for situations where attributes are
- # assigned in __init__.
- if ((not getattr(cls, '_is_protocol', False) or
- _is_callable_members_only(cls)) and
- issubclass(instance.__class__, cls)):
- return True
- if cls._is_protocol:
- if all(hasattr(instance, attr) and
- (not callable(getattr(cls, attr, None)) or
- getattr(instance, attr) is not None)
- for attr in _get_protocol_attrs(cls)):
- return True
- return super().__instancecheck__(instance)
-
- class Protocol(metaclass=_ProtocolMeta):
- # There is quite a lot of overlapping code with typing.Generic.
- # Unfortunately it is hard to avoid this while these live in two different
- # modules. The duplicated code will be removed when Protocol is moved to typing.
- """Base class for protocol classes. Protocol classes are defined as::
-
- class Proto(Protocol):
- def meth(self) -> int:
- ...
-
- Such classes are primarily used with static type checkers that recognize
- structural subtyping (static duck-typing), for example::
-
- class C:
- def meth(self) -> int:
- return 0
-
- def func(x: Proto) -> int:
- return x.meth()
-
- func(C()) # Passes static type check
-
- See PEP 544 for details. Protocol classes decorated with
- @typing_extensions.runtime act as simple-minded runtime protocol that checks
- only the presence of given attributes, ignoring their type signatures.
-
- Protocol classes can be generic, they are defined as::
-
- class GenProto(Protocol[T]):
- def meth(self) -> T:
- ...
- """
- __slots__ = ()
- _is_protocol = True
-
- def __new__(cls, *args, **kwds):
- if cls is Protocol:
- raise TypeError("Type Protocol cannot be instantiated; "
- "it can only be used as a base class")
- return super().__new__(cls)
-
- @typing._tp_cache
- def __class_getitem__(cls, params):
- if not isinstance(params, tuple):
- params = (params,)
- if not params and cls is not typing.Tuple:
- raise TypeError(
- f"Parameter list to {cls.__qualname__}[...] cannot be empty")
- msg = "Parameters to generic types must be types."
- params = tuple(typing._type_check(p, msg) for p in params) # noqa
- if cls is Protocol:
- # Generic can only be subscripted with unique type variables.
- if not all(isinstance(p, typing.TypeVar) for p in params):
- i = 0
- while isinstance(params[i], typing.TypeVar):
- i += 1
- raise TypeError(
- "Parameters to Protocol[...] must all be type variables."
- f" Parameter {i + 1} is {params[i]}")
- if len(set(params)) != len(params):
- raise TypeError(
- "Parameters to Protocol[...] must all be unique")
- else:
- # Subscripting a regular Generic subclass.
- _check_generic(cls, params, len(cls.__parameters__))
- return typing._GenericAlias(cls, params)
-
- def __init_subclass__(cls, *args, **kwargs):
- if '__orig_bases__' in cls.__dict__:
- error = typing.Generic in cls.__orig_bases__
- else:
- error = typing.Generic in cls.__bases__
- if error:
- raise TypeError("Cannot inherit from plain Generic")
- _maybe_adjust_parameters(cls)
-
- # Determine if this is a protocol or a concrete subclass.
- if not cls.__dict__.get('_is_protocol', None):
- cls._is_protocol = any(b is Protocol for b in cls.__bases__)
-
- # Set (or override) the protocol subclass hook.
- def _proto_hook(other):
- if not cls.__dict__.get('_is_protocol', None):
- return NotImplemented
- if not getattr(cls, '_is_runtime_protocol', False):
- if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']:
- return NotImplemented
- raise TypeError("Instance and class checks can only be used with"
- " @runtime protocols")
- if not _is_callable_members_only(cls):
- if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']:
- return NotImplemented
- raise TypeError("Protocols with non-method members"
- " don't support issubclass()")
- if not isinstance(other, type):
- # Same error as for issubclass(1, int)
- raise TypeError('issubclass() arg 1 must be a class')
- for attr in _get_protocol_attrs(cls):
- for base in other.__mro__:
- if attr in base.__dict__:
- if base.__dict__[attr] is None:
- return NotImplemented
- break
- annotations = getattr(base, '__annotations__', {})
- if (isinstance(annotations, typing.Mapping) and
- attr in annotations and
- isinstance(other, _ProtocolMeta) and
- other._is_protocol):
- break
- else:
- return NotImplemented
- return True
- if '__subclasshook__' not in cls.__dict__:
- cls.__subclasshook__ = _proto_hook
-
- # We have nothing more to do for non-protocols.
- if not cls._is_protocol:
- return
-
- # Check consistency of bases.
- for base in cls.__bases__:
- if not (base in (object, typing.Generic) or
- base.__module__ == 'collections.abc' and
- base.__name__ in _PROTO_WHITELIST or
- isinstance(base, _ProtocolMeta) and base._is_protocol):
- raise TypeError('Protocols can only inherit from other'
- f' protocols, got {repr(base)}')
- cls.__init__ = _no_init
-
-
-# 3.8+
-if hasattr(typing, 'runtime_checkable'):
- runtime_checkable = typing.runtime_checkable
-# 3.7
-else:
- def runtime_checkable(cls):
- """Mark a protocol class as a runtime protocol, so that it
- can be used with isinstance() and issubclass(). Raise TypeError
- if applied to a non-protocol class.
-
- This allows a simple-minded structural check very similar to the
- one-offs in collections.abc such as Hashable.
- """
- if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol:
- raise TypeError('@runtime_checkable can be only applied to protocol classes,'
- f' got {cls!r}')
- cls._is_runtime_protocol = True
- return cls
-
-
-# Exists for backwards compatibility.
-runtime = runtime_checkable
-
-
-# 3.8+
-if hasattr(typing, 'SupportsIndex'):
- SupportsIndex = typing.SupportsIndex
-# 3.7
-else:
- @runtime_checkable
- class SupportsIndex(Protocol):
- __slots__ = ()
-
- @abc.abstractmethod
- def __index__(self) -> int:
- pass
-
-
-if hasattr(typing, "Required"):
- # The standard library TypedDict in Python 3.8 does not store runtime information
- # about which (if any) keys are optional. See https://bugs.python.org/issue38834
- # The standard library TypedDict in Python 3.9.0/1 does not honour the "total"
- # keyword with old-style TypedDict(). See https://bugs.python.org/issue42059
- # The standard library TypedDict below Python 3.11 does not store runtime
- # information about optional and required keys when using Required or NotRequired.
- # Generic TypedDicts are also impossible using typing.TypedDict on Python <3.11.
- TypedDict = typing.TypedDict
- _TypedDictMeta = typing._TypedDictMeta
- is_typeddict = typing.is_typeddict
-else:
- def _check_fails(cls, other):
- try:
- if sys._getframe(1).f_globals['__name__'] not in ['abc',
- 'functools',
- 'typing']:
- # Typed dicts are only for static structural subtyping.
- raise TypeError('TypedDict does not support instance and class checks')
- except (AttributeError, ValueError):
- pass
- return False
-
- def _dict_new(*args, **kwargs):
- if not args:
- raise TypeError('TypedDict.__new__(): not enough arguments')
- _, args = args[0], args[1:] # allow the "cls" keyword be passed
- return dict(*args, **kwargs)
-
- _dict_new.__text_signature__ = '($cls, _typename, _fields=None, /, **kwargs)'
-
- def _typeddict_new(*args, total=True, **kwargs):
- if not args:
- raise TypeError('TypedDict.__new__(): not enough arguments')
- _, args = args[0], args[1:] # allow the "cls" keyword be passed
- if args:
- typename, args = args[0], args[1:] # allow the "_typename" keyword be passed
- elif '_typename' in kwargs:
- typename = kwargs.pop('_typename')
- import warnings
- warnings.warn("Passing '_typename' as keyword argument is deprecated",
- DeprecationWarning, stacklevel=2)
- else:
- raise TypeError("TypedDict.__new__() missing 1 required positional "
- "argument: '_typename'")
- if args:
- try:
- fields, = args # allow the "_fields" keyword be passed
- except ValueError:
- raise TypeError('TypedDict.__new__() takes from 2 to 3 '
- f'positional arguments but {len(args) + 2} '
- 'were given')
- elif '_fields' in kwargs and len(kwargs) == 1:
- fields = kwargs.pop('_fields')
- import warnings
- warnings.warn("Passing '_fields' as keyword argument is deprecated",
- DeprecationWarning, stacklevel=2)
- else:
- fields = None
-
- if fields is None:
- fields = kwargs
- elif kwargs:
- raise TypeError("TypedDict takes either a dict or keyword arguments,"
- " but not both")
-
- ns = {'__annotations__': dict(fields)}
- try:
- # Setting correct module is necessary to make typed dict classes pickleable.
- ns['__module__'] = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- pass
-
- return _TypedDictMeta(typename, (), ns, total=total)
-
- _typeddict_new.__text_signature__ = ('($cls, _typename, _fields=None,'
- ' /, *, total=True, **kwargs)')
-
- _TAKES_MODULE = "module" in inspect.signature(typing._type_check).parameters
-
- class _TypedDictMeta(type):
- def __init__(cls, name, bases, ns, total=True):
- super().__init__(name, bases, ns)
-
- def __new__(cls, name, bases, ns, total=True):
- # Create new typed dict class object.
- # This method is called directly when TypedDict is subclassed,
- # or via _typeddict_new when TypedDict is instantiated. This way
- # TypedDict supports all three syntaxes described in its docstring.
- # Subclasses and instances of TypedDict return actual dictionaries
- # via _dict_new.
- ns['__new__'] = _typeddict_new if name == 'TypedDict' else _dict_new
- # Don't insert typing.Generic into __bases__ here,
- # or Generic.__init_subclass__ will raise TypeError
- # in the super().__new__() call.
- # Instead, monkey-patch __bases__ onto the class after it's been created.
- tp_dict = super().__new__(cls, name, (dict,), ns)
-
- if any(issubclass(base, typing.Generic) for base in bases):
- tp_dict.__bases__ = (typing.Generic, dict)
- _maybe_adjust_parameters(tp_dict)
-
- annotations = {}
- own_annotations = ns.get('__annotations__', {})
- msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type"
- kwds = {"module": tp_dict.__module__} if _TAKES_MODULE else {}
- own_annotations = {
- n: typing._type_check(tp, msg, **kwds)
- for n, tp in own_annotations.items()
- }
- required_keys = set()
- optional_keys = set()
-
- for base in bases:
- annotations.update(base.__dict__.get('__annotations__', {}))
- required_keys.update(base.__dict__.get('__required_keys__', ()))
- optional_keys.update(base.__dict__.get('__optional_keys__', ()))
-
- annotations.update(own_annotations)
- for annotation_key, annotation_type in own_annotations.items():
- annotation_origin = get_origin(annotation_type)
- if annotation_origin is Annotated:
- annotation_args = get_args(annotation_type)
- if annotation_args:
- annotation_type = annotation_args[0]
- annotation_origin = get_origin(annotation_type)
-
- if annotation_origin is Required:
- required_keys.add(annotation_key)
- elif annotation_origin is NotRequired:
- optional_keys.add(annotation_key)
- elif total:
- required_keys.add(annotation_key)
- else:
- optional_keys.add(annotation_key)
-
- tp_dict.__annotations__ = annotations
- tp_dict.__required_keys__ = frozenset(required_keys)
- tp_dict.__optional_keys__ = frozenset(optional_keys)
- if not hasattr(tp_dict, '__total__'):
- tp_dict.__total__ = total
- return tp_dict
-
- __instancecheck__ = __subclasscheck__ = _check_fails
-
- TypedDict = _TypedDictMeta('TypedDict', (dict,), {})
- TypedDict.__module__ = __name__
- TypedDict.__doc__ = \
- """A simple typed name space. At runtime it is equivalent to a plain dict.
-
- TypedDict creates a dictionary type that expects all of its
- instances to have a certain set of keys, with each key
- associated with a value of a consistent type. This expectation
- is not checked at runtime but is only enforced by type checkers.
- Usage::
-
- class Point2D(TypedDict):
- x: int
- y: int
- label: str
-
- a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK
- b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check
-
- assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first')
-
- The type info can be accessed via the Point2D.__annotations__ dict, and
- the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets.
- TypedDict supports two additional equivalent forms::
-
- Point2D = TypedDict('Point2D', x=int, y=int, label=str)
- Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str})
-
- The class syntax is only supported in Python 3.6+, while the other two
- syntax forms also work for Python 2.7 and 3.2+
- """
-
- if hasattr(typing, "_TypedDictMeta"):
- _TYPEDDICT_TYPES = (typing._TypedDictMeta, _TypedDictMeta)
- else:
- _TYPEDDICT_TYPES = (_TypedDictMeta,)
-
- def is_typeddict(tp):
- """Check if an annotation is a TypedDict class
-
- For example::
- class Film(TypedDict):
- title: str
- year: int
-
- is_typeddict(Film) # => True
- is_typeddict(Union[list, str]) # => False
- """
- return isinstance(tp, tuple(_TYPEDDICT_TYPES))
-
-
-if hasattr(typing, "assert_type"):
- assert_type = typing.assert_type
-
-else:
- def assert_type(__val, __typ):
- """Assert (to the type checker) that the value is of the given type.
-
- When the type checker encounters a call to assert_type(), it
- emits an error if the value is not of the specified type::
-
- def greet(name: str) -> None:
- assert_type(name, str) # ok
- assert_type(name, int) # type checker error
-
- At runtime this returns the first argument unchanged and otherwise
- does nothing.
- """
- return __val
-
-
-if hasattr(typing, "Required"):
- get_type_hints = typing.get_type_hints
-else:
- import functools
- import types
-
- # replaces _strip_annotations()
- def _strip_extras(t):
- """Strips Annotated, Required and NotRequired from a given type."""
- if isinstance(t, _AnnotatedAlias):
- return _strip_extras(t.__origin__)
- if hasattr(t, "__origin__") and t.__origin__ in (Required, NotRequired):
- return _strip_extras(t.__args__[0])
- if isinstance(t, typing._GenericAlias):
- stripped_args = tuple(_strip_extras(a) for a in t.__args__)
- if stripped_args == t.__args__:
- return t
- return t.copy_with(stripped_args)
- if hasattr(types, "GenericAlias") and isinstance(t, types.GenericAlias):
- stripped_args = tuple(_strip_extras(a) for a in t.__args__)
- if stripped_args == t.__args__:
- return t
- return types.GenericAlias(t.__origin__, stripped_args)
- if hasattr(types, "UnionType") and isinstance(t, types.UnionType):
- stripped_args = tuple(_strip_extras(a) for a in t.__args__)
- if stripped_args == t.__args__:
- return t
- return functools.reduce(operator.or_, stripped_args)
-
- return t
-
- def get_type_hints(obj, globalns=None, localns=None, include_extras=False):
- """Return type hints for an object.
-
- This is often the same as obj.__annotations__, but it handles
- forward references encoded as string literals, adds Optional[t] if a
- default value equal to None is set and recursively replaces all
- 'Annotated[T, ...]', 'Required[T]' or 'NotRequired[T]' with 'T'
- (unless 'include_extras=True').
-
- The argument may be a module, class, method, or function. The annotations
- are returned as a dictionary. For classes, annotations include also
- inherited members.
-
- TypeError is raised if the argument is not of a type that can contain
- annotations, and an empty dictionary is returned if no annotations are
- present.
-
- BEWARE -- the behavior of globalns and localns is counterintuitive
- (unless you are familiar with how eval() and exec() work). The
- search order is locals first, then globals.
-
- - If no dict arguments are passed, an attempt is made to use the
- globals from obj (or the respective module's globals for classes),
- and these are also used as the locals. If the object does not appear
- to have globals, an empty dictionary is used.
-
- - If one dict argument is passed, it is used for both globals and
- locals.
-
- - If two dict arguments are passed, they specify globals and
- locals, respectively.
- """
- if hasattr(typing, "Annotated"):
- hint = typing.get_type_hints(
- obj, globalns=globalns, localns=localns, include_extras=True
- )
- else:
- hint = typing.get_type_hints(obj, globalns=globalns, localns=localns)
- if include_extras:
- return hint
- return {k: _strip_extras(t) for k, t in hint.items()}
-
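-# A small usage sketch (illustrative only, hypothetical Movie class and helper
-# name) of the get_type_hints() exported above: Required/NotRequired wrappers
-# are stripped unless include_extras=True. The body is wrapped in a function so
-# that names defined later in this module (Required, NotRequired) resolve
-# lazily, just as _strip_extras() above does.
-def _example_get_type_hints():
-    class Movie(TypedDict, total=False):
-        title: Required[str]
-        year: NotRequired[int]
-
-    plain = get_type_hints(Movie)                        # {'title': str, 'year': int}
-    extras = get_type_hints(Movie, include_extras=True)  # wrappers preserved
-    return plain, extras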
-
-# Python 3.9+ has PEP 593 (Annotated)
-if hasattr(typing, 'Annotated'):
- Annotated = typing.Annotated
- # Not exported and not a public API, but needed for get_origin() and get_args()
- # to work.
- _AnnotatedAlias = typing._AnnotatedAlias
-# 3.7-3.8
-else:
- class _AnnotatedAlias(typing._GenericAlias, _root=True):
- """Runtime representation of an annotated type.
-
- At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't'
- with extra annotations. The alias behaves like a normal typing alias,
- instantiating is the same as instantiating the underlying type, binding
- it to types is also the same.
- """
- def __init__(self, origin, metadata):
- if isinstance(origin, _AnnotatedAlias):
- metadata = origin.__metadata__ + metadata
- origin = origin.__origin__
- super().__init__(origin, origin)
- self.__metadata__ = metadata
-
- def copy_with(self, params):
- assert len(params) == 1
- new_type = params[0]
- return _AnnotatedAlias(new_type, self.__metadata__)
-
- def __repr__(self):
- return (f"typing_extensions.Annotated[{typing._type_repr(self.__origin__)}, "
- f"{', '.join(repr(a) for a in self.__metadata__)}]")
-
- def __reduce__(self):
- return operator.getitem, (
- Annotated, (self.__origin__,) + self.__metadata__
- )
-
- def __eq__(self, other):
- if not isinstance(other, _AnnotatedAlias):
- return NotImplemented
- if self.__origin__ != other.__origin__:
- return False
- return self.__metadata__ == other.__metadata__
-
- def __hash__(self):
- return hash((self.__origin__, self.__metadata__))
-
- class Annotated:
- """Add context specific metadata to a type.
-
- Example: Annotated[int, runtime_check.Unsigned] indicates to the
- hypothetical runtime_check module that this type is an unsigned int.
- Every other consumer of this type can ignore this metadata and treat
- this type as int.
-
- The first argument to Annotated must be a valid type (and will be in
- the __origin__ field), the remaining arguments are kept as a tuple in
- the __extra__ field.
-
- Details:
-
- - It's an error to call `Annotated` with less than two arguments.
- - Nested Annotated are flattened::
-
- Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3]
-
- - Instantiating an annotated type is equivalent to instantiating the
- underlying type::
-
- Annotated[C, Ann1](5) == C(5)
-
- - Annotated can be used as a generic type alias::
-
- Optimized = Annotated[T, runtime.Optimize()]
- Optimized[int] == Annotated[int, runtime.Optimize()]
-
- OptimizedList = Annotated[List[T], runtime.Optimize()]
- OptimizedList[int] == Annotated[List[int], runtime.Optimize()]
- """
-
- __slots__ = ()
-
- def __new__(cls, *args, **kwargs):
- raise TypeError("Type Annotated cannot be instantiated.")
-
- @typing._tp_cache
- def __class_getitem__(cls, params):
- if not isinstance(params, tuple) or len(params) < 2:
- raise TypeError("Annotated[...] should be used "
- "with at least two arguments (a type and an "
- "annotation).")
- allowed_special_forms = (ClassVar, Final)
- if get_origin(params[0]) in allowed_special_forms:
- origin = params[0]
- else:
- msg = "Annotated[t, ...]: t must be a type."
- origin = typing._type_check(params[0], msg)
- metadata = tuple(params[1:])
- return _AnnotatedAlias(origin, metadata)
-
- def __init_subclass__(cls, *args, **kwargs):
- raise TypeError(
- f"Cannot subclass {cls.__module__}.Annotated"
- )
-
-# Python 3.8 has get_origin() and get_args() but those implementations aren't
-# Annotated-aware, so we can't use those. Python 3.9's versions don't support
-# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do.
-if sys.version_info[:2] >= (3, 10):
- get_origin = typing.get_origin
- get_args = typing.get_args
-# 3.7-3.9
-else:
- try:
- # 3.9+
- from typing import _BaseGenericAlias
- except ImportError:
- _BaseGenericAlias = typing._GenericAlias
- try:
- # 3.9+
- from typing import GenericAlias as _typing_GenericAlias
- except ImportError:
- _typing_GenericAlias = typing._GenericAlias
-
- def get_origin(tp):
- """Get the unsubscripted version of a type.
-
- This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar
- and Annotated. Return None for unsupported types. Examples::
-
- get_origin(Literal[42]) is Literal
- get_origin(int) is None
- get_origin(ClassVar[int]) is ClassVar
- get_origin(Generic) is Generic
- get_origin(Generic[T]) is Generic
- get_origin(Union[T, int]) is Union
- get_origin(List[Tuple[T, T]][int]) == list
- get_origin(P.args) is P
- """
- if isinstance(tp, _AnnotatedAlias):
- return Annotated
- if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias, _BaseGenericAlias,
- ParamSpecArgs, ParamSpecKwargs)):
- return tp.__origin__
- if tp is typing.Generic:
- return typing.Generic
- return None
-
- def get_args(tp):
- """Get type arguments with all substitutions performed.
-
- For unions, basic simplifications used by Union constructor are performed.
- Examples::
- get_args(Dict[str, int]) == (str, int)
- get_args(int) == ()
- get_args(Union[int, Union[T, int], str][int]) == (int, str)
- get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int])
- get_args(Callable[[], T][int]) == ([], int)
- """
- if isinstance(tp, _AnnotatedAlias):
- return (tp.__origin__,) + tp.__metadata__
- if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias)):
- if getattr(tp, "_special", False):
- return ()
- res = tp.__args__
- if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis:
- res = (list(res[:-1]), res[-1])
- return res
- return ()
-
-
-# 3.10+
-if hasattr(typing, 'TypeAlias'):
- TypeAlias = typing.TypeAlias
-# 3.9
-elif sys.version_info[:2] >= (3, 9):
- class _TypeAliasForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- @_TypeAliasForm
- def TypeAlias(self, parameters):
- """Special marker indicating that an assignment should
- be recognized as a proper type alias definition by type
- checkers.
-
- For example::
-
- Predicate: TypeAlias = Callable[..., bool]
-
- It's invalid when used anywhere except as in the example above.
- """
- raise TypeError(f"{self} is not subscriptable")
-# 3.7-3.8
-else:
- class _TypeAliasForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- TypeAlias = _TypeAliasForm('TypeAlias',
- doc="""Special marker indicating that an assignment should
- be recognized as a proper type alias definition by type
- checkers.
-
- For example::
-
- Predicate: TypeAlias = Callable[..., bool]
-
- It's invalid when used anywhere except as in the example
- above.""")
-
-
-class _DefaultMixin:
- """Mixin for TypeVarLike defaults."""
-
- __slots__ = ()
-
- def __init__(self, default):
- if isinstance(default, (tuple, list)):
- self.__default__ = tuple((typing._type_check(d, "Default must be a type")
- for d in default))
- elif default != _marker:
- self.__default__ = typing._type_check(default, "Default must be a type")
- else:
- self.__default__ = None
-
-
-# Add default and infer_variance parameters from PEP 696 and 695
-class TypeVar(typing.TypeVar, _DefaultMixin, _root=True):
- """Type variable."""
-
- __module__ = 'typing'
-
- def __init__(self, name, *constraints, bound=None,
- covariant=False, contravariant=False,
- default=_marker, infer_variance=False):
- super().__init__(name, *constraints, bound=bound, covariant=covariant,
- contravariant=contravariant)
- _DefaultMixin.__init__(self, default)
- self.__infer_variance__ = infer_variance
-
- # for pickling:
- try:
- def_mod = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- def_mod = None
- if def_mod != 'typing_extensions':
- self.__module__ = def_mod
-
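-# Illustrative sketch (hypothetical names T, U and helper) of the extra PEP 696
-# and PEP 695 parameters accepted by the TypeVar subclass above; both
-# attributes exist purely for runtime introspection.
-def _example_typevar_defaults():
-    T = TypeVar("T", default=int)          # __default__ backport (PEP 696)
-    U = TypeVar("U", infer_variance=True)  # __infer_variance__ backport (PEP 695)
-    return T.__default__ is int and U.__infer_variance__ is True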
-
-# Python 3.10+ has PEP 612
-if hasattr(typing, 'ParamSpecArgs'):
- ParamSpecArgs = typing.ParamSpecArgs
- ParamSpecKwargs = typing.ParamSpecKwargs
-# 3.7-3.9
-else:
- class _Immutable:
- """Mixin to indicate that object should not be copied."""
- __slots__ = ()
-
- def __copy__(self):
- return self
-
- def __deepcopy__(self, memo):
- return self
-
- class ParamSpecArgs(_Immutable):
- """The args for a ParamSpec object.
-
- Given a ParamSpec object P, P.args is an instance of ParamSpecArgs.
-
- ParamSpecArgs objects have a reference back to their ParamSpec:
-
- P.args.__origin__ is P
-
- This type is meant for runtime introspection and has no special meaning to
- static type checkers.
- """
- def __init__(self, origin):
- self.__origin__ = origin
-
- def __repr__(self):
- return f"{self.__origin__.__name__}.args"
-
- def __eq__(self, other):
- if not isinstance(other, ParamSpecArgs):
- return NotImplemented
- return self.__origin__ == other.__origin__
-
- class ParamSpecKwargs(_Immutable):
- """The kwargs for a ParamSpec object.
-
- Given a ParamSpec object P, P.kwargs is an instance of ParamSpecKwargs.
-
- ParamSpecKwargs objects have a reference back to their ParamSpec:
-
- P.kwargs.__origin__ is P
-
- This type is meant for runtime introspection and has no special meaning to
- static type checkers.
- """
- def __init__(self, origin):
- self.__origin__ = origin
-
- def __repr__(self):
- return f"{self.__origin__.__name__}.kwargs"
-
- def __eq__(self, other):
- if not isinstance(other, ParamSpecKwargs):
- return NotImplemented
- return self.__origin__ == other.__origin__
-
-# 3.10+
-if hasattr(typing, 'ParamSpec'):
-
- # Add default Parameter - PEP 696
- class ParamSpec(typing.ParamSpec, _DefaultMixin, _root=True):
- """Parameter specification variable."""
-
- __module__ = 'typing'
-
- def __init__(self, name, *, bound=None, covariant=False, contravariant=False,
- default=_marker):
- super().__init__(name, bound=bound, covariant=covariant,
- contravariant=contravariant)
- _DefaultMixin.__init__(self, default)
-
- # for pickling:
- try:
- def_mod = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- def_mod = None
- if def_mod != 'typing_extensions':
- self.__module__ = def_mod
-
-# 3.7-3.9
-else:
-
- # Inherits from list as a workaround for Callable checks in Python < 3.9.2.
- class ParamSpec(list, _DefaultMixin):
- """Parameter specification variable.
-
- Usage::
-
- P = ParamSpec('P')
-
- Parameter specification variables exist primarily for the benefit of static
- type checkers. They are used to forward the parameter types of one
- callable to another callable, a pattern commonly found in higher order
- functions and decorators. They are only valid when used in ``Concatenate``,
- or as the first argument to ``Callable``. In Python 3.10 and higher,
- they are also supported in user-defined Generics at runtime.
- See class Generic for more information on generic types. An
- example for annotating a decorator::
-
- T = TypeVar('T')
- P = ParamSpec('P')
-
- def add_logging(f: Callable[P, T]) -> Callable[P, T]:
- '''A type-safe decorator to add logging to a function.'''
- def inner(*args: P.args, **kwargs: P.kwargs) -> T:
- logging.info(f'{f.__name__} was called')
- return f(*args, **kwargs)
- return inner
-
- @add_logging
- def add_two(x: float, y: float) -> float:
- '''Add two numbers together.'''
- return x + y
-
- Parameter specification variables defined with covariant=True or
- contravariant=True can be used to declare covariant or contravariant
- generic types. These keyword arguments are valid, but their actual semantics
- are yet to be decided. See PEP 612 for details.
-
- Parameter specification variables can be introspected. e.g.:
-
- P.__name__ == 'P'
- P.__bound__ == None
- P.__covariant__ == False
- P.__contravariant__ == False
-
- Note that only parameter specification variables defined in global scope can
- be pickled.
- """
-
- # Trick Generic __parameters__.
- __class__ = typing.TypeVar
-
- @property
- def args(self):
- return ParamSpecArgs(self)
-
- @property
- def kwargs(self):
- return ParamSpecKwargs(self)
-
- def __init__(self, name, *, bound=None, covariant=False, contravariant=False,
- default=_marker):
- super().__init__([self])
- self.__name__ = name
- self.__covariant__ = bool(covariant)
- self.__contravariant__ = bool(contravariant)
- if bound:
- self.__bound__ = typing._type_check(bound, 'Bound must be a type.')
- else:
- self.__bound__ = None
- _DefaultMixin.__init__(self, default)
-
- # for pickling:
- try:
- def_mod = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- def_mod = None
- if def_mod != 'typing_extensions':
- self.__module__ = def_mod
-
- def __repr__(self):
- if self.__covariant__:
- prefix = '+'
- elif self.__contravariant__:
- prefix = '-'
- else:
- prefix = '~'
- return prefix + self.__name__
-
- def __hash__(self):
- return object.__hash__(self)
-
- def __eq__(self, other):
- return self is other
-
- def __reduce__(self):
- return self.__name__
-
- # Hack to get typing._type_check to pass.
- def __call__(self, *args, **kwargs):
- pass
-
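-# Quick runtime sketch (hypothetical helper name) of the PEP 612 introspection
-# provided by both branches above: P.args and P.kwargs point back at P.
-def _example_paramspec_introspection():
-    P = ParamSpec("P")
-    return P.args.__origin__ is P and P.kwargs.__origin__ is P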
-
-# 3.7-3.9
-if not hasattr(typing, 'Concatenate'):
- # Inherits from list as a workaround for Callable checks in Python < 3.9.2.
- class _ConcatenateGenericAlias(list):
-
- # Trick Generic into looking into this for __parameters__.
- __class__ = typing._GenericAlias
-
- # Flag in 3.8.
- _special = False
-
- def __init__(self, origin, args):
- super().__init__(args)
- self.__origin__ = origin
- self.__args__ = args
-
- def __repr__(self):
- _type_repr = typing._type_repr
- return (f'{_type_repr(self.__origin__)}'
- f'[{", ".join(_type_repr(arg) for arg in self.__args__)}]')
-
- def __hash__(self):
- return hash((self.__origin__, self.__args__))
-
- # Hack to get typing._type_check to pass in Generic.
- def __call__(self, *args, **kwargs):
- pass
-
- @property
- def __parameters__(self):
- return tuple(
- tp for tp in self.__args__ if isinstance(tp, (typing.TypeVar, ParamSpec))
- )
-
-
-# 3.7-3.9
-@typing._tp_cache
-def _concatenate_getitem(self, parameters):
- if parameters == ():
- raise TypeError("Cannot take a Concatenate of no types.")
- if not isinstance(parameters, tuple):
- parameters = (parameters,)
- if not isinstance(parameters[-1], ParamSpec):
- raise TypeError("The last parameter to Concatenate should be a "
- "ParamSpec variable.")
- msg = "Concatenate[arg, ...]: each arg must be a type."
- parameters = tuple(typing._type_check(p, msg) for p in parameters)
- return _ConcatenateGenericAlias(self, parameters)
-
-
-# 3.10+
-if hasattr(typing, 'Concatenate'):
- Concatenate = typing.Concatenate
- _ConcatenateGenericAlias = typing._ConcatenateGenericAlias # noqa
-# 3.9
-elif sys.version_info[:2] >= (3, 9):
- @_TypeAliasForm
- def Concatenate(self, parameters):
- """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a
- higher order function which adds, removes or transforms parameters of a
- callable.
-
- For example::
-
- Callable[Concatenate[int, P], int]
-
- See PEP 612 for detailed information.
- """
- return _concatenate_getitem(self, parameters)
-# 3.7-8
-else:
- class _ConcatenateForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- return _concatenate_getitem(self, parameters)
-
- Concatenate = _ConcatenateForm(
- 'Concatenate',
- doc="""Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a
- higher order function which adds, removes or transforms parameters of a
- callable.
-
- For example::
-
- Callable[Concatenate[int, P], int]
-
- See PEP 612 for detailed information.
- """)
-
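-# Minimal sketch (illustrative names only) of spelling a callable type with the
-# Concatenate form exported above: a callable whose first parameter is an int,
-# followed by whatever parameters P captures.
-def _example_concatenate():
-    P = ParamSpec("P")
-    IntPrefixed = typing.Callable[Concatenate[int, P], int]
-    return IntPrefixed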
-# 3.10+
-if hasattr(typing, 'TypeGuard'):
- TypeGuard = typing.TypeGuard
-# 3.9
-elif sys.version_info[:2] >= (3, 9):
- class _TypeGuardForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- @_TypeGuardForm
- def TypeGuard(self, parameters):
- """Special typing form used to annotate the return type of a user-defined
- type guard function. ``TypeGuard`` only accepts a single type argument.
- At runtime, functions marked this way should return a boolean.
-
- ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static
- type checkers to determine a more precise type of an expression within a
- program's code flow. Usually type narrowing is done by analyzing
- conditional code flow and applying the narrowing to a block of code. The
- conditional expression here is sometimes referred to as a "type guard".
-
- Sometimes it would be convenient to use a user-defined boolean function
- as a type guard. Such a function should use ``TypeGuard[...]`` as its
- return type to alert static type checkers to this intention.
-
- Using ``-> TypeGuard`` tells the static type checker that for a given
- function:
-
- 1. The return value is a boolean.
- 2. If the return value is ``True``, the type of its argument
- is the type inside ``TypeGuard``.
-
- For example::
-
- def is_str(val: Union[str, float]):
- # "isinstance" type guard
- if isinstance(val, str):
- # Type of ``val`` is narrowed to ``str``
- ...
- else:
- # Else, type of ``val`` is narrowed to ``float``.
- ...
-
- Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower
- form of ``TypeA`` (it can even be a wider form) and this may lead to
- type-unsafe results. The main reason is to allow for things like
- narrowing ``List[object]`` to ``List[str]`` even though the latter is not
- a subtype of the former, since ``List`` is invariant. The responsibility of
- writing type-safe type guards is left to the user.
-
- ``TypeGuard`` also works with type variables. For more information, see
- PEP 647 (User-Defined Type Guards).
- """
- item = typing._type_check(parameters, f'{self} accepts only a single type.')
- return typing._GenericAlias(self, (item,))
-# 3.7-3.8
-else:
- class _TypeGuardForm(typing._SpecialForm, _root=True):
-
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- item = typing._type_check(parameters,
- f'{self._name} accepts only a single type')
- return typing._GenericAlias(self, (item,))
-
- TypeGuard = _TypeGuardForm(
- 'TypeGuard',
- doc="""Special typing form used to annotate the return type of a user-defined
- type guard function. ``TypeGuard`` only accepts a single type argument.
- At runtime, functions marked this way should return a boolean.
-
- ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static
- type checkers to determine a more precise type of an expression within a
- program's code flow. Usually type narrowing is done by analyzing
- conditional code flow and applying the narrowing to a block of code. The
- conditional expression here is sometimes referred to as a "type guard".
-
- Sometimes it would be convenient to use a user-defined boolean function
- as a type guard. Such a function should use ``TypeGuard[...]`` as its
- return type to alert static type checkers to this intention.
-
- Using ``-> TypeGuard`` tells the static type checker that for a given
- function:
-
- 1. The return value is a boolean.
- 2. If the return value is ``True``, the type of its argument
- is the type inside ``TypeGuard``.
-
- For example::
-
- def is_str(val: Union[str, float]):
- # "isinstance" type guard
- if isinstance(val, str):
- # Type of ``val`` is narrowed to ``str``
- ...
- else:
- # Else, type of ``val`` is narrowed to ``float``.
- ...
-
- Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower
- form of ``TypeA`` (it can even be a wider form) and this may lead to
- type-unsafe results. The main reason is to allow for things like
- narrowing ``List[object]`` to ``List[str]`` even though the latter is not
- a subtype of the former, since ``List`` is invariant. The responsibility of
- writing type-safe type guards is left to the user.
-
- ``TypeGuard`` also works with type variables. For more information, see
- PEP 647 (User-Defined Type Guards).
- """)
-
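-# A short sketch of a user-defined type guard written against the TypeGuard
-# form exported above; is_str_list is a hypothetical example, not part of the
-# public API of this module.
-def _example_is_str_list(val: typing.List[object]) -> TypeGuard[typing.List[str]]:
-    """Narrow ``List[object]`` to ``List[str]`` for static type checkers."""
-    return all(isinstance(x, str) for x in val)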
-
-# Vendored from cpython typing._SpecialFrom
-class _SpecialForm(typing._Final, _root=True):
- __slots__ = ('_name', '__doc__', '_getitem')
-
- def __init__(self, getitem):
- self._getitem = getitem
- self._name = getitem.__name__
- self.__doc__ = getitem.__doc__
-
- def __getattr__(self, item):
- if item in {'__name__', '__qualname__'}:
- return self._name
-
- raise AttributeError(item)
-
- def __mro_entries__(self, bases):
- raise TypeError(f"Cannot subclass {self!r}")
-
- def __repr__(self):
- return f'typing_extensions.{self._name}'
-
- def __reduce__(self):
- return self._name
-
- def __call__(self, *args, **kwds):
- raise TypeError(f"Cannot instantiate {self!r}")
-
- def __or__(self, other):
- return typing.Union[self, other]
-
- def __ror__(self, other):
- return typing.Union[other, self]
-
- def __instancecheck__(self, obj):
- raise TypeError(f"{self} cannot be used with isinstance()")
-
- def __subclasscheck__(self, cls):
- raise TypeError(f"{self} cannot be used with issubclass()")
-
- @typing._tp_cache
- def __getitem__(self, parameters):
- return self._getitem(self, parameters)
-
-
-if hasattr(typing, "LiteralString"):
- LiteralString = typing.LiteralString
-else:
- @_SpecialForm
- def LiteralString(self, params):
- """Represents an arbitrary literal string.
-
- Example::
-
- from pip._vendor.typing_extensions import LiteralString
-
- def query(sql: LiteralString) -> ...:
- ...
-
- query("SELECT * FROM table") # ok
- query(f"SELECT * FROM {input()}") # not ok
-
- See PEP 675 for details.
-
- """
- raise TypeError(f"{self} is not subscriptable")
-
-
-if hasattr(typing, "Self"):
- Self = typing.Self
-else:
- @_SpecialForm
- def Self(self, params):
- """Used to spell the type of "self" in classes.
-
- Example::
-
- from typing import Self
-
- class ReturnsSelf:
- def parse(self, data: bytes) -> Self:
- ...
- return self
-
- """
-
- raise TypeError(f"{self} is not subscriptable")
-
-
-if hasattr(typing, "Never"):
- Never = typing.Never
-else:
- @_SpecialForm
- def Never(self, params):
- """The bottom type, a type that has no members.
-
- This can be used to define a function that should never be
- called, or a function that never returns::
-
- from pip._vendor.typing_extensions import Never
-
- def never_call_me(arg: Never) -> None:
- pass
-
- def int_or_str(arg: int | str) -> None:
- never_call_me(arg) # type checker error
- match arg:
- case int():
- print("It's an int")
- case str():
- print("It's a str")
- case _:
- never_call_me(arg) # ok, arg is of type Never
-
- """
-
- raise TypeError(f"{self} is not subscriptable")
-
-
-if hasattr(typing, 'Required'):
- Required = typing.Required
- NotRequired = typing.NotRequired
-elif sys.version_info[:2] >= (3, 9):
- class _ExtensionsSpecialForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- @_ExtensionsSpecialForm
- def Required(self, parameters):
- """A special typing construct to mark a key of a total=False TypedDict
- as required. For example:
-
- class Movie(TypedDict, total=False):
- title: Required[str]
- year: int
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
-
- There is no runtime checking that a required key is actually provided
- when instantiating a related TypedDict.
- """
- item = typing._type_check(parameters, f'{self._name} accepts only a single type.')
- return typing._GenericAlias(self, (item,))
-
- @_ExtensionsSpecialForm
- def NotRequired(self, parameters):
- """A special typing construct to mark a key of a TypedDict as
- potentially missing. For example:
-
- class Movie(TypedDict):
- title: str
- year: NotRequired[int]
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
- """
- item = typing._type_check(parameters, f'{self._name} accepts only a single type.')
- return typing._GenericAlias(self, (item,))
-
-else:
- class _RequiredForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- item = typing._type_check(parameters,
- f'{self._name} accepts only a single type.')
- return typing._GenericAlias(self, (item,))
-
- Required = _RequiredForm(
- 'Required',
- doc="""A special typing construct to mark a key of a total=False TypedDict
- as required. For example:
-
- class Movie(TypedDict, total=False):
- title: Required[str]
- year: int
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
-
- There is no runtime checking that a required key is actually provided
- when instantiating a related TypedDict.
- """)
- NotRequired = _RequiredForm(
- 'NotRequired',
- doc="""A special typing construct to mark a key of a TypedDict as
- potentially missing. For example:
-
- class Movie(TypedDict):
- title: str
- year: NotRequired[int]
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
- """)
-
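-# Sketch (hypothetical Movie class) combining Required/NotRequired from above
-# with the TypedDict exported earlier in this module: the class records which
-# keys end up required vs. optional.
-def _example_required_keys():
-    class Movie(TypedDict, total=False):
-        title: Required[str]
-        year: NotRequired[int]
-
-    return (Movie.__required_keys__ == frozenset({'title'})
-            and Movie.__optional_keys__ == frozenset({'year'}))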
-
-if hasattr(typing, "Unpack"): # 3.11+
- Unpack = typing.Unpack
-elif sys.version_info[:2] >= (3, 9):
- class _UnpackSpecialForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- class _UnpackAlias(typing._GenericAlias, _root=True):
- __class__ = typing.TypeVar
-
- @_UnpackSpecialForm
- def Unpack(self, parameters):
- """A special typing construct to unpack a variadic type. For example:
-
- Shape = TypeVarTuple('Shape')
- Batch = NewType('Batch', int)
-
- def add_batch_axis(
- x: Array[Unpack[Shape]]
- ) -> Array[Batch, Unpack[Shape]]: ...
-
- """
- item = typing._type_check(parameters, f'{self._name} accepts only a single type.')
- return _UnpackAlias(self, (item,))
-
- def _is_unpack(obj):
- return isinstance(obj, _UnpackAlias)
-
-else:
- class _UnpackAlias(typing._GenericAlias, _root=True):
- __class__ = typing.TypeVar
-
- class _UnpackForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- item = typing._type_check(parameters,
- f'{self._name} accepts only a single type.')
- return _UnpackAlias(self, (item,))
-
- Unpack = _UnpackForm(
- 'Unpack',
- doc="""A special typing construct to unpack a variadic type. For example:
-
- Shape = TypeVarTuple('Shape')
- Batch = NewType('Batch', int)
-
- def add_batch_axis(
- x: Array[Unpack[Shape]]
- ) -> Array[Batch, Unpack[Shape]]: ...
-
- """)
-
- def _is_unpack(obj):
- return isinstance(obj, _UnpackAlias)
-
-
-if hasattr(typing, "TypeVarTuple"): # 3.11+
-
- # Add default Parameter - PEP 696
- class TypeVarTuple(typing.TypeVarTuple, _DefaultMixin, _root=True):
- """Type variable tuple."""
-
- def __init__(self, name, *, default=_marker):
- super().__init__(name)
- _DefaultMixin.__init__(self, default)
-
- # for pickling:
- try:
- def_mod = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- def_mod = None
- if def_mod != 'typing_extensions':
- self.__module__ = def_mod
-
-else:
- class TypeVarTuple(_DefaultMixin):
- """Type variable tuple.
-
- Usage::
-
- Ts = TypeVarTuple('Ts')
-
- In the same way that a normal type variable is a stand-in for a single
- type such as ``int``, a type variable *tuple* is a stand-in for a *tuple*
- type such as ``Tuple[int, str]``.
-
- Type variable tuples can be used in ``Generic`` declarations.
- Consider the following example::
-
- class Array(Generic[*Ts]): ...
-
- The ``Ts`` type variable tuple here behaves like ``tuple[T1, T2]``,
- where ``T1`` and ``T2`` are type variables. To use these type variables
- as type parameters of ``Array``, we must *unpack* the type variable tuple using
- the star operator: ``*Ts``. The signature of ``Array`` then behaves
- as if we had simply written ``class Array(Generic[T1, T2]): ...``.
- In contrast to ``Generic[T1, T2]``, however, ``Generic[*Ts]`` allows
- us to parameterise the class with an *arbitrary* number of type parameters.
-
- Type variable tuples can be used anywhere a normal ``TypeVar`` can.
- This includes class definitions, as shown above, as well as function
- signatures and variable annotations::
-
- class Array(Generic[*Ts]):
-
- def __init__(self, shape: Tuple[*Ts]):
- self._shape: Tuple[*Ts] = shape
-
- def get_shape(self) -> Tuple[*Ts]:
- return self._shape
-
- shape = (Height(480), Width(640))
- x: Array[Height, Width] = Array(shape)
- y = abs(x) # Inferred type is Array[Height, Width]
- z = x + x # ... is Array[Height, Width]
- x.get_shape() # ... is tuple[Height, Width]
-
- """
-
- # Trick Generic __parameters__.
- __class__ = typing.TypeVar
-
- def __iter__(self):
- yield self.__unpacked__
-
- def __init__(self, name, *, default=_marker):
- self.__name__ = name
- _DefaultMixin.__init__(self, default)
-
- # for pickling:
- try:
- def_mod = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- def_mod = None
- if def_mod != 'typing_extensions':
- self.__module__ = def_mod
-
- self.__unpacked__ = Unpack[self]
-
- def __repr__(self):
- return self.__name__
-
- def __hash__(self):
- return object.__hash__(self)
-
- def __eq__(self, other):
- return self is other
-
- def __reduce__(self):
- return self.__name__
-
- def __init_subclass__(self, *args, **kwds):
- if '_root' not in kwds:
- raise TypeError("Cannot subclass special typing classes")
-
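-# Sketch (hypothetical Array class) of the TypeVarTuple exported above on
-# interpreters without native support, where the ``*Ts`` star syntax from the
-# docstring is unavailable at runtime and Unpack[Ts] is used instead.
-def _example_typevartuple():
-    Ts = TypeVarTuple("Ts")
-
-    class Array(typing.Generic[Unpack[Ts]]):
-        pass
-
-    return Array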
-
-if hasattr(typing, "reveal_type"):
- reveal_type = typing.reveal_type
-else:
- def reveal_type(__obj: T) -> T:
- """Reveal the inferred type of a variable.
-
- When a static type checker encounters a call to ``reveal_type()``,
- it will emit the inferred type of the argument::
-
- x: int = 1
- reveal_type(x)
-
- Running a static type checker (e.g., ``mypy``) on this example
- will produce output similar to 'Revealed type is "builtins.int"'.
-
- At runtime, the function prints the runtime type of the
- argument and returns it unchanged.
-
- """
- print(f"Runtime type is {type(__obj).__name__!r}", file=sys.stderr)
- return __obj
-
-
-if hasattr(typing, "assert_never"):
- assert_never = typing.assert_never
-else:
- def assert_never(__arg: Never) -> Never:
- """Assert to the type checker that a line of code is unreachable.
-
- Example::
-
- def int_or_str(arg: int | str) -> None:
- match arg:
- case int():
- print("It's an int")
- case str():
- print("It's a str")
- case _:
- assert_never(arg)
-
- If a type checker finds that a call to assert_never() is
- reachable, it will emit an error.
-
- At runtime, this throws an exception when called.
-
- """
- raise AssertionError("Expected code to be unreachable")
-
-
-if sys.version_info >= (3, 12):
- # dataclass_transform exists in 3.11 but lacks the frozen_default parameter
- dataclass_transform = typing.dataclass_transform
-else:
- def dataclass_transform(
- *,
- eq_default: bool = True,
- order_default: bool = False,
- kw_only_default: bool = False,
- frozen_default: bool = False,
- field_specifiers: typing.Tuple[
- typing.Union[typing.Type[typing.Any], typing.Callable[..., typing.Any]],
- ...
- ] = (),
- **kwargs: typing.Any,
- ) -> typing.Callable[[T], T]:
- """Decorator that marks a function, class, or metaclass as providing
- dataclass-like behavior.
-
- Example:
-
- from pip._vendor.typing_extensions import dataclass_transform
-
- _T = TypeVar("_T")
-
- # Used on a decorator function
- @dataclass_transform()
- def create_model(cls: type[_T]) -> type[_T]:
- ...
- return cls
-
- @create_model
- class CustomerModel:
- id: int
- name: str
-
- # Used on a base class
- @dataclass_transform()
- class ModelBase: ...
-
- class CustomerModel(ModelBase):
- id: int
- name: str
-
- # Used on a metaclass
- @dataclass_transform()
- class ModelMeta(type): ...
-
- class ModelBase(metaclass=ModelMeta): ...
-
- class CustomerModel(ModelBase):
- id: int
- name: str
-
- Each of the ``CustomerModel`` classes defined in this example will now
- behave similarly to a dataclass created with the ``@dataclasses.dataclass``
- decorator. For example, the type checker will synthesize an ``__init__``
- method.
-
- The arguments to this decorator can be used to customize this behavior:
- - ``eq_default`` indicates whether the ``eq`` parameter is assumed to be
- True or False if it is omitted by the caller.
- - ``order_default`` indicates whether the ``order`` parameter is
- assumed to be True or False if it is omitted by the caller.
- - ``kw_only_default`` indicates whether the ``kw_only`` parameter is
- assumed to be True or False if it is omitted by the caller.
- - ``frozen_default`` indicates whether the ``frozen`` parameter is
- assumed to be True or False if it is omitted by the caller.
- - ``field_specifiers`` specifies a static list of supported classes
- or functions that describe fields, similar to ``dataclasses.field()``.
-
- At runtime, this decorator records its arguments in the
- ``__dataclass_transform__`` attribute on the decorated object.
-
- See PEP 681 for details.
-
- """
- def decorator(cls_or_fn):
- cls_or_fn.__dataclass_transform__ = {
- "eq_default": eq_default,
- "order_default": order_default,
- "kw_only_default": kw_only_default,
- "frozen_default": frozen_default,
- "field_specifiers": field_specifiers,
- "kwargs": kwargs,
- }
- return cls_or_fn
- return decorator
-
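-# Runtime sketch (hypothetical create_model decorator) of what
-# dataclass_transform() above records: its arguments are stored on the
-# decorated object for later introspection, per PEP 681.
-def _example_dataclass_transform():
-    @dataclass_transform(kw_only_default=True)
-    def create_model(cls):
-        return cls
-
-    return create_model.__dataclass_transform__["kw_only_default"]  # True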
-
-if hasattr(typing, "override"):
- override = typing.override
-else:
- _F = typing.TypeVar("_F", bound=typing.Callable[..., typing.Any])
-
- def override(__arg: _F) -> _F:
- """Indicate that a method is intended to override a method in a base class.
-
- Usage:
-
- class Base:
- def method(self) -> None: ...
-
- class Child(Base):
- @override
- def method(self) -> None:
- super().method()
-
- When this decorator is applied to a method, the type checker will
- validate that it overrides a method with the same name on a base class.
- This helps prevent bugs that may occur when a base class is changed
- without an equivalent change to a child class.
-
- There is no runtime checking of these properties. The decorator
- sets the ``__override__`` attribute to ``True`` on the decorated object
- to allow runtime introspection.
-
- See PEP 698 for details.
-
- """
- try:
- __arg.__override__ = True
- except (AttributeError, TypeError):
- # Skip the attribute silently if it is not writable.
- # AttributeError happens if the object has __slots__ or a
- # read-only property, TypeError if it's a builtin class.
- pass
- return __arg
-
-
-if hasattr(typing, "deprecated"):
- deprecated = typing.deprecated
-else:
- _T = typing.TypeVar("_T")
-
- def deprecated(
- __msg: str,
- *,
- category: typing.Optional[typing.Type[Warning]] = DeprecationWarning,
- stacklevel: int = 1,
- ) -> typing.Callable[[_T], _T]:
- """Indicate that a class, function or overload is deprecated.
-
- Usage:
-
- @deprecated("Use B instead")
- class A:
- pass
-
- @deprecated("Use g instead")
- def f():
- pass
-
- @overload
- @deprecated("int support is deprecated")
- def g(x: int) -> int: ...
- @overload
- def g(x: str) -> int: ...
-
- When this decorator is applied to an object, the type checker
- will generate a diagnostic on usage of the deprecated object.
-
- No runtime warning is issued. The decorator sets the ``__deprecated__``
- attribute on the decorated object to the deprecation message
- passed to the decorator. If applied to an overload, the decorator
- must be after the ``@overload`` decorator for the attribute to
- exist on the overload as returned by ``get_overloads()``.
-
- See PEP 702 for details.
-
- """
- def decorator(__arg: _T) -> _T:
- if category is None:
- __arg.__deprecated__ = __msg
- return __arg
- elif isinstance(__arg, type):
- original_new = __arg.__new__
- has_init = __arg.__init__ is not object.__init__
-
- @functools.wraps(original_new)
- def __new__(cls, *args, **kwargs):
- warnings.warn(__msg, category=category, stacklevel=stacklevel + 1)
- # Mirrors a similar check in object.__new__.
- if not has_init and (args or kwargs):
- raise TypeError(f"{cls.__name__}() takes no arguments")
- if original_new is not object.__new__:
- return original_new(cls, *args, **kwargs)
- else:
- return original_new(cls)
-
- __arg.__new__ = staticmethod(__new__)
- __arg.__deprecated__ = __new__.__deprecated__ = __msg
- return __arg
- elif callable(__arg):
- @functools.wraps(__arg)
- def wrapper(*args, **kwargs):
- warnings.warn(__msg, category=category, stacklevel=stacklevel + 1)
- return __arg(*args, **kwargs)
-
- __arg.__deprecated__ = wrapper.__deprecated__ = __msg
- return wrapper
- else:
- raise TypeError(
- "@deprecated decorator with non-None category must be applied to "
- f"a class or callable, not {__arg!r}"
- )
-
- return decorator
-
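-# Sketch of the runtime side of deprecated() above: calling a decorated
-# function emits the configured warning category. old_api/new_api are
-# hypothetical names used only for illustration.
-def _example_deprecated():
-    import warnings as _warnings
-
-    @deprecated("Use new_api() instead")
-    def old_api():
-        return 42
-
-    with _warnings.catch_warnings(record=True) as caught:
-        _warnings.simplefilter("always")
-        value = old_api()
-    return value == 42 and caught[0].category is DeprecationWarning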
-
-# We have to do some monkey patching to deal with the dual nature of
-# Unpack/TypeVarTuple:
-# - We want Unpack to be a kind of TypeVar so it gets accepted in
-# Generic[Unpack[Ts]]
-# - We want it to *not* be treated as a TypeVar for the purposes of
-# counting generic parameters, so that when we subscript a generic,
-# the runtime doesn't try to substitute the Unpack with the subscripted type.
-if not hasattr(typing, "TypeVarTuple"):
- typing._collect_type_vars = _collect_type_vars
- typing._check_generic = _check_generic
-
-
-# Backport typing.NamedTuple as it exists in Python 3.11.
-# In 3.11, the ability to define generic `NamedTuple`s was supported.
-# This was explicitly disallowed in 3.9-3.10, and only half-worked in <=3.8.
-if sys.version_info >= (3, 11):
- NamedTuple = typing.NamedTuple
-else:
- def _caller():
- try:
- return sys._getframe(2).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError): # For platforms without _getframe()
- return None
-
- def _make_nmtuple(name, types, module, defaults=()):
- fields = [n for n, t in types]
- annotations = {n: typing._type_check(t, f"field {n} annotation must be a type")
- for n, t in types}
- nm_tpl = collections.namedtuple(name, fields,
- defaults=defaults, module=module)
- nm_tpl.__annotations__ = nm_tpl.__new__.__annotations__ = annotations
- # The `_field_types` attribute was removed in 3.9;
- # in earlier versions, it is the same as the `__annotations__` attribute
- if sys.version_info < (3, 9):
- nm_tpl._field_types = annotations
- return nm_tpl
-
- _prohibited_namedtuple_fields = typing._prohibited
- _special_namedtuple_fields = frozenset({'__module__', '__name__', '__annotations__'})
-
- class _NamedTupleMeta(type):
- def __new__(cls, typename, bases, ns):
- assert _NamedTuple in bases
- for base in bases:
- if base is not _NamedTuple and base is not typing.Generic:
- raise TypeError(
- 'can only inherit from a NamedTuple type and Generic')
- bases = tuple(tuple if base is _NamedTuple else base for base in bases)
- types = ns.get('__annotations__', {})
- default_names = []
- for field_name in types:
- if field_name in ns:
- default_names.append(field_name)
- elif default_names:
- raise TypeError(f"Non-default namedtuple field {field_name} "
- f"cannot follow default field"
- f"{'s' if len(default_names) > 1 else ''} "
- f"{', '.join(default_names)}")
- nm_tpl = _make_nmtuple(
- typename, types.items(),
- defaults=[ns[n] for n in default_names],
- module=ns['__module__']
- )
- nm_tpl.__bases__ = bases
- if typing.Generic in bases:
- class_getitem = typing.Generic.__class_getitem__.__func__
- nm_tpl.__class_getitem__ = classmethod(class_getitem)
- # update from user namespace without overriding special namedtuple attributes
- for key in ns:
- if key in _prohibited_namedtuple_fields:
- raise AttributeError("Cannot overwrite NamedTuple attribute " + key)
- elif key not in _special_namedtuple_fields and key not in nm_tpl._fields:
- setattr(nm_tpl, key, ns[key])
- if typing.Generic in bases:
- nm_tpl.__init_subclass__()
- return nm_tpl
-
- def NamedTuple(__typename, __fields=None, **kwargs):
- if __fields is None:
- __fields = kwargs.items()
- elif kwargs:
- raise TypeError("Either list of fields or keywords"
- " can be provided to NamedTuple, not both")
- return _make_nmtuple(__typename, __fields, module=_caller())
-
- NamedTuple.__doc__ = typing.NamedTuple.__doc__
- _NamedTuple = type.__new__(_NamedTupleMeta, 'NamedTuple', (), {})
-
- # On 3.8+, alter the signature so that it matches typing.NamedTuple.
- # The signature of typing.NamedTuple on >=3.8 is invalid syntax in Python 3.7,
- # so just leave the signature as it is on 3.7.
- if sys.version_info >= (3, 8):
- NamedTuple.__text_signature__ = '(typename, fields=None, /, **kwargs)'
-
- def _namedtuple_mro_entries(bases):
- assert NamedTuple in bases
- return (_NamedTuple,)
-
- NamedTuple.__mro_entries__ = _namedtuple_mro_entries
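-# Closing sketch (hypothetical Pair class) of the main feature this NamedTuple
-# backport adds over the standard library on older interpreters: generic
-# NamedTuple definitions, as permitted in Python 3.11.
-def _example_generic_namedtuple():
-    T = typing.TypeVar("T")
-
-    class Pair(NamedTuple, typing.Generic[T]):
-        first: T
-        second: T
-
-    return Pair(first=1, second=2)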
diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/tls_pool.h b/spaces/CVPR/LIVE/thrust/thrust/mr/tls_pool.h
deleted file mode 100644
index c732f022f74c29eb71a9cbe1335c9f0177becdc8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/mr/tls_pool.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file tls_pool.h
- * \brief A function wrapping a thread local instance of a \p unsynchronized_pool_resource.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-#if THRUST_CPP_DIALECT >= 2011
-
-#include <thrust/mr/pool.h>
-
-namespace thrust
-{
-namespace mr
-{
-
-/*! \addtogroup memory_management Memory Management
- * \addtogroup memory_resources Memory Resources
- * \ingroup memory_resources
- * \{
- */
-
-/*! Potentially constructs, if not yet created, and then returns the address of a thread-local \p unsynchronized_pool_resource,
- *
- * \tparam Upstream the template argument to the pool template
- * \param upstream the argument to the constructor, if invoked
- */
-template<typename Upstream>
-__host__
-thrust::mr::unsynchronized_pool_resource<Upstream> & tls_pool(Upstream * upstream = NULL)
-{
- static thread_local auto adaptor = [&]{
- assert(upstream);
- return thrust::mr::unsynchronized_pool_resource<Upstream>(upstream);
- }();
-
- return adaptor;
-}
-
-/*! \}
- */
-
-} // end mr
-} // end thrust
-
-#endif // THRUST_CPP_DIALECT >= 2011
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/cross_system.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/cross_system.h
deleted file mode 100644
index f89f3dba8d3c9c07e259e0aba3ed7aed6dfa1f54..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/cross_system.h
+++ /dev/null
@@ -1,344 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/cuda/detail/execution_policy.h>
-#include <thrust/system/cpp/detail/execution_policy.h>
-
-namespace thrust
-{
-namespace cuda_cub {
-
- template <class Sys1, class Sys2>
- struct cross_system : execution_policy<cross_system<Sys1, Sys2> >
- {
- typedef thrust::execution_policy<Sys1> policy1;
- typedef thrust::execution_policy<Sys2> policy2;
-
- policy1 &sys1;
- policy2 &sys2;
-
- inline __host__ __device__
- cross_system(policy1 &sys1, policy2 &sys2) : sys1(sys1), sys2(sys2) {}
-
- inline __host__ __device__
- cross_system<Sys2, Sys1> rotate() const
- {
- return cross_system<Sys2, Sys1>(sys2, sys1);
- }
- };
-
-#if THRUST_CPP_DIALECT >= 2011
- // Device to host.
- template <typename System1, typename System2>
- THRUST_CONSTEXPR __host__ __device__
- auto direction_of_copy(
- thrust::system::cuda::execution_policy<System1> const&
- , thrust::cpp::execution_policy<System2> const&
- )
- THRUST_DECLTYPE_RETURNS(
- thrust::detail::integral_constant<
- cudaMemcpyKind, cudaMemcpyDeviceToHost
- >{}
- )
-
- // Host to device.
- template <typename System1, typename System2>
- THRUST_CONSTEXPR __host__ __device__
- auto direction_of_copy(
- thrust::cpp::execution_policy<System1> const&
- , thrust::system::cuda::execution_policy<System2> const&
- )
- THRUST_DECLTYPE_RETURNS(
- thrust::detail::integral_constant<
- cudaMemcpyKind, cudaMemcpyHostToDevice
- >{}
- )
-
- // Device to device.
- template <typename System1, typename System2>
- THRUST_CONSTEXPR __host__ __device__
- auto direction_of_copy(
- thrust::system::cuda::execution_policy<System1> const&
- , thrust::system::cuda::execution_policy<System2> const&
- )
- THRUST_DECLTYPE_RETURNS(
- thrust::detail::integral_constant<
- cudaMemcpyKind, cudaMemcpyDeviceToDevice
- >{}
- )
-
- // Device to device.
- template <typename DerivedPolicy>
- THRUST_CONSTEXPR __host__ __device__
- auto direction_of_copy(execution_policy<DerivedPolicy> const &)
- THRUST_DECLTYPE_RETURNS(
- thrust::detail::integral_constant<
- cudaMemcpyKind, cudaMemcpyDeviceToDevice
- >{}
- )
-
- template <typename System1, typename System2>
- THRUST_CONSTEXPR __host__ __device__
- auto direction_of_copy(
- execution_policy<cross_system<System1, System2>> const &systems
- )
- THRUST_DECLTYPE_RETURNS(
- direction_of_copy(
- derived_cast(derived_cast(systems).sys1)
- , derived_cast(derived_cast(systems).sys2)
- )
- )
-
- template <typename ExecutionPolicy0, typename ExecutionPolicy1,
- typename Direction = decltype(direction_of_copy(std::declval<ExecutionPolicy0>(), std::declval<ExecutionPolicy1>()))>
- THRUST_CONSTEXPR __host__ __device__
- auto is_device_to_host_copy(
- ExecutionPolicy0 const& exec0
- , ExecutionPolicy1 const& exec1
- )
- noexcept ->
- thrust::detail::integral_constant<
- bool, cudaMemcpyDeviceToHost == Direction::value
- >
- {
- return {};
- }
-
- template <typename ExecutionPolicy, typename Direction = decltype(direction_of_copy(std::declval<ExecutionPolicy>()))>
- THRUST_CONSTEXPR __host__ __device__
- auto is_device_to_host_copy(ExecutionPolicy const& exec)
- noexcept ->
- thrust::detail::integral_constant<
- bool, cudaMemcpyDeviceToHost == Direction::value
- >
- {
- return {};
- }
-
- template <typename ExecutionPolicy0, typename ExecutionPolicy1,
- typename Direction = decltype(direction_of_copy(std::declval<ExecutionPolicy0>(), std::declval<ExecutionPolicy1>()))>
- THRUST_CONSTEXPR __host__ __device__
- auto is_host_to_device_copy(
- ExecutionPolicy0 const& exec0
- , ExecutionPolicy1 const& exec1
- )
- noexcept ->
- thrust::detail::integral_constant<
- bool, cudaMemcpyHostToDevice == Direction::value
- >
- {
- return {};
- }
-
- template <typename ExecutionPolicy, typename Direction = decltype(direction_of_copy(std::declval<ExecutionPolicy>()))>
- THRUST_CONSTEXPR __host__ __device__
- auto is_host_to_device_copy(ExecutionPolicy const& exec)
- noexcept ->
- thrust::detail::integral_constant<
- bool, cudaMemcpyHostToDevice == Direction::value
- >
- {
- return {};
- }
-
- template <typename ExecutionPolicy0, typename ExecutionPolicy1,
- typename Direction = decltype(direction_of_copy(std::declval<ExecutionPolicy0>(), std::declval<ExecutionPolicy1>()))>
- THRUST_CONSTEXPR __host__ __device__
- auto is_device_to_device_copy(
- ExecutionPolicy0 const& exec0
- , ExecutionPolicy1 const& exec1
- )
- noexcept ->
- thrust::detail::integral_constant<
- bool, cudaMemcpyDeviceToDevice == Direction::value
- >
- {
- return {};
- }
-
- template <typename ExecutionPolicy, typename Direction = decltype(direction_of_copy(std::declval<ExecutionPolicy>()))>
- THRUST_CONSTEXPR __host__ __device__
- auto is_device_to_device_copy(ExecutionPolicy const& exec)
- noexcept ->
- thrust::detail::integral_constant<
- bool, cudaMemcpyDeviceToDevice == Direction::value
- >
- {
- return {};
- }
-
- /////////////////////////////////////////////////////////////////////////////
-
- // Device to host.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_device_system(thrust::cuda::execution_policy<System1> &sys1,
- thrust::execution_policy<System2> &)
- THRUST_DECLTYPE_RETURNS(sys1)
-
- // Device to host.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_device_system(thrust::cuda::execution_policy<System1> const &sys1,
- thrust::execution_policy<System2> const &)
- THRUST_DECLTYPE_RETURNS(sys1)
-
- // Host to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_device_system(thrust::execution_policy<System1> &,
- thrust::cuda::execution_policy<System2> &sys2)
- THRUST_DECLTYPE_RETURNS(sys2)
-
- // Host to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_device_system(thrust::execution_policy<System1> const &,
- thrust::cuda::execution_policy<System2> const &sys2)
- THRUST_DECLTYPE_RETURNS(sys2)
-
- // Device to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_device_system(thrust::cuda::execution_policy<System1> &sys1,
- thrust::cuda::execution_policy<System2> &)
- THRUST_DECLTYPE_RETURNS(sys1)
-
- // Device to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_device_system(thrust::cuda::execution_policy<System1> const &sys1,
- thrust::cuda::execution_policy<System2> const &)
- THRUST_DECLTYPE_RETURNS(sys1)
-
- /////////////////////////////////////////////////////////////////////////////
-
- // Device to host.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_host_system(thrust::cuda::execution_policy<System1> &,
- thrust::execution_policy<System2> &sys2)
- THRUST_DECLTYPE_RETURNS(sys2)
-
- // Device to host.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_host_system(thrust::cuda::execution_policy<System1> const &,
- thrust::execution_policy<System2> const &sys2)
- THRUST_DECLTYPE_RETURNS(sys2)
-
- // Host to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_host_system(thrust::execution_policy<System1> &sys1,
- thrust::cuda::execution_policy<System2> &)
- THRUST_DECLTYPE_RETURNS(sys1)
-
- // Host to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_host_system(thrust::execution_policy<System1> const &sys1,
- thrust::cuda::execution_policy<System2> const &)
- THRUST_DECLTYPE_RETURNS(sys1)
-
- // Device to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_host_system(thrust::execution_policy<System1> &sys1,
- thrust::execution_policy<System2> &)
- THRUST_DECLTYPE_RETURNS(sys1)
-
- // Device to device.
- template <typename System1, typename System2>
- __host__ __device__
- auto
- select_host_system(thrust::execution_policy<System1> const &sys1,
- thrust::execution_policy<System2> const &)
- THRUST_DECLTYPE_RETURNS(sys1)
-#endif
-
- // Device to host.
- template <typename System1, typename System2>
- __host__ __device__
- cross_system<System1, System2>
- select_system(execution_policy<System1> const & sys1,
- thrust::cpp::execution_policy<System2> const &sys2)
- {
- thrust::execution_policy<System1> & non_const_sys1 = const_cast<execution_policy<System1> &>(sys1);
- thrust::cpp::execution_policy<System2> &non_const_sys2 = const_cast<thrust::cpp::execution_policy<System2> &>(sys2);
- return cross_system<System1, System2>(non_const_sys1, non_const_sys2);
- }
-
- // Host to device.
- template <typename System1, typename System2>
- __host__ __device__
- cross_system<System1, System2>
- select_system(thrust::cpp::execution_policy<System1> const &sys1,
- execution_policy<System2> const & sys2)
- {
- thrust::cpp::execution_policy<System1> &non_const_sys1 = const_cast<thrust::cpp::execution_policy<System1> &>(sys1);
- thrust::execution_policy<System2> & non_const_sys2 = const_cast<execution_policy<System2> &>(sys2);
- return cross_system<System1, System2>(non_const_sys1, non_const_sys2);
- }
-
-} // namespace cuda_cub
-} // end namespace thrust
-
diff --git a/spaces/CVPR/WALT/mmcv_custom/runner/__init__.py b/spaces/CVPR/WALT/mmcv_custom/runner/__init__.py
deleted file mode 100644
index c701cb016abe470611830dc960999970738352bb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmcv_custom/runner/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Open-MMLab. All rights reserved.
-from .checkpoint import save_checkpoint
-from .epoch_based_runner import EpochBasedRunnerAmp
-
-
-__all__ = [
- 'EpochBasedRunnerAmp', 'save_checkpoint'
-]
diff --git a/spaces/CVPR/WALT/mmdet/core/anchor/utils.py b/spaces/CVPR/WALT/mmdet/core/anchor/utils.py
deleted file mode 100644
index ab9b53f37f7be1f52fe63c5e53df64ac1303b9e0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/anchor/utils.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import torch
-
-
-def images_to_levels(target, num_levels):
- """Convert targets by image to targets by feature level.
-
- [target_img0, target_img1] -> [target_level0, target_level1, ...]
- """
- target = torch.stack(target, 0)
- level_targets = []
- start = 0
- for n in num_levels:
- end = start + n
- # level_targets.append(target[:, start:end].squeeze(0))
- level_targets.append(target[:, start:end])
- start = end
- return level_targets
-
-
-def anchor_inside_flags(flat_anchors,
- valid_flags,
- img_shape,
- allowed_border=0):
- """Check whether the anchors are inside the border.
-
- Args:
- flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4).
- valid_flags (torch.Tensor): An existing valid flags of anchors.
- img_shape (tuple(int)): Shape of current image.
- allowed_border (int, optional): The border to allow the valid anchor.
- Defaults to 0.
-
- Returns:
- torch.Tensor: Flags indicating whether the anchors are inside a \
- valid range.
- """
- img_h, img_w = img_shape[:2]
- if allowed_border >= 0:
- inside_flags = valid_flags & \
- (flat_anchors[:, 0] >= -allowed_border) & \
- (flat_anchors[:, 1] >= -allowed_border) & \
- (flat_anchors[:, 2] < img_w + allowed_border) & \
- (flat_anchors[:, 3] < img_h + allowed_border)
- else:
- inside_flags = valid_flags
- return inside_flags
-
-
-def calc_region(bbox, ratio, featmap_size=None):
- """Calculate a proportional bbox region.
-
-    The bbox center is fixed while the new h' and w' are h * ratio and w * ratio.
-
- Args:
- bbox (Tensor): Bboxes to calculate regions, shape (n, 4).
- ratio (float): Ratio of the output region.
- featmap_size (tuple): Feature map size used for clipping the boundary.
-
- Returns:
- tuple: x1, y1, x2, y2
- """
- x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long()
- y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long()
- x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long()
- y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long()
- if featmap_size is not None:
- x1 = x1.clamp(min=0, max=featmap_size[1])
- y1 = y1.clamp(min=0, max=featmap_size[0])
- x2 = x2.clamp(min=0, max=featmap_size[1])
- y2 = y2.clamp(min=0, max=featmap_size[0])
- return (x1, y1, x2, y2)
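
The three helpers above are small enough to exercise directly. Below is a minimal usage sketch with toy tensors; the `mmdet.core.anchor.utils` import path simply mirrors the deleted file's location, and the sizes and values are made up for illustration.

```python
# Toy-sized usage sketch; shapes and values are illustrative only.
import torch
from mmdet.core.anchor.utils import images_to_levels, anchor_inside_flags, calc_region

# Two images, 6 anchors each, grouped into two feature levels of 4 and 2 anchors.
per_image_targets = [torch.arange(6.0), torch.arange(6.0) + 10]
levels = images_to_levels(per_image_targets, num_levels=[4, 2])
print([t.shape for t in levels])  # [torch.Size([2, 4]), torch.Size([2, 2])]

# Keep only anchors that fall fully inside a 100x100 image (no extra border allowed).
anchors = torch.tensor([[5., 5., 20., 20.], [-8., 0., 30., 30.], [90., 90., 120., 110.]])
valid = torch.ones(3, dtype=torch.bool)
print(anchor_inside_flags(anchors, valid, img_shape=(100, 100), allowed_border=0))
# tensor([ True, False, False])

# Shrink a box around its centre with ratio 0.25, clipped to a 64x64 feature map.
print(calc_region(torch.tensor([10., 10., 50., 50.]), ratio=0.25, featmap_size=(64, 64)))
```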
diff --git a/spaces/CVPR/lama-example/saicinpainting/training/modules/multiscale.py b/spaces/CVPR/lama-example/saicinpainting/training/modules/multiscale.py
deleted file mode 100644
index 65f0a54925593e9da8106bfc6d65a4098ce001d7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/saicinpainting/training/modules/multiscale.py
+++ /dev/null
@@ -1,244 +0,0 @@
-from typing import List, Tuple, Union, Optional
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from saicinpainting.training.modules.base import get_conv_block_ctor, get_activation
-from saicinpainting.training.modules.pix2pixhd import ResnetBlock
-
-
-class ResNetHead(nn.Module):
- def __init__(self, input_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
- padding_type='reflect', conv_kind='default', activation=nn.ReLU(True)):
- assert (n_blocks >= 0)
- super(ResNetHead, self).__init__()
-
- conv_layer = get_conv_block_ctor(conv_kind)
-
- model = [nn.ReflectionPad2d(3),
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
- norm_layer(ngf),
- activation]
-
- ### downsample
- for i in range(n_downsampling):
- mult = 2 ** i
- model += [conv_layer(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1),
- norm_layer(ngf * mult * 2),
- activation]
-
- mult = 2 ** n_downsampling
-            print('至少一个线程任务Token溢出而失败', token_exceed_err)
- ### resnet blocks
- for i in range(n_blocks):
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
- conv_kind=conv_kind)]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, input):
- return self.model(input)
-
-
-class ResNetTail(nn.Module):
- def __init__(self, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
- padding_type='reflect', conv_kind='default', activation=nn.ReLU(True),
- up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0,
- add_in_proj=None):
- assert (n_blocks >= 0)
- super(ResNetTail, self).__init__()
-
- mult = 2 ** n_downsampling
-
- model = []
-
- if add_in_proj is not None:
- model.append(nn.Conv2d(add_in_proj, ngf * mult, kernel_size=1))
-
- ### resnet blocks
- for i in range(n_blocks):
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
- conv_kind=conv_kind)]
-
- ### upsample
- for i in range(n_downsampling):
- mult = 2 ** (n_downsampling - i)
- model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1,
- output_padding=1),
- up_norm_layer(int(ngf * mult / 2)),
- up_activation]
- self.model = nn.Sequential(*model)
-
- out_layers = []
- for _ in range(out_extra_layers_n):
- out_layers += [nn.Conv2d(ngf, ngf, kernel_size=1, padding=0),
- up_norm_layer(ngf),
- up_activation]
- out_layers += [nn.ReflectionPad2d(3),
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
-
- if add_out_act:
- out_layers.append(get_activation('tanh' if add_out_act is True else add_out_act))
-
- self.out_proj = nn.Sequential(*out_layers)
-
- def forward(self, input, return_last_act=False):
- features = self.model(input)
- out = self.out_proj(features)
- if return_last_act:
- return out, features
- else:
- return out
-
-
-class MultiscaleResNet(nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=2, n_blocks_head=2, n_blocks_tail=6, n_scales=3,
- norm_layer=nn.BatchNorm2d, padding_type='reflect', conv_kind='default', activation=nn.ReLU(True),
- up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0,
- out_cumulative=False, return_only_hr=False):
- super().__init__()
-
- self.heads = nn.ModuleList([ResNetHead(input_nc, ngf=ngf, n_downsampling=n_downsampling,
- n_blocks=n_blocks_head, norm_layer=norm_layer, padding_type=padding_type,
- conv_kind=conv_kind, activation=activation)
- for i in range(n_scales)])
- tail_in_feats = ngf * (2 ** n_downsampling) + ngf
- self.tails = nn.ModuleList([ResNetTail(output_nc,
- ngf=ngf, n_downsampling=n_downsampling,
- n_blocks=n_blocks_tail, norm_layer=norm_layer, padding_type=padding_type,
- conv_kind=conv_kind, activation=activation, up_norm_layer=up_norm_layer,
- up_activation=up_activation, add_out_act=add_out_act,
- out_extra_layers_n=out_extra_layers_n,
- add_in_proj=None if (i == n_scales - 1) else tail_in_feats)
- for i in range(n_scales)])
-
- self.out_cumulative = out_cumulative
- self.return_only_hr = return_only_hr
-
- @property
- def num_scales(self):
- return len(self.heads)
-
- def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \
- -> Union[torch.Tensor, List[torch.Tensor]]:
- """
- :param ms_inputs: List of inputs of different resolutions from HR to LR
- :param smallest_scales_num: int or None, number of smallest scales to take at input
- :return: Depending on return_only_hr:
- True: Only the most HR output
- False: List of outputs of different resolutions from HR to LR
- """
- if smallest_scales_num is None:
- assert len(self.heads) == len(ms_inputs), (len(self.heads), len(ms_inputs), smallest_scales_num)
- smallest_scales_num = len(self.heads)
- else:
- assert smallest_scales_num == len(ms_inputs) <= len(self.heads), (len(self.heads), len(ms_inputs), smallest_scales_num)
-
- cur_heads = self.heads[-smallest_scales_num:]
- ms_features = [cur_head(cur_inp) for cur_head, cur_inp in zip(cur_heads, ms_inputs)]
-
- all_outputs = []
- prev_tail_features = None
- for i in range(len(ms_features)):
- scale_i = -i - 1
-
- cur_tail_input = ms_features[-i - 1]
- if prev_tail_features is not None:
- if prev_tail_features.shape != cur_tail_input.shape:
- prev_tail_features = F.interpolate(prev_tail_features, size=cur_tail_input.shape[2:],
- mode='bilinear', align_corners=False)
- cur_tail_input = torch.cat((cur_tail_input, prev_tail_features), dim=1)
-
- cur_out, cur_tail_feats = self.tails[scale_i](cur_tail_input, return_last_act=True)
-
- prev_tail_features = cur_tail_feats
- all_outputs.append(cur_out)
-
- if self.out_cumulative:
- all_outputs_cum = [all_outputs[0]]
- for i in range(1, len(ms_features)):
- cur_out = all_outputs[i]
- cur_out_cum = cur_out + F.interpolate(all_outputs_cum[-1], size=cur_out.shape[2:],
- mode='bilinear', align_corners=False)
- all_outputs_cum.append(cur_out_cum)
- all_outputs = all_outputs_cum
-
- if self.return_only_hr:
- return all_outputs[-1]
- else:
- return all_outputs[::-1]
-
-
-class MultiscaleDiscriminatorSimple(nn.Module):
- def __init__(self, ms_impl):
- super().__init__()
- self.ms_impl = nn.ModuleList(ms_impl)
-
- @property
- def num_scales(self):
- return len(self.ms_impl)
-
- def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \
- -> List[Tuple[torch.Tensor, List[torch.Tensor]]]:
- """
- :param ms_inputs: List of inputs of different resolutions from HR to LR
- :param smallest_scales_num: int or None, number of smallest scales to take at input
- :return: List of pairs (prediction, features) for different resolutions from HR to LR
- """
- if smallest_scales_num is None:
- assert len(self.ms_impl) == len(ms_inputs), (len(self.ms_impl), len(ms_inputs), smallest_scales_num)
-            smallest_scales_num = len(self.ms_impl)
- else:
- assert smallest_scales_num == len(ms_inputs) <= len(self.ms_impl), \
- (len(self.ms_impl), len(ms_inputs), smallest_scales_num)
-
- return [cur_discr(cur_input) for cur_discr, cur_input in zip(self.ms_impl[-smallest_scales_num:], ms_inputs)]
-
-
-class SingleToMultiScaleInputMixin:
- def forward(self, x: torch.Tensor) -> List:
- orig_height, orig_width = x.shape[2:]
- factors = [2 ** i for i in range(self.num_scales)]
- ms_inputs = [F.interpolate(x, size=(orig_height // f, orig_width // f), mode='bilinear', align_corners=False)
- for f in factors]
- return super().forward(ms_inputs)
-
-
-class GeneratorMultiToSingleOutputMixin:
- def forward(self, x):
- return super().forward(x)[0]
-
-
-class DiscriminatorMultiToSingleOutputMixin:
- def forward(self, x):
-        box_regression = permute_and_flatten(box_regression, N, A, 18, H, W) # N H*W*A 18
- return out_feat_tuples[0][0], [f for _, flist in out_feat_tuples for f in flist]
-
-
-class DiscriminatorMultiToSingleOutputStackedMixin:
- def __init__(self, *args, return_feats_only_levels=None, **kwargs):
- super().__init__(*args, **kwargs)
- self.return_feats_only_levels = return_feats_only_levels
-
- def forward(self, x):
- out_feat_tuples = super().forward(x)
- outs = [out for out, _ in out_feat_tuples]
- scaled_outs = [outs[0]] + [F.interpolate(cur_out, size=outs[0].shape[-2:],
- mode='bilinear', align_corners=False)
- for cur_out in outs[1:]]
- out = torch.cat(scaled_outs, dim=1)
- if self.return_feats_only_levels is not None:
- feat_lists = [out_feat_tuples[i][1] for i in self.return_feats_only_levels]
- else:
- feat_lists = [flist for _, flist in out_feat_tuples]
- feats = [f for flist in feat_lists for f in flist]
- return out, feats
-
-
-class MultiscaleDiscrSingleInput(SingleToMultiScaleInputMixin, DiscriminatorMultiToSingleOutputStackedMixin, MultiscaleDiscriminatorSimple):
- pass
-
-
-class MultiscaleResNetSingle(GeneratorMultiToSingleOutputMixin, SingleToMultiScaleInputMixin, MultiscaleResNet):
- pass
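
As a small aside, the input pyramid built by `SingleToMultiScaleInputMixin` above can be sketched on its own, without the saicinpainting modules. This is a standalone approximation, not the repository's code: the single HR tensor is downscaled by powers of two, HR first, LR last.

```python
# Standalone sketch of the multi-scale input pyramid used by SingleToMultiScaleInputMixin.
import torch
import torch.nn.functional as F

def build_pyramid(x: torch.Tensor, num_scales: int):
    h, w = x.shape[2:]
    factors = [2 ** i for i in range(num_scales)]
    return [F.interpolate(x, size=(h // f, w // f), mode='bilinear', align_corners=False)
            for f in factors]

x = torch.randn(1, 3, 256, 256)
ms_inputs = build_pyramid(x, num_scales=3)
print([tuple(t.shape[2:]) for t in ms_inputs])  # [(256, 256), (128, 128), (64, 64)]
```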
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/lvis.py b/spaces/CVPR/regionclip-demo/detectron2/data/datasets/lvis.py
deleted file mode 100644
index 2248129e798baefda037a8dddf7abe3c8f15dd40..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/lvis.py
+++ /dev/null
@@ -1,357 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import os
-from fvcore.common.timer import Timer
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.structures import BoxMode
-from detectron2.utils.file_io import PathManager
-
-from .builtin_meta import _get_coco_instances_meta
-from .lvis_v0_5_categories import LVIS_CATEGORIES as LVIS_V0_5_CATEGORIES
-from .lvis_v1_categories import LVIS_CATEGORIES as LVIS_V1_CATEGORIES
-
-import torch
-import numpy as np
-"""
-This file contains functions to parse LVIS-format annotations into dicts in the
-"Detectron2 format".
-"""
-
-logger = logging.getLogger(__name__)
-
-__all__ = ["load_lvis_json", "register_lvis_instances", "get_lvis_instances_meta"]
-
-
-def register_lvis_instances(name, metadata, json_file, image_root):
- """
- Register a dataset in LVIS's json annotation format for instance detection and segmentation.
-
- Args:
- name (str): a name that identifies the dataset, e.g. "lvis_v0.5_train".
- metadata (dict): extra metadata associated with this dataset. It can be an empty dict.
- json_file (str): path to the json instance annotation file.
- image_root (str or path-like): directory which contains all the images.
- """
- DatasetCatalog.register(name, lambda: load_lvis_json(json_file, image_root, name))
- MetadataCatalog.get(name).set(
- json_file=json_file, image_root=image_root, evaluator_type="lvis", **metadata
- )
-
-
-def load_lvis_json_original(json_file, image_root, dataset_name=None, filter_open_cls=True, clip_gt_crop=True, max_gt_per_img=500):
- """
- Load a json file in LVIS's annotation format.
-
- Args:
- json_file (str): full path to the LVIS json annotation file.
- image_root (str): the directory where the images in this json file exists.
- dataset_name (str): the name of the dataset (e.g., "lvis_v0.5_train").
- If provided, this function will put "thing_classes" into the metadata
- associated with this dataset.
- filter_open_cls: open-set setting, filter the open-set categories during training
- clip_gt_crop: must filter images with empty annotations or too many GT bbox,
- even if in testing (eg, use CLIP on GT regions)
- Returns:
- list[dict]: a list of dicts in Detectron2 standard format. (See
- `Using Custom Datasets `_ )
-
- Notes:
- 1. This function does not read the image files.
- The results do not have the "image" field.
- """
- from lvis import LVIS
-
- if 'train' in dataset_name: #'zeroshot' in dataset_name and 'train' in dataset_name: # openset setting, filter the novel classes during training
- filter_open_cls = True
- else:
- filter_open_cls = False
-
- json_file = PathManager.get_local_path(json_file)
-
- timer = Timer()
- lvis_api = LVIS(json_file)
- if timer.seconds() > 1:
- logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds()))
-
- if dataset_name is not None:
- meta = get_lvis_instances_meta(dataset_name)
- MetadataCatalog.get(dataset_name).set(**meta)
-
- # sort indices for reproducible results
- img_ids = sorted(lvis_api.imgs.keys())
- # imgs is a list of dicts, each looks something like:
- # {'license': 4,
- # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg',
- # 'file_name': 'COCO_val2014_000000001268.jpg',
- # 'height': 427,
- # 'width': 640,
- # 'date_captured': '2013-11-17 05:57:24',
- # 'id': 1268}
- imgs = lvis_api.load_imgs(img_ids)
- # anns is a list[list[dict]], where each dict is an annotation
- # record for an object. The inner list enumerates the objects in an image
- # and the outer list enumerates over images. Example of anns[0]:
- # [{'segmentation': [[192.81,
- # 247.09,
- # ...
- # 219.03,
- # 249.06]],
- # 'area': 1035.749,
- # 'image_id': 1268,
- # 'bbox': [192.81, 224.8, 74.73, 33.43],
- # 'category_id': 16,
- # 'id': 42986},
- # ...]
- anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids]
-
- # Sanity check that each annotation has a unique id
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
- assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique".format(
- json_file
- )
-
- imgs_anns = list(zip(imgs, anns))
-
- logger.info("Loaded {} images in the LVIS format from {}".format(len(imgs_anns), json_file))
-
- def get_file_name(img_root, img_dict):
- # Determine the path including the split folder ("train2017", "val2017", "test2017") from
- # the coco_url field. Example:
- # 'coco_url': 'http://images.cocodataset.org/train2017/000000155379.jpg'
- split_folder, file_name = img_dict["coco_url"].split("/")[-2:]
- return os.path.join(img_root + split_folder, file_name)
-
- dataset_dicts = []
- cls_type_dict = {cls_meta['id']: cls_meta['frequency'] for cls_meta in lvis_api.dataset['categories']} # map cls id to cls type
- area_dict = {'r': [], 'c': [], 'f': []} # calculate box area for each type of class
- # import os
- # from PIL import Image
- # custom_img_path = 'datasets/epic_sample_frames'
- # custom_img_list = [os.path.join(custom_img_path, item) for item in os.listdir(custom_img_path)]
- # cnt = 0
- for (img_dict, anno_dict_list) in imgs_anns:
- record = {}
- record["file_name"] = get_file_name(image_root, img_dict)
- # record["file_name"] = custom_img_list[cnt]; cnt += 1;
- # if cnt == 46:
- # break # get_file_name(image_root, img_dict)
- # img_file = Image.open(record["file_name"])
- record["height"] = img_dict["height"]
- record["width"] = img_dict["width"]
- # record["height"] = img_file.size[1] # img_dict["height"]
- # record["width"] = img_file.size[0] # img_dict["width"]
- record["not_exhaustive_category_ids"] = img_dict.get("not_exhaustive_category_ids", [])
- record["neg_category_ids"] = img_dict.get("neg_category_ids", [])
- image_id = record["image_id"] = img_dict["id"]
-
- objs = []
- for anno in anno_dict_list:
- # Check that the image_id in this annotation is the same as
- # the image_id we're looking at.
- # This fails only when the data parsing logic or the annotation file is buggy.
- assert anno["image_id"] == image_id
- obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS}
- # LVIS data loader can be used to load COCO dataset categories. In this case `meta`
- # variable will have a field with COCO-specific category mapping.
- if dataset_name is not None and "thing_dataset_id_to_contiguous_id" in meta:
- obj["category_id"] = meta["thing_dataset_id_to_contiguous_id"][anno["category_id"]]
- else:
- obj["category_id"] = anno["category_id"] - 1 # Convert 1-indexed to 0-indexed
- obj['frequency'] = cls_type_dict[anno["category_id"]] # used for open-set filtering
- if filter_open_cls: # filter categories for open-set training
- if obj['frequency'] == 'r':
- continue
- area_dict[obj['frequency']].append(anno["bbox"][2] * anno["bbox"][3])
-
- segm = anno["segmentation"] # list[list[float]]
- # filter out invalid polygons (< 3 points)
- valid_segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6]
- assert len(segm) == len(
- valid_segm
- ), "Annotation contains an invalid polygon with < 3 points"
- assert len(segm) > 0
- obj["segmentation"] = segm
- objs.append(obj)
- if (filter_open_cls or clip_gt_crop) and len(objs) == 0: # no annotation for this image
- continue
- record["annotations"] = objs
- dataset_dicts.append(record)
-
- # For the training in open-set setting, map original category id to new category id number (base categories)
- if filter_open_cls:
- # get new category id in order
- old_to_new = {}
- for i in range(len(cls_type_dict)):
- if cls_type_dict[i+1] != 'r': # cls_type_dict is 1-indexed
- old_to_new[i] = len(old_to_new)
- # map annotation to new category id
- for record in dataset_dicts:
- record.pop('not_exhaustive_category_ids') # won't be used
- record.pop('neg_category_ids') # won't be used
- for obj in record['annotations']:
- obj['category_id'] = old_to_new[obj['category_id']] # 0-indexed id
- assert obj['frequency'] != 'r'
- logger.info("\n\nModel will be trained in the open-set setting! {} / {} categories are kept.\n".format(len(old_to_new),len(cls_type_dict)))
- # calculate box area for each type of class
- area_lst = np.array([0, 400, 1600, 2500, 5000, 10000, 22500, 224 * 224, 90000, 160000, 1e8])
- # rare_cls = np.histogram(np.array(area_dict['r']), bins=area_lst)[0]
- # common_cls = np.histogram(np.array(area_dict['c']), bins=area_lst)[0]
- # freq_cls = np.histogram(np.array(area_dict['f']), bins=area_lst)[0]
- # print("rare classes: {}; \ncommon classes: {}; \nfrequent classes: {}".format(rare_cls/rare_cls.sum()*100, common_cls/common_cls.sum()*100, freq_cls/freq_cls.sum()*100))
- # # apply CLIP on GT regions: some images has large number of GT bbox (eg, 759), remove them, otherwise, OOM
- if clip_gt_crop:
- # len_num = sorted([len(item["annotations"]) for item in dataset_dicts], reverse=True)
- dataset_dicts = sorted(dataset_dicts, key=lambda x: len(x["annotations"]), reverse=True)
- for record in dataset_dicts:
- record["annotations"] = record["annotations"][:max_gt_per_img] # only <10 / 20k images in test have >300 GT boxes
- #dataset_dicts = sorted(dataset_dicts, key=lambda x: len(x["annotations"]))[:12] #[12000:14000] #
- #dataset_dicts = sorted(dataset_dicts, key=lambda x: len(x["annotations"]))[-1200:-1000]
- #eval_cls_acc(dataset_dicts, area_lst)
- return dataset_dicts
-
-def load_lvis_json(json_file, image_root, dataset_name=None, filter_open_cls=True, clip_gt_crop=True, max_gt_per_img=500, custom_img_path='datasets/custom_images'):
- """
-    This is a tentative function for loading custom images.
-    Given a folder of images (e.g., 'datasets/custom_images'), load their metadata into a list of dicts.
- """
- import os
- from PIL import Image
- custom_img_list = [os.path.join(custom_img_path, item) for item in os.listdir(custom_img_path)]
-
- dataset_dicts = []
- for f_i, file_name in enumerate(custom_img_list):
- record = {}
- record["file_name"] = file_name
- img_file = Image.open(record["file_name"])
- record["height"] = img_file.size[1]
- record["width"] = img_file.size[0]
- record["image_id"] = f_i
-
- dataset_dicts.append(record)
-
- return dataset_dicts
-
-def eval_cls_acc(dataset_dicts, area_lst):
- #pred_file = '/home/v-yiwuzhong/projects/detectron2-open-set/output/rcnn_gt_crop/vit/instances_predictions.pth'
- #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_rcnn_resnet50_crop_regions_perclassnms/inference/instances_predictions.pth'
- #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_rcnn_vitb32_crop_regions_perclassnms/inference/instances_predictions.pth'
- #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_roifeatmap/inference/instances_predictions.pth'
- #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_supmrcnnbaselinefpn/inference/instances_predictions.pth'
- #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_supmrcnnbaselinec4/inference/instances_predictions.pth'
- pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_e1-3-3gtbox/inference/instances_predictions.pth'
- predictions = torch.load(pred_file)
- correct = 0
- wrong = 0
- area_threshold = area_lst[1:-1] # np.array([400, 1600, 2500, 5000, 10000, 22500, 224 * 224, 90000, 160000])
- acc_list = [[0, 0] for i in range(area_threshold.shape[0] + 1)]
- small_cnt = 0
- for preds, gts in zip(predictions, dataset_dicts):
- assert preds['image_id'] == gts['image_id'] # same image
- #assert len(preds['instances']) == len(gts['annotations'])
- box_seen = {} # keep a set for the predicted boxes that have been checked
- for pred, gt in zip(preds['instances'], gts['annotations']):
- if pred['bbox'][0] in box_seen: # duplicate box due to perclass NMS
- continue
- else:
- box_seen[pred['bbox'][0]] = 1
- if np.sum(np.array(pred['bbox']) - np.array(gt['bbox'])) < 1.0: # same box
- pass
- else: # has been NMS and shuffled
- for gt in gts['annotations']:
- if np.sum(np.array(pred['bbox']) - np.array(gt['bbox'])) < 1.0: # same box
- break
- assert np.sum(np.array(pred['bbox']) - np.array(gt['bbox'])) < 1.0 # same box
- this_area = gt['bbox'][2] * gt['bbox'][3]
- block = (area_threshold < this_area).nonzero()[0].shape[0]
- if pred['category_id'] == gt['category_id']: # matched
- correct += 1
- acc_list[block][0] += 1
- else:
- wrong += 1
- acc_list[block][1] += 1
-
- print("\n\nGot correct {} and wrong {}. Accuracy is {} / {} = {}\n\n".format(correct,wrong,correct,correct+wrong,correct/(correct+wrong)))
- block_acc = [100 * acc_list[i][0] / (acc_list[i][0] + acc_list[i][1]) for i in range(len(acc_list))]
- block_acc = [round(i, 1) for i in block_acc]
- print("Block accuracy: {}".format(block_acc))
- block_num = [acc_list[i][0] + acc_list[i][1] for i in range(len(acc_list))]
- block_num = list(block_num / np.sum(block_num) * 100)
- block_num = [round(i, 1) for i in block_num]
- print("Block #instances: {}".format(block_num))
- return
-
-def get_lvis_instances_meta(dataset_name):
- """
- Load LVIS metadata.
-
- Args:
- dataset_name (str): LVIS dataset name without the split name (e.g., "lvis_v0.5").
-
- Returns:
- dict: LVIS metadata with keys: thing_classes
- """
- if "cocofied" in dataset_name:
- return _get_coco_instances_meta()
- if "v0.5" in dataset_name:
- return _get_lvis_instances_meta_v0_5()
- elif "v1" in dataset_name:
- return _get_lvis_instances_meta_v1()
- raise ValueError("No built-in metadata for dataset {}".format(dataset_name))
-
-
-def _get_lvis_instances_meta_v0_5():
- assert len(LVIS_V0_5_CATEGORIES) == 1230
- cat_ids = [k["id"] for k in LVIS_V0_5_CATEGORIES]
- assert min(cat_ids) == 1 and max(cat_ids) == len(
- cat_ids
- ), "Category ids are not in [1, #categories], as expected"
- # Ensure that the category list is sorted by id
- lvis_categories = sorted(LVIS_V0_5_CATEGORIES, key=lambda x: x["id"])
- thing_classes = [k["synonyms"][0] for k in lvis_categories]
- meta = {"thing_classes": thing_classes}
- return meta
-
-
-def _get_lvis_instances_meta_v1():
- assert len(LVIS_V1_CATEGORIES) == 1203
- cat_ids = [k["id"] for k in LVIS_V1_CATEGORIES]
- assert min(cat_ids) == 1 and max(cat_ids) == len(
- cat_ids
- ), "Category ids are not in [1, #categories], as expected"
- # Ensure that the category list is sorted by id
- lvis_categories = sorted(LVIS_V1_CATEGORIES, key=lambda x: x["id"])
- thing_classes = [k["synonyms"][0] for k in lvis_categories]
- meta = {"thing_classes": thing_classes}
- return meta
-
-
-if __name__ == "__main__":
- """
- Test the LVIS json dataset loader.
-
- Usage:
- python -m detectron2.data.datasets.lvis \
- path/to/json path/to/image_root dataset_name vis_limit
- """
- import sys
- import numpy as np
- from detectron2.utils.logger import setup_logger
- from PIL import Image
- import detectron2.data.datasets # noqa # add pre-defined metadata
- from detectron2.utils.visualizer import Visualizer
-
- logger = setup_logger(name=__name__)
- meta = MetadataCatalog.get(sys.argv[3])
-
- dicts = load_lvis_json(sys.argv[1], sys.argv[2], sys.argv[3])
- logger.info("Done loading {} samples.".format(len(dicts)))
-
- dirname = "lvis-data-vis"
- os.makedirs(dirname, exist_ok=True)
- for d in dicts[: int(sys.argv[4])]:
- img = np.array(Image.open(d["file_name"]))
- visualizer = Visualizer(img, metadata=meta)
- vis = visualizer.draw_dataset_dict(d)
- fpath = os.path.join(dirname, os.path.basename(d["file_name"]))
- vis.save(fpath)
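
The custom-image branch of `load_lvis_json` above boils down to collecting file paths and image sizes. Here is a standalone sketch of that idea; the folder path is only an example and this is not the original detectron2 loader.

```python
# Standalone sketch: build minimal Detectron2-style records
# (file_name / height / width / image_id) from a folder of images.
import os
from PIL import Image

def load_custom_images(custom_img_path='datasets/custom_images'):
    files = [os.path.join(custom_img_path, f) for f in sorted(os.listdir(custom_img_path))]
    dataset_dicts = []
    for image_id, file_name in enumerate(files):
        with Image.open(file_name) as img:
            width, height = img.size          # PIL reports (width, height)
        dataset_dicts.append({
            'file_name': file_name,
            'height': height,
            'width': width,
            'image_id': image_id,
        })
    return dataset_dicts
```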
diff --git a/spaces/Cecil8352/vits-models/attentions.py b/spaces/Cecil8352/vits-models/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/Cecil8352/vits-models/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
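
The pad-and-reshape trick inside `_relative_position_to_absolute_position` is easier to see on a tiny tensor. Below is a self-contained sketch; `convert_pad_shape` is re-implemented locally here, since in the file above it comes from `commons`.

```python
# Demo of the pad-and-reshape trick: a [b, h, l, 2*l-1] tensor of relative
# logits becomes a [b, h, l, l] tensor of absolute-position logits.
import torch
import torch.nn.functional as F

def convert_pad_shape(pad_shape):
    # [[0,0],[0,0],[0,1]] -> [0, 1, 0, 0, 0, 0]  (F.pad expects the last dim first)
    return [item for sublist in pad_shape[::-1] for item in sublist]

def relative_to_absolute(x: torch.Tensor) -> torch.Tensor:
    batch, heads, length, _ = x.size()
    x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
    x_flat = x.view(batch, heads, length * 2 * length)
    x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
    return x_flat.view(batch, heads, length + 1, 2 * length - 1)[:, :, :length, length - 1:]

rel = torch.randn(1, 2, 4, 7)           # l = 4, so 2*l-1 = 7 relative positions
print(relative_to_absolute(rel).shape)  # torch.Size([1, 2, 4, 4])
```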
diff --git a/spaces/CofAI/chat/client/css/style.css b/spaces/CofAI/chat/client/css/style.css
deleted file mode 100644
index 918cf83eb9a36bf07c861e4476c60af65f5bf91d..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/client/css/style.css
+++ /dev/null
@@ -1,18 +0,0 @@
-@import "./global.css";
-@import "./hljs.css";
-@import "./main.css";
-@import "./sidebar.css";
-@import "./conversation.css";
-@import "./message.css";
-@import "./stop-generating.css";
-@import "./typing.css";
-@import "./checkbox.css";
-@import "./label.css";
-@import "./button.css";
-@import "./buttons.css";
-@import "./dropdown.css";
-@import "./field.css";
-@import "./select.css";
-@import "./options.css";
-@import "./settings.css";
-@import "./message-input.css";
diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/Cong723/gpt-academic-public/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
deleted file mode 100644
index e57f80f1d45bd3ec23837253848f7b32a5ccd751..0000000000000000000000000000000000000000
--- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
+++ /dev/null
@@ -1,138 +0,0 @@
-import threading
-from request_llm.bridge_all import predict_no_ui_long_connection
-from toolbox import update_ui
-from toolbox import CatchException, write_results_to_file, report_execption
-from .crazy_utils import breakdown_txt_to_satisfy_token_limit
-
-def extract_code_block_carefully(txt):
- splitted = txt.split('```')
- n_code_block_seg = len(splitted) - 1
- if n_code_block_seg <= 1: return txt
- # 剩下的情况都开头除去 ``` 结尾除去一次 ```
- txt_out = '```'.join(splitted[1:-1])
- return txt_out
-
-
-
-def break_txt_into_half_at_some_linebreak(txt):
- lines = txt.split('\n')
- n_lines = len(lines)
- pre = lines[:(n_lines//2)]
- post = lines[(n_lines//2):]
- return "\n".join(pre), "\n".join(post)
-
-
-@CatchException
-def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
- # 第1步:清空历史,以免输入溢出
- history = []
-
- # 第2步:尝试导入依赖,如果缺少依赖,则给出安装建议
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # 第3步:集合文件
- import time, glob, os, shutil, re
- os.makedirs('gpt_log/generated_english_version', exist_ok=True)
- os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True)
- file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
- [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
- # file_manifest = ['./toolbox.py']
- i_say_show_user_buffer = []
-
- # 第4步:随便显示点什么防止卡顿的感觉
- for index, fp in enumerate(file_manifest):
- # if 'test_project' in fp: continue
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}'
- i_say_show_user_buffer.append(i_say_show_user)
- chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-
- # 第5步:Token限制下的截断与处理
- MAX_TOKEN = 3000
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
-
-
- # 第6步:任务函数
- mutable_return = [None for _ in file_manifest]
- observe_window = [[""] for _ in file_manifest]
- def thread_worker(fp,index):
- if index > 10:
- time.sleep(60)
- print('Openai 限制免费用户每分钟20次请求,降低请求频率中。')
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```'
- try:
- gpt_say = ""
- # 分解代码文件
- file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN)
- for file_content_partial in file_content_breakdown:
- i_say = i_say_template(fp, file_content_partial)
- # # ** gpt request **
- gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
- gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
- gpt_say += gpt_say_partial
- mutable_return[index] = gpt_say
- except ConnectionAbortedError as token_exceed_err:
- print('至少一个线程任务Token溢出而失败', e)
- except Exception as e:
- print('至少一个线程任务意外失败', e)
-
- # 第7步:所有线程同时开始执行任务函数
- handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
- for h in handles:
- h.daemon = True
- h.start()
- chatbot.append(('开始了吗?', f'多线程操作已经开始'))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # 第8步:循环轮询各个线程是否执行完毕
- cnt = 0
- while True:
- cnt += 1
- time.sleep(0.2)
- th_alive = [h.is_alive() for h in handles]
- if not any(th_alive): break
- # 更好的UI视觉效果
- observe_win = []
- for thread_index, alive in enumerate(th_alive):
- observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace(' ','.....').replace('$','.')+"... ]")
- stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)]
- stat_str = ''.join(stat)
- chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1)))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # 第9步:把结果写入文件
- for index, h in enumerate(handles):
- h.join() # 这里其实不需要join了,肯定已经都结束了
- fp = file_manifest[index]
- gpt_say = mutable_return[index]
- i_say_show_user = i_say_show_user_buffer[index]
-
- where_to_relocate = f'gpt_log/generated_english_version/{fp}'
- if gpt_say is not None:
- with open(where_to_relocate, 'w+', encoding='utf-8') as f:
- f.write(gpt_say)
- else: # 失败
- shutil.copyfile(file_manifest[index], where_to_relocate)
- chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}'))
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- time.sleep(1)
-
- # 第10步:备份一个文件
- res = write_results_to_file(history)
- chatbot.append(("生成一份任务执行报告", res))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
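
Stripped of the GPT calls and UI updates, the plugin above is a worker-per-file threading pattern with a polling loop. The toy approximation below keeps only that skeleton; the task names and sleep times are made up.

```python
# Toy version of the threading pattern above: one worker per task writes into a
# shared results list, while the main loop polls until every thread finishes.
import threading
import time

tasks = ['a.py', 'b.py', 'c.py']
results = [None] * len(tasks)

def worker(task, index):
    time.sleep(0.1 * (index + 1))        # stand-in for the long-running LLM request
    results[index] = f'translated {task}'

handles = [threading.Thread(target=worker, args=(t, i), daemon=True)
           for i, t in enumerate(tasks)]
for h in handles:
    h.start()

while any(h.is_alive() for h in handles):  # poll instead of blocking on join()
    time.sleep(0.05)

for h in handles:
    h.join()
print(results)
```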
diff --git a/spaces/Cvandi/remake/scripts/pytorch2onnx.py b/spaces/Cvandi/remake/scripts/pytorch2onnx.py
deleted file mode 100644
index 09d99b2e0171265e70e7507ed8e882b616b449a1..0000000000000000000000000000000000000000
--- a/spaces/Cvandi/remake/scripts/pytorch2onnx.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import argparse
-import torch
-import torch.onnx
-from basicsr.archs.rrdbnet_arch import RRDBNet
-
-
-def main(args):
- # An instance of the model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- if args.params:
- keyname = 'params'
- else:
- keyname = 'params_ema'
- model.load_state_dict(torch.load(args.input)[keyname])
- # set the train mode to false since we will only run the forward pass.
- model.train(False)
- model.cpu().eval()
-
- # An example input
- x = torch.rand(1, 3, 64, 64)
- # Export the model
- with torch.no_grad():
- torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True)
- print(torch_out.shape)
-
-
-if __name__ == '__main__':
- """Convert pytorch model to onnx models"""
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--input', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth', help='Input model path')
- parser.add_argument('--output', type=str, default='realesrgan-x4.onnx', help='Output onnx path')
- parser.add_argument('--params', action='store_false', help='Use params instead of params_ema')
- args = parser.parse_args()
-
- main(args)
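
The export call above is tied to the Real-ESRGAN RRDBNet checkpoint. The same flow can be tried on a throwaway model to see the ONNX export step in isolation; this is only a sketch, and the output path is made up.

```python
# Same export flow as above, but on a toy model so it runs without Real-ESRGAN weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1))
model.eval()

x = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    torch.onnx.export(model, x, 'toy.onnx', opset_version=11, export_params=True,
                      input_names=['input'], output_names=['output'])
```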
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/inference.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/inference.py
deleted file mode 100644
index 9a2e3871e42fac9fcef3db00da626ec0386d68b2..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/inference.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import torch
-
-from maskrcnn_benchmark.modeling.box_coder import BoxCoder
-from maskrcnn_benchmark.structures.bounding_box import BoxList
-from maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist
-from maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms
-from maskrcnn_benchmark.structures.boxlist_ops import remove_small_boxes
-
-from ..utils import cat
-from .utils import permute_and_flatten
-
-class RPNPostProcessor(torch.nn.Module):
- """
- Performs post-processing on the outputs of the RPN boxes, before feeding the
- proposals to the heads
- """
-
- def __init__(
- self,
- pre_nms_top_n,
- post_nms_top_n,
- nms_thresh,
- min_size,
- box_coder=None,
- fpn_post_nms_top_n=None,
- ):
- """
- Arguments:
- pre_nms_top_n (int)
- post_nms_top_n (int)
- nms_thresh (float)
- min_size (int)
- box_coder (BoxCoder)
- fpn_post_nms_top_n (int)
- """
- super(RPNPostProcessor, self).__init__()
- self.pre_nms_top_n = pre_nms_top_n # 12000
- self.post_nms_top_n = post_nms_top_n # 2000
- self.nms_thresh = nms_thresh # 0.7
- self.min_size = min_size # 0
-
- if box_coder is None:
- box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))
- self.box_coder = box_coder
-
- if fpn_post_nms_top_n is None:
- fpn_post_nms_top_n = post_nms_top_n
- self.fpn_post_nms_top_n = fpn_post_nms_top_n # 2000
-
- def add_gt_proposals(self, proposals, targets):
- """
- Arguments:
- proposals: list[BoxList]
- targets: list[BoxList]
- """
- # Get the device we're operating on
- device = proposals[0].bbox.device
-
- gt_boxes = [target.copy_with_fields([]) for target in targets]
-
- # later cat of bbox requires all fields to be present for all bbox
- # so we need to add a dummy for objectness that's missing
- for gt_box in gt_boxes:
- gt_box.add_field("objectness", torch.ones(len(gt_box), device=device))
-
- proposals = [
- cat_boxlist((proposal, gt_box))
- for proposal, gt_box in zip(proposals, gt_boxes)
- ]
-
- return proposals
-
- def forward_for_single_feature_map(self, anchors, objectness, box_regression):
- """
- Arguments:
- anchors: list[BoxList] # [image,number,[n,4]]
- objectness: tensor of size N, A, H, W
- box_regression: tensor of size N, A * 4, H, W
- """
- device = objectness.device
- N, A, H, W = objectness.shape
- # put in the same format as anchors
- objectness = permute_and_flatten(objectness, N, A, 1, H, W).view(N, -1) # N H*W*A*1
- objectness = objectness.sigmoid()
- box_regression = permute_and_flatten(box_regression, N, A, 18, H, W) # N H*W*A 4
- num_anchors = A * H * W # 391040 97760
-
- pre_nms_top_n = min(self.pre_nms_top_n, num_anchors) #12000
- objectness, topk_idx = objectness.topk(pre_nms_top_n, dim=1, sorted=True)
- # objectness = objectness.cpu()
- batch_idx = torch.arange(N, device=device)[:, None]
- box_regression = box_regression[batch_idx, topk_idx]
- image_shapes = [box.size for box in anchors]
- concat_anchors = torch.cat([a.bbox for a in anchors], dim=0)
- concat_anchors = concat_anchors.reshape(N, -1, 4)[batch_idx, topk_idx]
- proposals = self.box_coder.decode_iou(
- box_regression.view(-1, 18), concat_anchors.view(-1, 4)
- )
-
- proposals = proposals.view(N, -1, 4)
-
- result = []
- for proposal, score, im_shape in zip(proposals, objectness, image_shapes):
- boxlist = BoxList(proposal, im_shape, mode="xyxy")
- boxlist.add_field("objectness", score)
- boxlist = boxlist.clip_to_image(remove_empty=False)
- boxlist = remove_small_boxes(boxlist, self.min_size)
- boxlist = boxlist_nms(
- boxlist,
- self.nms_thresh,
- max_proposals=self.post_nms_top_n,
- score_field="objectness",
- )
- result.append(boxlist)
- return result
-
- def forward(self, anchors, objectness, box_regression, targets=None):
- """
- Arguments:
- anchors: list[list[BoxList]]
- objectness: list[tensor]
- box_regression: list[tensor]
-
- Returns:
- boxlists (list[BoxList]): the post-processed anchors, after
- applying box decoding and NMS
- """
- sampled_boxes = []
- num_levels = len(objectness) # classification
- anchors = list(zip(*anchors)) # [image,number,[n,4]]
- # i =-1
- for a, o, b in zip(anchors, objectness, box_regression):
- sampled_boxes.append(self.forward_for_single_feature_map(a, o, b))
-
-
- boxlists = list(zip(*sampled_boxes))
- boxlists = [cat_boxlist(boxlist) for boxlist in boxlists]
-
- if num_levels > 1:
- boxlists = self.select_over_all_levels(boxlists)
-
- # append ground-truth bboxes to proposals
- if self.training and targets is not None:
- boxlists = self.add_gt_proposals(boxlists, targets)
-
- return boxlists
-
- def select_over_all_levels(self, boxlists):
- num_images = len(boxlists)
- # different behavior during training and during testing:
- # during training, post_nms_top_n is over *all* the proposals combined, while
- # during testing, it is over the proposals for each image
- # TODO resolve this difference and make it consistent. It should be per image,
- # and not per batch
- if self.training:
- objectness = torch.cat(
- [boxlist.get_field("objectness") for boxlist in boxlists], dim=0
- )
- box_sizes = [len(boxlist) for boxlist in boxlists]
- post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness))
- _, inds_sorted = torch.topk(objectness, post_nms_top_n, dim=0, sorted=True)
- inds_mask = torch.zeros_like(objectness, dtype=torch.uint8)
- inds_mask[inds_sorted] = 1
- inds_mask = inds_mask.split(box_sizes)
- for i in range(num_images):
- boxlists[i] = boxlists[i][inds_mask[i]]
- else:
- for i in range(num_images):
- objectness = boxlists[i].get_field("objectness")
- post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness))
- _, inds_sorted = torch.topk(
- objectness, post_nms_top_n, dim=0, sorted=True
- )
- boxlists[i] = boxlists[i][inds_sorted]
- return boxlists
-
-
-def make_rpn_postprocessor(config, rpn_box_coder, is_train):
- fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN # 2000
- if not is_train:
- fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST
-
- pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TRAIN # 12000
- post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TRAIN # 2000
- if not is_train:
- pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TEST
- post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TEST
- nms_thresh = config.MODEL.RPN.NMS_THRESH # 0.7
- min_size = config.MODEL.RPN.MIN_SIZE # 0
- box_selector = RPNPostProcessor(
- pre_nms_top_n=pre_nms_top_n, #12000
- post_nms_top_n=post_nms_top_n, #2000
- nms_thresh=nms_thresh, # 0.7
- min_size=min_size, # 0
- box_coder=rpn_box_coder,
- fpn_post_nms_top_n=fpn_post_nms_top_n, #2000
- )
- return box_selector
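
The core of `forward_for_single_feature_map` above is a pre-NMS top-k followed by NMS with a post-NMS cap. The sketch below expresses that selection in plain torch/torchvision, without the BoxList machinery; the thresholds mirror the defaults noted in the comments and the helper name is hypothetical.

```python
# Plain-torch sketch of the per-level proposal selection: keep the pre_nms_top_n
# highest-scoring boxes, then run NMS and cap the survivors at post_nms_top_n.
import torch
from torchvision.ops import nms

def select_proposals(boxes, scores, pre_nms_top_n=12000, post_nms_top_n=2000, nms_thresh=0.7):
    k = min(pre_nms_top_n, scores.numel())
    scores, topk_idx = scores.topk(k, sorted=True)
    boxes = boxes[topk_idx]
    keep = nms(boxes, scores, nms_thresh)[:post_nms_top_n]
    return boxes[keep], scores[keep]

boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [50., 50., 60., 60.]])
scores = torch.tensor([0.9, 0.8, 0.7])
kept_boxes, kept_scores = select_proposals(boxes, scores, nms_thresh=0.5)
print(kept_boxes)  # the two overlapping boxes collapse to the higher-scoring one
```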
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py
deleted file mode 100644
index 1abc02590c240377177d4ac12fe4848720e24959..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .T_S_I_V_ import table_T_S_I_V_
-
-
-class table_T_S_I_P_(table_T_S_I_V_):
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9da94804.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9da94804.css
deleted file mode 100644
index 79d901421a55ea578fdaf2c50c84e8fafcea8c41..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9da94804.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-1gww5xe{display:flex;position:absolute;justify-content:center;align-items:center;border-radius:var(--radius-sm);background-color:#000c;padding:var(--size-1) .4rem;color:#fff;font-size:var(--text-sm)}span.svelte-1gww5xe{display:inline-block;margin-right:var(--size-1);border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}.wrap.svelte-1mjxput{margin-top:var(--size-3)}.legend.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.legend-item.svelte-1mjxput{display:flex;align-items:center;gap:var(--spacing-sm);margin-right:var(--size-2);margin-left:var(--size-2)}.legend-box.svelte-1mjxput{display:inline-block;border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}svg.svelte-1mjxput{width:var(--size-full)}.label-text.svelte-1mjxput{fill:var(--body-text-color);font-size:var(--text-sm);font-family:var(--font-mono)}.main-label.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.chart.svelte-etmurc{display:flex;display:relative;justify-content:center;align-items:center;background:var(--background-fill-primary);width:var(--size-full);height:var(--size-64)}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection.py
deleted file mode 100644
index 9014ab957a2b03a9ca258ec693f15189c6d8cd77..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import itertools
-import logging
-import ssl
-from types import TracebackType
-from typing import Iterable, Iterator, Optional, Type
-
-from .._backends.auto import AutoBackend
-from .._backends.base import SOCKET_OPTION, AsyncNetworkBackend, AsyncNetworkStream
-from .._exceptions import ConnectError, ConnectionNotAvailable, ConnectTimeout
-from .._models import Origin, Request, Response
-from .._ssl import default_ssl_context
-from .._synchronization import AsyncLock
-from .._trace import Trace
-from .http11 import AsyncHTTP11Connection
-from .interfaces import AsyncConnectionInterface
-
-RETRIES_BACKOFF_FACTOR = 0.5 # 0s, 0.5s, 1s, 2s, 4s, etc.
-
-
-logger = logging.getLogger("httpcore.connection")
-
-
-def exponential_backoff(factor: float) -> Iterator[float]:
- yield 0
- for n in itertools.count(2):
- yield factor * (2 ** (n - 2))
-
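As a quick sanity check, the generator reproduces the schedule described in the `RETRIES_BACKOFF_FACTOR` comment; this assumes an installed `httpcore` that matches the vendored file above.

```python
import itertools

from httpcore._async.connection import exponential_backoff

delays = list(itertools.islice(exponential_backoff(0.5), 6))
assert delays == [0, 0.5, 1.0, 2.0, 4.0, 8.0]
```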
-
-class AsyncHTTPConnection(AsyncConnectionInterface):
- def __init__(
- self,
- origin: Origin,
- ssl_context: Optional[ssl.SSLContext] = None,
- keepalive_expiry: Optional[float] = None,
- http1: bool = True,
- http2: bool = False,
- retries: int = 0,
- local_address: Optional[str] = None,
- uds: Optional[str] = None,
- network_backend: Optional[AsyncNetworkBackend] = None,
- socket_options: Optional[Iterable[SOCKET_OPTION]] = None,
- ) -> None:
- self._origin = origin
- self._ssl_context = ssl_context
- self._keepalive_expiry = keepalive_expiry
- self._http1 = http1
- self._http2 = http2
- self._retries = retries
- self._local_address = local_address
- self._uds = uds
-
- self._network_backend: AsyncNetworkBackend = (
- AutoBackend() if network_backend is None else network_backend
- )
- self._connection: Optional[AsyncConnectionInterface] = None
- self._connect_failed: bool = False
- self._request_lock = AsyncLock()
- self._socket_options = socket_options
-
- async def handle_async_request(self, request: Request) -> Response:
- if not self.can_handle_request(request.url.origin):
- raise RuntimeError(
- f"Attempted to send request to {request.url.origin} on connection to {self._origin}"
- )
-
- async with self._request_lock:
- if self._connection is None:
- try:
- stream = await self._connect(request)
-
- ssl_object = stream.get_extra_info("ssl_object")
- http2_negotiated = (
- ssl_object is not None
- and ssl_object.selected_alpn_protocol() == "h2"
- )
- if http2_negotiated or (self._http2 and not self._http1):
- from .http2 import AsyncHTTP2Connection
-
- self._connection = AsyncHTTP2Connection(
- origin=self._origin,
- stream=stream,
- keepalive_expiry=self._keepalive_expiry,
- )
- else:
- self._connection = AsyncHTTP11Connection(
- origin=self._origin,
- stream=stream,
- keepalive_expiry=self._keepalive_expiry,
- )
- except Exception as exc:
- self._connect_failed = True
- raise exc
- elif not self._connection.is_available():
- raise ConnectionNotAvailable()
-
- return await self._connection.handle_async_request(request)
-
- async def _connect(self, request: Request) -> AsyncNetworkStream:
- timeouts = request.extensions.get("timeout", {})
- sni_hostname = request.extensions.get("sni_hostname", None)
- timeout = timeouts.get("connect", None)
-
- retries_left = self._retries
- delays = exponential_backoff(factor=RETRIES_BACKOFF_FACTOR)
-
- while True:
- try:
- if self._uds is None:
- kwargs = {
- "host": self._origin.host.decode("ascii"),
- "port": self._origin.port,
- "local_address": self._local_address,
- "timeout": timeout,
- "socket_options": self._socket_options,
- }
- async with Trace("connect_tcp", logger, request, kwargs) as trace:
- stream = await self._network_backend.connect_tcp(**kwargs)
- trace.return_value = stream
- else:
- kwargs = {
- "path": self._uds,
- "timeout": timeout,
- "socket_options": self._socket_options,
- }
- async with Trace(
- "connect_unix_socket", logger, request, kwargs
- ) as trace:
- stream = await self._network_backend.connect_unix_socket(
- **kwargs
- )
- trace.return_value = stream
-
- if self._origin.scheme == b"https":
- ssl_context = (
- default_ssl_context()
- if self._ssl_context is None
- else self._ssl_context
- )
- alpn_protocols = ["http/1.1", "h2"] if self._http2 else ["http/1.1"]
- ssl_context.set_alpn_protocols(alpn_protocols)
-
- kwargs = {
- "ssl_context": ssl_context,
- "server_hostname": sni_hostname
- or self._origin.host.decode("ascii"),
- "timeout": timeout,
- }
- async with Trace("start_tls", logger, request, kwargs) as trace:
- stream = await stream.start_tls(**kwargs)
- trace.return_value = stream
- return stream
- except (ConnectError, ConnectTimeout):
- if retries_left <= 0:
- raise
- retries_left -= 1
- delay = next(delays)
- async with Trace("retry", logger, request, kwargs) as trace:
- await self._network_backend.sleep(delay)
-
- def can_handle_request(self, origin: Origin) -> bool:
- return origin == self._origin
-
- async def aclose(self) -> None:
- if self._connection is not None:
- async with Trace("close", logger, None, {}):
- await self._connection.aclose()
-
- def is_available(self) -> bool:
- if self._connection is None:
- # If HTTP/2 support is enabled, and the resulting connection could
- # end up as HTTP/2 then we should indicate the connection as being
- # available to service multiple requests.
- return (
- self._http2
- and (self._origin.scheme == b"https" or not self._http1)
- and not self._connect_failed
- )
- return self._connection.is_available()
-
- def has_expired(self) -> bool:
- if self._connection is None:
- return self._connect_failed
- return self._connection.has_expired()
-
- def is_idle(self) -> bool:
- if self._connection is None:
- return self._connect_failed
- return self._connection.is_idle()
-
- def is_closed(self) -> bool:
- if self._connection is None:
- return self._connect_failed
- return self._connection.is_closed()
-
- def info(self) -> str:
- if self._connection is None:
- return "CONNECTION FAILED" if self._connect_failed else "CONNECTING"
- return self._connection.info()
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} [{self.info()}]>"
-
- # These context managers are not used in the standard flow, but are
- # useful for testing or working with connection instances directly.
-
- async def __aenter__(self) -> "AsyncHTTPConnection":
- return self
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]] = None,
- exc_value: Optional[BaseException] = None,
- traceback: Optional[TracebackType] = None,
- ) -> None:
- await self.aclose()
diff --git a/spaces/Dai1123/CalqChat/README.md b/spaces/Dai1123/CalqChat/README.md
deleted file mode 100644
index 9454fc58f8bf8701aa5c061ab77d4576a3e60ee0..0000000000000000000000000000000000000000
--- a/spaces/Dai1123/CalqChat/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: CalqResume
-emoji: 🦀
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/pano_s2d3d_dataset.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/pano_s2d3d_dataset.py
deleted file mode 100644
index b6939fea1a08e5f1c1eb985b85fc739be0c53b04..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/pano_s2d3d_dataset.py
+++ /dev/null
@@ -1,107 +0,0 @@
-"""
-@date: 2021/6/16
-@description:
-"""
-import math
-import os
-import numpy as np
-
-from dataset.communal.read import read_image, read_label
-from dataset.communal.base_dataset import BaseDataset
-from utils.logger import get_logger
-
-
-class PanoS2D3DDataset(BaseDataset):
- def __init__(self, root_dir, mode, shape=None, max_wall_num=0, aug=None, camera_height=1.6, logger=None,
- split_list=None, patch_num=256, keys=None, for_test_index=None, subset=None):
- super().__init__(mode, shape, max_wall_num, aug, camera_height, patch_num, keys)
-
- if logger is None:
- logger = get_logger()
- self.root_dir = root_dir
-
- if mode is None:
- return
- label_dir = os.path.join(root_dir, 'valid' if mode == 'val' else mode, 'label_cor')
- img_dir = os.path.join(root_dir, 'valid' if mode == 'val' else mode, 'img')
-
- if split_list is None:
- split_list = [name.split('.')[0] for name in os.listdir(label_dir) if
- not name.startswith('.') and name.endswith('txt')]
-
- split_list.sort()
-
-        assert subset in ('pano', 's2d3d', None), f"invalid subset: {subset}"
-        if subset == 'pano':
-            split_list = [name for name in split_list if 'pano_' in name]
-            logger.info("Using the PanoContext subset")
-        elif subset == 's2d3d':
-            split_list = [name for name in split_list if 'camera_' in name]
-            logger.info("Using the Stanford2D3D subset")
-
- if for_test_index is not None:
- split_list = split_list[:for_test_index]
-
- self.data = []
- invalid_num = 0
- for name in split_list:
- img_path = os.path.join(img_dir, f"{name}.png")
- label_path = os.path.join(label_dir, f"{name}.txt")
-
- if not os.path.exists(img_path):
- logger.warning(f"{img_path} not exists")
- invalid_num += 1
- continue
- if not os.path.exists(label_path):
- logger.warning(f"{label_path} not exists")
- invalid_num += 1
- continue
-
- with open(label_path, 'r') as f:
- lines = [line for line in f.readlines() if
- len([c for c in line.split(' ') if c[0].isnumeric()]) > 1]
- if len(lines) % 2 != 0:
- invalid_num += 1
- continue
- self.data.append([img_path, label_path])
-
- logger.info(
- f"Build dataset mode: {self.mode} valid: {len(self.data)} invalid: {invalid_num}")
-
- def __getitem__(self, idx):
- rgb_path, label_path = self.data[idx]
- label = read_label(label_path, data_type='Pano_S2D3D')
- image = read_image(rgb_path, self.shape)
- output = self.process_data(label, image, self.patch_num)
- return output
-
-
-if __name__ == '__main__':
-
- modes = ['test', 'val', 'train']
- for i in range(1):
- for mode in modes:
- print(mode)
- mp3d_dataset = PanoS2D3DDataset(root_dir='../src/dataset/pano_s2d3d', mode=mode, aug={
- # 'STRETCH': True,
- # 'ROTATE': True,
- # 'FLIP': True,
- # 'GAMMA': True
- })
- continue
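-            # NOTE: the `continue` above makes the visualization code below
-            # unreachable; running it would also require tqdm, PIL.Image and the
-            # project's boundary/floorplan helpers, which are not imported here.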
- save_dir = f'../src/dataset/pano_s2d3d/visualization/{mode}'
- if not os.path.isdir(save_dir):
- os.makedirs(save_dir)
-
- bar = tqdm(mp3d_dataset, ncols=100)
- for data in bar:
- bar.set_description(f"Processing {data['id']}")
- boundary_list = depth2boundaries(data['ratio'], data['depth'], step=None)
- pano_img = draw_boundaries(data['image'].transpose(1, 2, 0), boundary_list=boundary_list, show=False)
- Image.fromarray((pano_img * 255).astype(np.uint8)).save(
- os.path.join(save_dir, f"{data['id']}_boundary.png"))
-
- floorplan = draw_floorplan(uv2xyz(boundary_list[0])[..., ::2], show=False,
- marker_color=None, center_color=0.8, show_radius=None)
- Image.fromarray((floorplan.squeeze() * 255).astype(np.uint8)).save(
- os.path.join(save_dir, f"{data['id']}_floorplan.png"))
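A minimal usage sketch mirroring the `__main__` block above; the root directory and the `'pano'` subset are taken from that block, and the import path follows this file's location.

```python
from dataset.pano_s2d3d_dataset import PanoS2D3DDataset

dataset = PanoS2D3DDataset(root_dir='../src/dataset/pano_s2d3d', mode='train', subset='pano')
print(f"{len(dataset)} valid samples")
sample = dataset[0]  # dict built by BaseDataset.process_data from the image and corner label
```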
diff --git a/spaces/Datasculptor/DescriptionGPT/detic/modeling/backbone/swintransformer.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/backbone/swintransformer.py
deleted file mode 100644
index 21cabb37dd87a443e27eeb805f9739bef86540bf..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/backbone/swintransformer.py
+++ /dev/null
@@ -1,750 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu, Yutong Lin, Yixuan Wei
-# --------------------------------------------------------
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Xingyi Zhou from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py
-
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.backbone.backbone import Backbone
-from detectron2.modeling.backbone.build import BACKBONE_REGISTRY
-from detectron2.modeling.backbone.fpn import FPN
-
-from centernet.modeling.backbone.fpn_p5 import LastLevelP6P7_P5
-from centernet.modeling.backbone.bifpn import BiFPN
-# from .checkpoint import load_checkpoint
-
-class Mlp(nn.Module):
- """ Multilayer perceptron."""
-
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
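A small check, assuming the two helpers above are in scope: `window_partition` and `window_reverse` are exact inverses whenever `H` and `W` are multiples of the window size, which the padding in `SwinTransformerBlock` below guarantees.

```python
import torch

x = torch.randn(2, 14, 14, 96)        # (B, H, W, C)
windows = window_partition(x, 7)      # (B * 2 * 2, 7, 7, 96)
assert windows.shape == (8, 7, 7, 96)
assert torch.equal(window_reverse(windows, 7, 14, 14), x)
```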
-
-class WindowAttention(nn.Module):
- """ Window based multi-head self attention (W-MSA) module with relative position bias.
-    It supports both shifted and non-shifted windows.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """ Forward function.
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
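For a 7x7 window, the lookup built in `__init__` has (2*7-1)*(2*7-1) = 169 learnable bias rows (one value per head), and `relative_position_index` maps every ordered pair of the 49 window tokens to one of those rows. A quick check, assuming `WindowAttention` is in scope:

```python
attn = WindowAttention(dim=96, window_size=(7, 7), num_heads=3)
assert attn.relative_position_bias_table.shape == (169, 3)
assert attn.relative_position_index.shape == (49, 49)
assert int(attn.relative_position_index.max()) == 168
```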
-
-class SwinTransformerBlock(nn.Module):
- """ Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """ Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """ Patch Merging Layer
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """ Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
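`PatchMerging` halves both spatial dimensions and doubles the channel count: four neighbouring patches are concatenated into 4*C features and projected down to 2*C. A shape check, assuming the class above is in scope:

```python
import torch

merge = PatchMerging(dim=96)
x = torch.randn(2, 56 * 56, 96)        # (B, H*W, C)
out = merge(x, 56, 56)
assert out.shape == (2, 28 * 28, 192)  # (B, H/2 * W/2, 2*C)
```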
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
- Args:
- dim (int): Number of feature channels
-        depth (int): Number of blocks in this stage.
-        num_heads (int): Number of attention heads.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop=0.,
- attn_drop=0.,
- drop_path=0.,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """ Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
-
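With the default 4x4 patches, `PatchEmbed` maps an image to an `embed_dim`-channel feature map at 1/4 resolution, padding first when the input size is not a multiple of the patch size. A quick check, assuming the class above is in scope:

```python
import torch

embed = PatchEmbed(patch_size=4, in_chans=3, embed_dim=96)
assert embed(torch.randn(1, 3, 224, 224)).shape == (1, 96, 56, 56)
assert embed(torch.randn(1, 3, 225, 225)).shape == (1, 96, 57, 57)  # padded to 228 before the conv
```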
-
-class SwinTransformer(Backbone):
- """ Swin Transformer backbone.
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
-            used in absolute position embedding. Default 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
-        num_heads (tuple[int]): Number of attention heads for each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.,
- attn_drop_rate=0.,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- use_checkpoint=False):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
-
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- dim=int(embed_dim * 2 ** i_layer),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
- norm_layer=norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint)
- self.layers.append(layer)
-
- num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- # add a norm layer for each output
- for i_layer in out_indices:
- layer = norm_layer(num_features[i_layer])
- layer_name = f'norm{i_layer}'
- self.add_module(layer_name, layer)
-
- self._freeze_stages()
- self._out_features = ['swin{}'.format(i) for i in self.out_indices]
- self._out_feature_channels = {
- 'swin{}'.format(i): self.embed_dim * 2 ** i for i in self.out_indices
- }
- self._out_feature_strides = {
- 'swin{}'.format(i): 2 ** (i + 2) for i in self.out_indices
- }
-        self._size_divisibility = 32
-
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- def _init_weights(m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- if isinstance(pretrained, str):
- self.apply(_init_weights)
- # load_checkpoint(self, pretrained, strict=False)
- elif pretrained is None:
- self.apply(_init_weights)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- # outs = []
- outs = {}
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
- if i in self.out_indices:
- norm_layer = getattr(self, f'norm{i}')
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- # outs.append(out)
- outs['swin{}'.format(i)] = out
-
- return outs
-
- def train(self, mode=True):
- """Convert the model into training mode while keep layers freezed."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
-
-size2config = {
- 'T': {
- 'window_size': 7,
- 'embed_dim': 96,
- 'depth': [2, 2, 6, 2],
- 'num_heads': [3, 6, 12, 24],
- 'drop_path_rate': 0.2,
- 'pretrained': 'models/swin_tiny_patch4_window7_224.pth'
- },
- 'S': {
- 'window_size': 7,
- 'embed_dim': 96,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [3, 6, 12, 24],
- 'drop_path_rate': 0.2,
- 'pretrained': 'models/swin_small_patch4_window7_224.pth'
- },
- 'B': {
- 'window_size': 7,
- 'embed_dim': 128,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [4, 8, 16, 32],
- 'drop_path_rate': 0.3,
- 'pretrained': 'models/swin_base_patch4_window7_224.pth'
- },
- 'B-22k': {
- 'window_size': 7,
- 'embed_dim': 128,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [4, 8, 16, 32],
- 'drop_path_rate': 0.3,
- 'pretrained': 'models/swin_base_patch4_window7_224_22k.pth'
- },
- 'B-22k-384': {
- 'window_size': 12,
- 'embed_dim': 128,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [4, 8, 16, 32],
- 'drop_path_rate': 0.3,
- 'pretrained': 'models/swin_base_patch4_window12_384_22k.pth'
- },
- 'L-22k': {
- 'window_size': 7,
- 'embed_dim': 192,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [6, 12, 24, 48],
- 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear
- 'pretrained': 'models/swin_large_patch4_window7_224_22k.pth'
- },
- 'L-22k-384': {
- 'window_size': 12,
- 'embed_dim': 192,
- 'depth': [2, 2, 18, 2],
- 'num_heads': [6, 12, 24, 48],
- 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear
- 'pretrained': 'models/swin_large_patch4_window12_384_22k.pth'
- }
-}
-
-@BACKBONE_REGISTRY.register()
-def build_swintransformer_backbone(cfg, input_shape):
- """
- """
- config = size2config[cfg.MODEL.SWIN.SIZE]
- out_indices = cfg.MODEL.SWIN.OUT_FEATURES
- model = SwinTransformer(
- embed_dim=config['embed_dim'],
- window_size=config['window_size'],
- depths=config['depth'],
- num_heads=config['num_heads'],
- drop_path_rate=config['drop_path_rate'],
- out_indices=out_indices,
- frozen_stages=-1,
- use_checkpoint=cfg.MODEL.SWIN.USE_CHECKPOINT
- )
- # print('Initializing', config['pretrained'])
- model.init_weights(config['pretrained'])
- return model
-
-
-@BACKBONE_REGISTRY.register()
-def build_swintransformer_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- """
- bottom_up = build_swintransformer_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelP6P7_P5(out_channels, out_channels),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
-
-
-@BACKBONE_REGISTRY.register()
-def build_swintransformer_bifpn_backbone(cfg, input_shape: ShapeSpec):
- """
- """
- bottom_up = build_swintransformer_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- backbone = BiFPN(
- cfg=cfg,
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS,
- norm=cfg.MODEL.BIFPN.NORM,
- num_levels=cfg.MODEL.BIFPN.NUM_LEVELS,
- num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN,
- separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV,
- )
- return backbone
\ No newline at end of file
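A sketch, outside Detectron2's config machinery, of how `build_swintransformer_backbone` wires a `size2config` entry into the `SwinTransformer` constructor; it assumes the module above and its Detectron2/timm dependencies are importable. Note that `init_weights` with a path only re-initialises the weights here, because `load_checkpoint` is commented out.

```python
import torch

config = size2config['T']
model = SwinTransformer(
    embed_dim=config['embed_dim'],
    window_size=config['window_size'],
    depths=config['depth'],
    num_heads=config['num_heads'],
    drop_path_rate=config['drop_path_rate'],
    out_indices=(0, 1, 2, 3),
    frozen_stages=-1,
)
model.init_weights(config['pretrained'])
feats = model(torch.randn(1, 3, 224, 224))  # {'swin0': (1, 96, 56, 56), ..., 'swin3': (1, 768, 7, 7)}
```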
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/tool/ganseg.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/tool/ganseg.py
deleted file mode 100644
index e6225736d336cf75aedb8a7d7aec1229b497f6a9..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/tool/ganseg.py
+++ /dev/null
@@ -1,89 +0,0 @@
-'''
-A simple tool to generate samples of output from a GAN
-and apply semantic segmentation to the output.
-'''
-
-import torch, numpy, os, argparse, sys, shutil
-from PIL import Image
-from torch.utils.data import TensorDataset
-from netdissect.zdataset import standard_z_sample, z_dataset_for_model
-from netdissect.progress import default_progress, verbose_progress
-from netdissect.autoeval import autoimport_eval
-from netdissect.workerpool import WorkerBase, WorkerPool
-from netdissect.nethook import edit_layers, retain_layers
-from netdissect.segviz import segment_visualization
-from netdissect.segmenter import UnifiedParsingSegmenter
-from scipy.io import savemat
-
-def main():
- parser = argparse.ArgumentParser(description='GAN output segmentation util')
- parser.add_argument('--model', type=str, default=
- 'netdissect.proggan.from_pth_file("' +
- 'models/karras/churchoutdoor_lsun.pth")',
- help='constructor for the model to test')
- parser.add_argument('--outdir', type=str, default='images',
- help='directory for image output')
- parser.add_argument('--size', type=int, default=100,
- help='number of images to output')
- parser.add_argument('--seed', type=int, default=1,
- help='seed')
- parser.add_argument('--quiet', action='store_true', default=False,
- help='silences console output')
- #if len(sys.argv) == 1:
- # parser.print_usage(sys.stderr)
- # sys.exit(1)
- args = parser.parse_args()
- verbose_progress(not args.quiet)
-
- # Instantiate the model
- model = autoimport_eval(args.model)
-
- # Make the standard z
- z_dataset = z_dataset_for_model(model, size=args.size)
-
- # Make the segmenter
- segmenter = UnifiedParsingSegmenter()
-
-    # Write out text labels (make sure the output directory exists first)
-    os.makedirs(args.outdir, exist_ok=True)
-    labels, cats = segmenter.get_label_and_category_names()
-    with open(os.path.join(args.outdir, 'labels.txt'), 'w') as f:
- for i, (label, cat) in enumerate(labels):
- f.write('%s %s\n' % (label, cat))
-
- # Move models to cuda
- model.cuda()
-
- batch_size = 10
- progress = default_progress()
- dirname = args.outdir
-
- with torch.no_grad():
- # Pass 2: now generate images
- z_loader = torch.utils.data.DataLoader(z_dataset,
- batch_size=batch_size, num_workers=2,
- pin_memory=True)
- for batch_num, [z] in enumerate(progress(z_loader,
- desc='Saving images')):
- z = z.cuda()
- start_index = batch_num * batch_size
- tensor_im = model(z)
- byte_im = ((tensor_im + 1) / 2 * 255).clamp(0, 255).byte().permute(
- 0, 2, 3, 1).cpu()
- seg = segmenter.segment_batch(tensor_im)
- for i in range(len(tensor_im)):
- index = i + start_index
- filename = os.path.join(dirname, '%d_img.jpg' % index)
- Image.fromarray(byte_im[i].numpy()).save(
- filename, optimize=True, quality=100)
- filename = os.path.join(dirname, '%d_seg.mat' % index)
- savemat(filename, dict(seg=seg[i].cpu().numpy()))
- filename = os.path.join(dirname, '%d_seg.png' % index)
- Image.fromarray(segment_visualization(seg[i].cpu().numpy(),
- tensor_im.shape[2:])).save(filename)
- srcdir = os.path.realpath(
- os.path.join(os.getcwd(), os.path.dirname(__file__)))
- shutil.copy(os.path.join(srcdir, 'lightbox.html'),
- os.path.join(dirname, '+lightbox.html'))
-
-if __name__ == '__main__':
- main()
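For reference, the files the loop above writes for each generated sample, plus the shared files alongside them; the names below are reproduced from the code, and the directory matches the `--outdir` default.

```python
import os

outdir, index = 'images', 0  # 'images' is the --outdir default above
expected = [
    os.path.join(outdir, 'labels.txt'),          # one "<label> <category>" line per class
    os.path.join(outdir, '%d_img.jpg' % index),  # generated RGB sample
    os.path.join(outdir, '%d_seg.mat' % index),  # raw segmentation saved with scipy.io.savemat
    os.path.join(outdir, '%d_seg.png' % index),  # colourised segmentation from segment_visualization
    os.path.join(outdir, '+lightbox.html'),      # viewer page copied next to the images
]
print('\n'.join(expected))
```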
diff --git a/spaces/Docfile/open_llm_leaderboard/src/assets/hardcoded_evals.py b/spaces/Docfile/open_llm_leaderboard/src/assets/hardcoded_evals.py
deleted file mode 100644
index b361d0c7cc8d250ee097fed25e53612c881a2b59..0000000000000000000000000000000000000000
--- a/spaces/Docfile/open_llm_leaderboard/src/assets/hardcoded_evals.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from src.display_models.utils import AutoEvalColumn, model_hyperlink
-
-gpt4_values = {
- AutoEvalColumn.model.name: model_hyperlink("https://arxiv.org/abs/2303.08774", "gpt4"),
- AutoEvalColumn.revision.name: "tech report",
- AutoEvalColumn.precision.name: None,
- AutoEvalColumn.average.name: 84.3,
- AutoEvalColumn.arc.name: 96.3,
- AutoEvalColumn.hellaswag.name: 95.3,
- AutoEvalColumn.mmlu.name: 86.4,
- AutoEvalColumn.truthfulqa.name: 59.0,
- AutoEvalColumn.dummy.name: "GPT-4",
- AutoEvalColumn.model_type.name: "",
-}
-
-gpt35_values = {
- AutoEvalColumn.model.name: model_hyperlink("https://arxiv.org/abs/2303.08774", "gpt3.5"),
- AutoEvalColumn.revision.name: "tech report",
- AutoEvalColumn.precision.name: None,
- AutoEvalColumn.average.name: 71.9,
- AutoEvalColumn.arc.name: 85.2,
- AutoEvalColumn.hellaswag.name: 85.5,
- AutoEvalColumn.mmlu.name: 70.0,
- AutoEvalColumn.truthfulqa.name: 47.0,
- AutoEvalColumn.dummy.name: "GPT-3.5",
- AutoEvalColumn.model_type.name: "",
-}
-
-baseline = {
- AutoEvalColumn.model.name: "
Estimate Biodiversity in the world with the help of land cover.
", unsafe_allow_html=True)
- st.markdown("
The segmentation model is an association of UNet and DinoV1 trained on the dataset CORINE. Land use is divided into 6 differents classes : Each class is assigned a GBS score from 0 to 1