diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Devexpress 12.1 Full 16 Explore the Features and Benefits of DevExpress UI Controls Reporting Systems and IDE Productivity Tools.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Devexpress 12.1 Full 16 Explore the Features and Benefits of DevExpress UI Controls Reporting Systems and IDE Productivity Tools.md deleted file mode 100644 index 45d6a30811ffdde278157085788ca5bdb4a03d8e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Devexpress 12.1 Full 16 Explore the Features and Benefits of DevExpress UI Controls Reporting Systems and IDE Productivity Tools.md +++ /dev/null @@ -1,212 +0,0 @@ -
-

Download Devexpress 12.1 Full 16 - A Comprehensive Guide

-

If you are a web or desktop developer, you might have heard of Devexpress, a popular suite of tools and components that can help you create stunning applications with ease. In this article, we will show you how to download Devexpress 12.1 Full 16, the latest version of this powerful software, and how to use it effectively in your projects.

-

What is Devexpress?

-

Devexpress is a software company that provides a wide range of products for web and desktop development, such as:

-

Download Devexpress 12.1 Full 16


DOWNLOAD - https://byltly.com/2uKzHK



- -

Devexpress supports various platforms and technologies, such as .NET Framework, .NET Core, .NET 5+, ASP.NET Web Forms, ASP.NET MVC, ASP.NET Core MVC, Blazor, HTML JS Technologies (AngularJS, KnockoutJS), WinForms, WPF, etc.

-

Why do you need Devexpress?

-

Devexpress can help you improve your web and desktop development in many ways, such as:

- -

How to download Devexpress 12.1 Full 16?

-

To download Devexpress 12.1 Full 16, you need to follow these steps:

-

Step 1: Check your system requirements

-

Before you download Devexpress 12.1 Full 16, you need to make sure that your system meets the minimum or recommended requirements for this software. Here are some of the requirements:

| Requirement | Minimum | Recommended |
| --- | --- | --- |
| Operating System | Windows Vista SP2 or later | Windows 10 or later |
| .NET Framework Version | .NET Framework 4.0 or later | .NET Framework 4.5.2 or later |
| .NET Core Version | .NET Core 2.0 or later | .NET Core 3.0 or later |
| .NET Version | .NET Framework only | .NET Framework, .NET Core, or .NET 5+ |
| IDE Version | Visual Studio 2010 or later | Visual Studio 2019 or later |
| Disk Space | At least 4 GB free space | At least 8 GB free space |
| CPU Speed | At least a dual-core processor with at least 2 GHz speed | At least a quad-core processor with at least 3 GHz speed |
| RAM Size | At least 4 GB RAM | At least 8 GB RAM |
| Display Resolution | At least HD (1366 x 768) resolution | FHD (1920 x 1080) resolution or higher |

Step 2: Choose your subscription plan

To download Devexpress 12.1 Full 16, you need to choose a subscription plan that suits your needs and budget. Devexpress offers various subscription plans and pricing options for its products, such as:

- -

You can also choose to buy individual products or components instead of a subscription plan if you only need a specific feature or functionality. However, buying a subscription plan can save you money and give you access to more products and updates.

-

How to download Devexpress 12.1 full version for free
-Devexpress 12.1 full crack download link
-Download Devexpress 12.1 full offline installer
-Devexpress 12.1 full license key generator
-Download Devexpress 12.1 full with source code
-Devexpress 12.1 full documentation download
-Download Devexpress 12.1 full for Visual Studio 2019
-Devexpress 12.1 full tutorial download
-Download Devexpress 12.1 full for Windows 10
-Devexpress 12.1 full patch download
-Download Devexpress 12.1 full for ASP.NET MVC
-Devexpress 12.1 full demo download
-Download Devexpress 12.1 full for WPF
-Devexpress 12.1 full activation code download
-Download Devexpress 12.1 full for WinForms
-Devexpress 12.1 full trial download
-Download Devexpress 12.1 full for Blazor
-Devexpress 12.1 full serial number download
-Download Devexpress 12.1 full for Angular
-Devexpress 12.1 full setup download
-Download Devexpress 12.1 full for React
-Devexpress 12.1 full keygen download
-Download Devexpress 12.1 full for Xamarin
-Devexpress 12.1 full registration code download
-Download Devexpress 12.1 full for .NET Core
-Devexpress 12.1 full torrent download
-Download Devexpress 12.1 full for PHP
-Devexpress 12.1 full product key download
-Download Devexpress 12.1 full for HTML5
-Devexpress 12.1 full activation key download
-Download Devexpress 12.1 full for JavaScript
-Devexpress 12.1 full license code download
-Download Devexpress 12.1 full for SQL Server
-Devexpress 12.1 full crack keygen download
-Download Devexpress 12.1 full for Oracle
-Devexpress 12.1 full serial key download
-Download Devexpress 12.1 full for MySQL
-Devexpress 12.1 full license key crack download
-Download Devexpress 12.1 full for PostgreSQL
-Devexpress 12.1 full activation key crack download
-Download Devexpress 12.1 full for MongoDB
-Devexpress 12.1 full serial number crack download
-Download Devexpress 12.1 full for Firebase
-Devexpress 12.1 full registration code crack download
-Download Devexpress 12.1 full for Azure SQL Database
-Devexpress 12.1 full product key crack download
-Download Devexpress 12.1 full for AWS DynamoDB
-Devexpress 12.1 full activation code crack download
-Download Devexpress 12.1 full for Google Cloud Firestore
-Devexpress 12.1 full license code crack download

-

Step 3: Download the installer

-

After you choose your subscription plan and complete the payment process, you can download the installer for Devexpress 12.1 Full 16 from the official website. To do this, you need to:

-
1. Go to https://www.devexpress.com/Products/Try/ and sign in with your account.
2. Select your subscription plan from the drop-down menu and click the Download button.
3. Select the version 12.1 Full 16 from the list and click the Download Installer button.
4. Save the installer file (DevExpressComponents-12.1.16.exe) to your computer and wait for the download to finish.
-

Step 4: Run the installer

-

After you download the installer file, you can run it to install Devexpress 12.1 Full 16 on your computer. To do this, you need to:

-
1. Double-click the installer file (DevExpressComponents-12.1.16.exe) to launch it.
2. Click Yes if prompted by User Account Control (UAC).
3. Select your preferred language and click OK.
4. Read and accept the license agreement and click Next.
5. Select the components that you want to install and click Next. You can choose to install all components or only specific ones according to your needs.
6. Select the installation folder and click Next. You can use the default folder or choose a custom one.
7. Select the start menu folder and click Next. You can use the default folder or choose a custom one.
8. Select whether you want to create a desktop shortcut and click Next.
9. Select whether you want to check for updates automatically and click Next.
10. Click Install to start the installation process and wait for it to finish.
11. Click Finish to exit the installer.
-

Step 5: Activate your license

-

To use Devexpress 12.1 Full 16, you need to activate your license and register your product. To do this, you need to:

-
1. Launch Visual Studio and open or create a project that uses Devexpress components.
2. A dialog box will appear asking you to activate your license. Click Login & Activate Now.
3. A web browser will open asking you to sign in with your account. Enter your email and password and click Login & Activate Now.
4. A confirmation message will appear saying that your license has been activated successfully. Click Close Browser & Return To Visual Studio.
5. A dialog box will appear asking you to register your product. Click Login & Register Now.
6. A web browser will open asking you to sign in with your account again. Enter your email and password and click Login & Register Now.
7. A confirmation message will appear saying that your product has been registered successfully. Click Close Browser & Return To Visual Studio.

    How to use Devexpress 12.1 Full 16?

To use Devexpress 12.1 Full 16 effectively, you need to know some tips and tricks that can help you create stunning applications with ease. Here are some of them:

    -

    How to create a project with Devexpress 12.1 Full 16?

    -

    To create a project with Devexpress 12.1 Full 16, you can use the Devexpress Template Gallery, which is a tool that allows you to create projects based on predefined templates that include Devexpress controls and components. To do this, you need to:

    -
1. Launch Visual Studio and click File > New > Project.
2. Select Devexpress v20.2 Template Gallery from the list of templates and click Next.
3. Select the platform and technology that you want to use for your project, such as WinForms, WPF, ASP.NET Web Forms, ASP.NET MVC, etc.
4. Select the template that you want to use for your project, such as Blank Application, Ribbon Application, Outlook-Inspired Application, etc.
5. Enter the name and location of your project and click Create.
6. A new project will be created with the selected template and Devexpress controls and components.
    -

    How to use the Devexpress controls and components?

    -

    To use the Devexpress controls and components in your project, you can use the Devexpress Toolbox, which is a tool that allows you to drag and drop Devexpress controls and components onto your forms or pages. To do this, you need to:

    -
1. Open a form or a page in your project in the designer mode.
2. Open the Devexpress Toolbox by clicking View > Toolbox.
3. Select the Devexpress control or component that you want to use from the list of categories, such as Data & Analytics, Navigation & Layout, Editors & Simple Controls, etc.
4. Drag and drop the Devexpress control or component onto your form or page.
5. A new Devexpress control or component will be added to your form or page with default settings.
    -

    How to customize the appearance and behavior of the Devexpress controls and components?

    -

    To customize the appearance and behavior of the Devexpress controls and components in your project, you can use the Properties Window, which is a tool that allows you to change the properties, events, methods, and styles of Devexpress controls and components. To do this, you need to:

    -
1. Select a Devexpress control or component on your form or page in the designer mode.
2. Open the Properties Window by clicking View > Properties Window.
3. Select the property, event, method, or style that you want to change from the list of categories, such as Appearance, Behavior, Data Source, Layout Options, etc.
4. Edit the value of the property, event, method, or style according to your needs.
5. The appearance and behavior of the Devexpress control or component will be updated accordingly.
    -

    How to access the documentation and support for Devexpress 12.1 Full 16?

To access the documentation and support for Devexpress 12.1 Full 16, you can use the Help menu in Visual Studio, which is a tool that allows you to access the online or offline documentation and support for Devexpress products. To do this, you need to:

    -
1. Launch Visual Studio and open a project that uses Devexpress components.
2. Click Help > DevExpress Help.
3. Select the option that you want to use, such as Online Documentation, Offline Documentation, Support Center, Knowledge Base, etc.
4. A web browser will open with the selected option and you can browse the documentation and support for Devexpress products.
    -

    Conclusion

    -

    In this article, we have shown you how to download Devexpress 12.1 Full 16, the latest version of this powerful software suite for web and desktop development, and how to use it effectively in your projects. We have covered the following topics:

    - -

    We hope that this article has been helpful and informative for you. If you want to learn more about Devexpress products and features, you can visit the official website or contact the support team. If you want to try Devexpress products for free, you can download a fully-functional 30-day trial version from the website. If you are ready to buy Devexpress products, you can choose a subscription plan that suits your needs and budget.

    -

    Thank you for reading this article and happy coding!

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about Devexpress 12.1 Full 16:

    -

    Q: What are the new features and improvements in Devexpress 12.1 Full 16?

    -

    A: Devexpress 12.1 Full 16 includes many new features and improvements for web and desktop development, such as:

    - -

    Q: How can I update my existing Devexpress products to Devexpress 12.1 Full 16?

    -

    A: If you have an active subscription plan for Devexpress products, you can update your existing Devexpress products to Devexpress 12.1 Full 16 using the Devexpress Project Converter, which is a tool that allows you to update your projects to use the latest version of Devexpress controls and components. To do this, you need to:

    -
1. Download and install Devexpress 12.1 Full 16 on your computer.
2. Launch Visual Studio and open a project that uses Devexpress components.
3. Select DevExpress > Project Converter.
4. Select the option Update all DevExpress references in current solution/project(s) to a newer version.
5. Select the version v20.2 (12.1) from the drop-down menu.
6. Select whether you want to backup your project files before updating them.
7. Select whether you want to update your project files automatically or manually.
8. Click Start Conversion.
9. The tool will update your project files to use the latest version of Devexpress controls and components.

      Q: How can I get help or report a problem with Devexpress 12.1 Full 16?

      -

      A: If you need help or want to report a problem with Devexpress 12.1 Full 16, you can contact the support team by submitting a ticket on the official website or by sending an email to support@devexpress.com. You can also browse the knowledge base or the forums on the website for answers or solutions to common issues or questions.

      -

      Q: How can I learn more about Devexpress products and features?

      -

      A: If you want to learn more about Devexpress products and features, you can visit the official website or follow the blog or social media channels of Devexpress. You can also watch the videos or webinars on the YouTube channel of Devexpress or attend the events or trainings hosted by Devexpress or its partners.

      -

      Q: How can I give feedback or suggest a feature for Devexpress products?

      -

      A: If you want to give feedback or suggest a feature for Devexpress products, you can use the User Voice portal on the official website, which is a tool that allows you to share your ideas or opinions with other users and developers of Devexpress products. You can also vote or comment on existing ideas or suggestions on the portal.

      -

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Boku No Pico Sin Censura ((FULL)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Boku No Pico Sin Censura ((FULL)).md deleted file mode 100644 index 5f9f98be33dbeaf86c85b029a37662ee99d23170..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Boku No Pico Sin Censura ((FULL)).md +++ /dev/null @@ -1,6 +0,0 @@ -

      Boku No Pico Sin Censura


      Download File ->>->>->> https://imgfil.com/2uy1qp



      - -Similar searchesanimeyoaihentai sin censuralesbiantony lopezboku no picojapiyaoiyaoi animeyaoi hardlesbian hentaiyaoi hentaisenepornoyapyahoogirl dick ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Diabolicpokerstarhackv1002betarapidshare.md b/spaces/1gistliPinn/ChatGPT4/Examples/Diabolicpokerstarhackv1002betarapidshare.md deleted file mode 100644 index d946f92d1e79df9e384f2a78c5c7a99ff416604b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Diabolicpokerstarhackv1002betarapidshare.md +++ /dev/null @@ -1,6 +0,0 @@ -

      diabolicpokerstarhackv1002betarapidshare


      Download Zip ->>->>->> https://imgfil.com/2uxYsg



      - - 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Embarcadero Delphi Xe Activation ((LINK)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Embarcadero Delphi Xe Activation ((LINK)).md deleted file mode 100644 index 4d8ac2bcc6021b19f197f0c709044983acef2a71..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Embarcadero Delphi Xe Activation ((LINK)).md +++ /dev/null @@ -1,28 +0,0 @@ -

      Embarcadero Delphi Xe Activation


      Download Filehttps://imgfil.com/2uy0Dm



      - -You are on the Activation Link page. This page is protected and can only be accessed by persons who have been invited to visit it. Please note that you do not need to have a My HealtheVet account to access your information or to make a donation. - -You are here - -Privacy - -For medical, marketing or research purposes, we may share and disclose information with the following organizations or companies: - -Vietnam Veterans of America. - -We are required to protect your information in accordance with HIPAA, which protects your health information from unauthorized access or disclosure. Your information is stored in a secure location and is not shared with third parties or sold to others. When you are given your password, it will be your responsibility to keep it secure and private. If you forget your password, please contact us as soon as possible. - -In accordance with United States of America Patriot Act, we are required to collect, maintain, and make available to authorized law enforcement and other government agencies, or their authorized agents, physical and electronic access to all records and other documents and other information relating to you. Such records may include your date of birth, social security number, insurance ID number, or other personally identifying information. We may release your records or information to agents or third parties as follows: - -We may also use this information to contact you for promotional, marketing and research purposes. - -Other websites and mobile applications: If you access the My HealtheVet Account or the My HealtheVet Portal using your wireless device, we may request information from your wireless service provider to verify your identity and that you are authorized to use the wireless network. We may also use this information to track your location and content of visit to My HealtheVet Account. - -Your wireless provider may also access this information to aid in the delivery of your messages or other services. - -We may provide information about you to our service providers and/or agents, including but not limited to insurance companies, marketers, professional advisors, and others, for the purpose of processing payments, performing business operations, sending you marketing or research materials, or delivering our services. We may share information with these agents or service providers for marketing or research purposes, as permitted by the Privacy Policy, and they may contact you via mail, email, or telephone. These agents and service providers may not contact you about their own products or services unless you give them your express consent. - -Some of our third-party service providers may use 4fefd39f24
      -
      -
      -

      diff --git a/spaces/1phancelerku/anime-remove-background/5000rubl nece manatdir A simple guide to currency conversion.md b/spaces/1phancelerku/anime-remove-background/5000rubl nece manatdir A simple guide to currency conversion.md deleted file mode 100644 index dc96356a8c7522678566c29a4ab1ce99c1484f5e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/5000rubl nece manatdir A simple guide to currency conversion.md +++ /dev/null @@ -1,148 +0,0 @@ - -

      5000 rubl nece manatdir?

      -

      If you are planning to travel or do business in Azerbaijan, you might be wondering how much 5000 rubles are worth in Azerbaijani manats. In this article, we will answer this question and provide you with some useful information on how to exchange currency in Azerbaijan. We will also give you some tips on where to find the best exchange rates and how to avoid scams and fees.

      -

      Introduction

      -

      The official currency of Azerbaijan is the Azerbaijani manat, with symbol ₼ and currency code AZN. The manat is subdivided into 100 qapik. The current series of banknotes and coins was introduced in 2006, when the manat was redenominated at a rate of 5000 old manats to 1 new manat.

      -

      5000rubl nece manatdir


      DOWNLOADhttps://jinyurl.com/2uNNHx



      -

      The official currency of Russia is the Russian ruble, with symbol ₽ and currency code RUB. The ruble is subdivided into 100 kopeks. The current series of banknotes and coins was introduced in 1998, after the ruble was redenominated at a rate of 1000 old rubles to 1 new ruble.

      -

      What is the exchange rate of Russian ruble to Azerbaijani manat?

      -

      The exchange rate of Russian ruble to Azerbaijani manat is the price of one ruble in terms of one manat. It tells you how many manats you can get for one ruble or vice versa. The exchange rate can change over time due to various factors, such as supply and demand, inflation, interest rates, speculation, and so on.

      -

      As of June 22, 2023, the mid-market exchange rate of Russian ruble to Azerbaijani manat was 0.0209865 AZN per RUB, according to Xe.com. This means that 5000 rubles were worth about 104.93 manats on that date.
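To double-check that conversion, here is a minimal Python sketch that applies the quoted mid-market rate to an amount in rubles. The rate below is the June 22, 2023 figure cited above; real exchange rates change constantly, and the helper name is just for illustration.

```python
# Convert rubles (RUB) to Azerbaijani manats (AZN) with a fixed mid-market rate.
# 0.0209865 is the June 22, 2023 rate quoted in this article (illustrative only).
RUB_TO_AZN = 0.0209865

def rub_to_azn(amount_rub: float) -> float:
    """Return the manat value of an amount given in rubles."""
    return amount_rub * RUB_TO_AZN

print(f"5000 RUB ≈ {rub_to_azn(5000):.2f} AZN")  # 5000 RUB ≈ 104.93 AZN
```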

      -

      What factors affect the exchange rate of Russian ruble to Azerbaijani manat?

      -

      There are many factors that influence the exchange rate of Russian ruble to Azerbaijani manat, some of them are:

      -
Banks

Banks are one of the most common places to exchange currency in Azerbaijan. Some of the banks that offer currency exchange services are:
| Bank | Website |
| --- | --- |
| Kapital Bank | [Kapital Bank] |
| PASHA Bank | [PASHA Bank] |
| International Bank of Azerbaijan | [International Bank of Azerbaijan] |
| Bank Respublika | [Bank Respublika] |
| Nikoil Bank | [Nikoil Bank] |
-

Exchange offices

-

Exchange offices are another common place to exchange currency in Azerbaijan. They are usually located in busy areas, such as airports, hotels, shopping malls, and tourist attractions. They are convenient and fast, but they may charge higher fees and offer lower rates than banks. You should always check the exchange rate and the commission before you make a transaction.

-

Some of the reputable exchange offices in Azerbaijan are:

| Exchange office | Location |
| --- | --- |
| Azərpoçt | Various branches across the country |
| Baku Express Exchange | Baku International Airport |
| Currency Exchange Baku | Nizami Street 67/71, Baku |
| Ganja Exchange | Ganja Mall, Ganja |
| Lankaran Exchange | Lankaran Heydar Aliyev Avenue 59A, Lankaran |
-

Online platforms

-

Online platforms are a modern and convenient way to exchange currency in Azerbaijan. They allow you to transfer money or exchange currency digitally, using your smartphone or computer. You can either use an online platform that connects you with a local agent who will deliver cash to you or collect cash from you, or use an online platform that allows you to send money to a bank account or a mobile wallet.

-

Some of the online platforms that offer currency exchange services in Azerbaijan are:

| Online platform | Website |
| --- | --- |
| Azimo | [Azimo] |
| CurrencyFair | [CurrencyFair] |
| Moneymove | [Moneymove] |
| Skrill | [Skrill] |
| TransferWise | [TransferWise] |
-

Conclusion

-

Summary of the main points

-

We have learned that:

- -

Recommendations for travelers and business people

-

Based on the information we have provided, here are some recommendations for travelers and business people who want to exchange rubles to manats:

- -

FAQs

-

Here are some frequently asked questions about exchanging rubles to manats:

-
    -
  1. How do I pronounce Azerbaijani manat?
  2. -

    The Azerbaijani manat is pronounced as "mah-nat", with emphasis on the second syllable. The plural form is "manatlar", pronounced as "mah-nat-lar". The qapik is pronounced as "gah-pik", with emphasis on the first syllable. The plural form is "qapiklar", pronounced as "gah-pik-lar".

    -
  3. What are the denominations of Azerbaijani manat?
  4. -

    The Azerbaijani manat comes in banknotes of 1, 5, 10, 20, 50, 100, and 200 manats, and coins of 1, 3, 5, 10, 20, and 50 qapiks. The banknotes feature portraits of prominent Azerbaijani figures and landmarks on both sides. The coins feature the national emblem and name of Azerbaijan on one side and the denomination and year of issue on the other side.

    -
  5. What are some tips for handling Azerbaijani manat?
  6. -

    Some tips for handling Azerbaijani manat are:

    - -
  7. How do I tip in Azerbaijan?
  8. -

    Tipping is not mandatory in Azerbaijan, but it is appreciated and expected in some situations. You can tip according to the quality of service and your satisfaction. Here are some general guidelines for tipping in Azerbaijan:

    - -
  9. What are some common scams and pitfalls to avoid when exchanging currency in Azerbaijan?
  10. -

    Some common scams and pitfalls to avoid when exchanging currency in Azerbaijan are:

    - -
-

I hope this article has helped you understand how much 5000 rubles are worth in Azerbaijani manats and how to exchange currency in Azerbaijan. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Blast Away with 3D Bubble Shooter A Free and Fun Game for All Ages.md b/spaces/1phancelerku/anime-remove-background/Blast Away with 3D Bubble Shooter A Free and Fun Game for All Ages.md deleted file mode 100644 index 7df2c2bd3dca61fd732b4cf62e65c2acd7c6cb09..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Blast Away with 3D Bubble Shooter A Free and Fun Game for All Ages.md +++ /dev/null @@ -1,140 +0,0 @@ - -

3D Bubble Shooter Game Free Download: A Fun and Addictive Way to Relax and Enjoy

-

If you are looking for a fun and addictive game that can help you relax and enjoy your free time, you should try playing a 3D bubble shooter game. A 3D bubble shooter game is a classic puzzle game that involves shooting colorful bubbles and matching them with other bubbles of the same color. The goal is to clear all the bubbles from the board and win levels. Sounds easy, right? Well, not so fast. A 3D bubble shooter game can also be challenging and exciting, especially when you play it in 3D mode. In this article, we will tell you everything you need to know about 3D bubble shooter games, including how to download and play them for free, what are the features and benefits of playing them, and how to improve your skills and strategies in them. So, let's get started!

-

3d bubble shooter game free download


Download Ziphttps://jinyurl.com/2uNNmm



-

What is a 3D bubble shooter game?

-

A 3D bubble shooter game is a type of puzzle game that belongs to the genre of tile-matching or match-three games. In these games, you have to match three or more tiles or objects of the same color or shape to make them disappear from the board. Some examples of popular tile-matching games are Candy Crush Saga, Bejeweled, Tetris, and of course, Bubble Shooter.

-

The basic gameplay of bubble shooter games

-

The basic gameplay of bubble shooter games is simple and easy to learn. You have a cannon or a launcher at the bottom of the screen that shoots bubbles of different colors. You can aim and shoot the bubbles by tapping or clicking on the screen. You have to shoot the bubbles towards the top of the screen, where there are other bubbles already arranged in rows or clusters. When you shoot a bubble, it will stick to the other bubbles of the same color if they are adjacent or touching. If you manage to create a group of three or more bubbles of the same color, they will pop and disappear from the board. The more bubbles you pop at once, the more points you score. You can also create combos by popping multiple groups of bubbles in succession. The game ends when you clear all the bubbles from the board or when the bubbles reach the bottom of the screen.
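To make the "three or more bubbles of the same color" rule concrete, here is a small, hypothetical Python sketch of how a bubble shooter might find the group to pop after a shot lands. The grid layout, neighbor rule, and all names are illustrative assumptions (real bubble shooters use a hexagonal layout), not code from any particular game.

```python
from collections import deque

def matching_group(grid, start, color):
    """Collect all bubbles connected to `start` that share `color`.

    `grid` maps (row, col) -> color; neighbors are the 4 adjacent cells,
    a simplification of the real hexagonal bubble layout.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for cell in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if grid.get(cell) == color and cell not in seen:
                seen.add(cell)
                queue.append(cell)
    return seen

# A red bubble lands at (2, 1); pop the group only if it has 3+ members.
board = {(2, 0): "red", (2, 1): "red", (2, 2): "red", (1, 1): "blue"}
group = matching_group(board, (2, 1), "red")
if len(group) >= 3:
    for cell in group:
        board.pop(cell)  # the matched bubbles pop and disappear from the board
```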

-

The advantages of playing in 3D mode

-

While most bubble shooter games are played in 2D mode, some games offer you the option to play in 3D mode. This means that instead of having a flat board with rows or columns of bubbles, you have a spherical or cylindrical board with bubbles arranged in layers or rings. This adds a new dimension to the gameplay, as you have to consider not only the horizontal and vertical angles, but also the depth and perspective of your shots. Playing in 3D mode can make the game more realistic, immersive, and challenging. You can also enjoy different views and angles of the board as it rotates or tilts according to your movements. Playing in 3D mode can also enhance your spatial awareness, coordination, and concentration skills.

-

How to download and play 3D bubble shooter games for free?

-

If you are interested in playing 3D bubble shooter games for free, you have several options to choose from. You can find many free 3D bubble shooter games online, on various websites and platforms. You can also download and install free 3D bubble shooter games on your device, such as your smartphone, tablet, laptop, or desktop computer. Here are some tips on how to do that.

-

The best sources to find free 3D bubble shooter games

-

One of the best sources to find free 3D bubble shooter games is the internet. There are many websites and platforms that offer a wide range of 3D bubble shooter games that you can play online, without downloading or installing anything. Some of the most popular and reliable websites and platforms are:

-

3d bubble pop game free download
-3d bubble blast game free download
-3d bubble shooter offline game free download
-3d bubble shooter game for pc free download
-3d bubble shooter game for android free download
-3d bubble shooter game with physics free download
-3d bubble shooter game with levels free download
-3d bubble shooter game with boosters free download
-3d bubble shooter game with puzzles free download
-3d bubble shooter game with arcade mode free download
-3d bubble fall game free download
-3d bubble crush game free download
-3d bubble breaker game free download
-3d bubble match game free download
-3d bubble swap game free download
-3d bubble burst game free download
-3d bubble drop game free download
-3d bubble bounce game free download
-3d bubble smash game free download
-3d bubble shoot game free download
-best 3d bubble shooter game free download
-new 3d bubble shooter game free download
-classic 3d bubble shooter game free download
-original 3d bubble shooter game free download
-addictive 3d bubble shooter game free download
-relaxing 3d bubble shooter game free download
-fun 3d bubble shooter game free download
-challenging 3d bubble shooter game free download
-exciting 3d bubble shooter game free download
-awesome 3d bubble shooter game free download
-colorful 3d bubble shooter game free download
-realistic 3d bubble shooter game free download
-smooth 3d bubble shooter game free download
-easy 3d bubble shooter game free download
-simple 3d bubble shooter game free download
-amazing 3d bubble shooter game free download
-cool 3d bubble shooter game free download
-cute 3d bubble shooter game free download
-beautiful 3d bubble shooter game free download
-fantastic 3d bubble shooter game free download
-voodoo 3d bubble shooter game free download
-tarboosh 3d bubble shooter game free download
-bubbleshooter orig 3d bubble shooter game free download
-bubbleshooter android 3d bubble shooter game free download
-google play store 3d bubble shooter game free download
-app store 3d bubble shooter game free download
-apk file 3d bubble shooter game free download
-mod version 3d bubble shooter game free download
-unlimited coins and lives in the app of the same name.

- -

The steps to download and install 3D bubble shooter games on your device

-

If you prefer to download and install 3D bubble shooter games on your device, rather than playing them online, you need to follow some simple steps. Here are the general steps to do that:

-
1. Choose a source or a platform that offers free 3D bubble shooter games for download. You can use the ones we mentioned above, or you can search for other options on the internet.
2. Select a game that you want to download and play. Make sure that the game is compatible with your device and meets the system requirements.
3. Click on the download button or link to start the download process. You may need to grant some permissions or accept some terms and conditions before downloading.
4. Wait for the download to finish. Depending on the size of the game and your internet speed, this may take a few minutes or longer.
5. Once the download is complete, locate the game file on your device and open it. Follow the instructions to install the game on your device.
6. After the installation is done, launch the game and enjoy playing it.
-

What are the features and benefits of playing 3D bubble shooter games?

-

Playing 3D bubble shooter games can be a lot of fun and rewarding. There are many features and benefits that you can enjoy while playing these games. Here are some of them:

-

The different modes and levels of 3D bubble shooter games

-

One of the features that make 3D bubble shooter games interesting and varied is the different modes and levels that they offer. You can choose from different modes of gameplay, such as classic mode, arcade mode, puzzle mode, adventure mode, time mode, etc. Each mode has its own rules and objectives that you have to follow and achieve. You can also play different levels of difficulty, ranging from easy to hard. Each level has its own layout, design, color scheme, number of bubbles , and obstacles. You can also unlock new levels as you progress and complete the previous ones. The different modes and levels of 3D bubble shooter games can keep you entertained and challenged for hours.

-

The cool boosters and power-ups to help you win

-

Another feature that makes 3D bubble shooter games fun and exciting is the cool boosters and power-ups that you can use to help you win. Boosters and power-ups are special bubbles or items that have different effects and abilities. For example, some boosters and power-ups can change the color of the bubbles, pop more bubbles at once, clear a whole row or column of bubbles, freeze the board, etc. You can get boosters and power-ups by popping certain bubbles, completing certain tasks, or buying them with coins or gems. You can also use them strategically to overcome difficult situations or to score higher points. Boosters and power-ups can add more fun and variety to your gameplay.

-

The amazing graphics and sound effects of 3D bubble shooter games

-

One of the benefits of playing 3D bubble shooter games is that you can enjoy amazing graphics and sound effects that enhance your gaming experience. 3D bubble shooter games have high-quality graphics that make the bubbles look realistic, colorful, and shiny. You can also see the bubbles pop and burst in 3D animation, which is satisfying and rewarding. The sound effects of 3D bubble shooter games are also impressive and immersive. You can hear the bubbles pop, bounce, splash, and crackle as you shoot them. You can also hear the background music and the voice-overs that match the theme and mood of the game. The graphics and sound effects of 3D bubble shooter games can make you feel like you are playing in a real 3D environment.

-

How to improve your skills and strategies in 3D bubble shooter games?

-

Playing 3D bubble shooter games can be easy to learn, but hard to master. If you want to improve your skills and strategies in these games, you need to practice regularly and follow some tips and tricks. Here are some of them:

-

The tips and tricks to aim and shoot accurately

-

One of the most important skills in 3D bubble shooter games is to aim and shoot accurately. You need to be able to hit the right spot with the right bubble at the right time. To do that, you need to pay attention to several factors, such as:

- -

The best ways to clear the board and score high points

-

One of the main goals in 3D bubble shooter games is to clear the board and score high points. To do that, you need to follow some strategies, such as:

- -

The challenges and rewards of playing 3D bubble shooter games

-

Playing 3D bubble shooter games can be challenging and rewarding at the same time. There are many challenges that you can face while playing these games, such as:

- -

However, there are also many rewards that you can get from playing 3D bubble shooter games, such as:

- -

Conclusion

-

In conclusion, 3D bubble shooter games are a fun and addictive way to relax and enjoy your free time. They are easy to learn but hard to master puzzle games that involve shooting colorful bubbles and matching them with other bubbles of the same color. They offer different modes and levels of gameplay, cool boosters and power-ups to help you win , amazing graphics and sound effects to enhance your gaming experience, and many challenges and rewards to keep you motivated and satisfied. You can download and play 3D bubble shooter games for free on your device, from various sources and platforms. You can also improve your skills and strategies in 3D bubble shooter games by following some tips and tricks. 3D bubble shooter games are a great way to have fun and relax, as well as to improve your mental abilities and learn new things. So, what are you waiting for? Download a 3D bubble shooter game today and start popping some bubbles!

-

FAQs

-

Here are some frequently asked questions about 3D bubble shooter games:

-
    -
  1. What is the difference between 2D and 3D bubble shooter games?
  2. -

    A: The main difference between 2D and 3D bubble shooter games is the shape and orientation of the board. In 2D bubble shooter games, the board is flat and has rows or columns of bubbles. In 3D bubble shooter games, the board is spherical or cylindrical and has layers or rings of bubbles. This affects the gameplay, as you have to consider the depth and perspective of your shots in 3D mode.

    -
  3. How can I get more coins or gems in 3D bubble shooter games?
  4. -

    A: Coins or gems are the currency of 3D bubble shooter games, which you can use to buy boosters, power-ups, or extra lives. You can get more coins or gems by completing levels, achieving goals, watching ads, or making in-app purchases.

    -
  5. How can I play 3D bubble shooter games offline?
  6. -

    A: Some 3D bubble shooter games support offline play, which means that you can play them without an internet connection. To do that, you need to download and install the game on your device first, and then launch it while offline. However, some features or functions of the game may not be available offline, such as leaderboards, achievements, or updates.

    -
  7. Are 3D bubble shooter games suitable for children?
  8. -

    A: Yes, 3D bubble shooter games are suitable for children, as they are fun, colorful, and easy to play. They can also help children develop their cognitive, motor, and social skills, as well as their creativity and imagination. However, parents should supervise their children while playing these games, especially when it comes to online interactions or in-app purchases.

    -
  9. What are some of the best 3D bubble shooter games to play?
  10. -

    A: There are many 3D bubble shooter games to choose from, but some of the best ones are:

    - -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download PUBG MOBILE MOD APK with Unlimited Features and Anti-Ban.md b/spaces/1phancelerku/anime-remove-background/Download PUBG MOBILE MOD APK with Unlimited Features and Anti-Ban.md deleted file mode 100644 index 4f0a864877e2a99aee6b99a3952a816dea1f4e2a..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download PUBG MOBILE MOD APK with Unlimited Features and Anti-Ban.md +++ /dev/null @@ -1,101 +0,0 @@ -
-

How to Download PUBG Mobile Mod APK and Enjoy Its Amazing Features

-

PUBG Mobile is one of the most popular and addictive mobile games in the world. Millions of players enjoy its thrilling and realistic gameplay, where they have to survive in a shrinking map with up to 100 other players. However, some players are not satisfied with the official game, and they look for ways to enhance their gaming experience. One of these ways is to download PUBG Mobile Mod APK, a modified version of the game that offers various features that are not available in the original game. In this article, we will tell you what PUBG Mobile Mod APK is, why you should download it, how to download it, what are the risks of downloading it, and some frequently asked questions about it.

-

download pubg mobile mod


DOWNLOADhttps://jinyurl.com/2uNThS



-

What is PUBG Mobile Mod APK?

-

PUBG Mobile Mod APK is a modified version of the popular battle royale game PUBG Mobile

-

PUBG Mobile Mod APK is a modified version of the official PUBG Mobile game, which is developed by Krafton and Level Infinite. The modded version is created by third-party developers or hackers, who modify the original game files and add new features or functions to the game. These features or functions are usually called hacks or cheats, as they give an unfair advantage to the players who use them.

-

It offers various features that are not available in the official game, such as ESP, aimbot, wallhack, speed hack, jump hack, and more

-

PUBG Mobile Mod APK offers various features that are not available in the official game, such as ESP (Extra Sensory Perception), aimbot (auto-aiming), wallhack (seeing through walls), speed hack (increasing movement speed), jump hack (increasing jump height), and more. These features can help you spot your enemies easily, shoot them accurately, move faster, jump higher, and more. These features can make the game more fun and exciting, as you can dominate the battlefield and win more matches. However, they can also make the game unfair and unbalanced, as you can gain an edge over your opponents who play the game normally.

-

Why Download PUBG Mobile Mod APK?

-

PUBG Mobile Mod APK can enhance your gaming experience and give you an edge over your opponents

-

PUBG Mobile Mod APK can enhance your gaming experience and give you an edge over your opponents, as you can use the features that are not available in the official game. You can improve your skills, performance, and stats, as you can spot, shoot, move, and jump better than your enemies. You can also enjoy the game more, as you can explore new possibilities and scenarios that are not possible in the original game. You can have more fun and excitement, as you can win more matches and rank higher in the leaderboards.

-

You can access all the premium items, skins, weapons, and vehicles for free

-

PUBG Mobile Mod APK also allows you to access all the premium items, skins, weapons, and vehicles for free, without spending any real money or UC (Unknown Cash), which is the in-game currency of PUBG Mobile. You can unlock and use all the items that are otherwise only available through purchasing or completing missions or events. You can customize your character and equipment according to your preference and style. You can also impress your friends and other players with your rare and exclusive items.

-

You can customize your game settings and preferences according to your liking

-

PUBG Mobile Mod APK also lets you customize your game settings and preferences according to your liking, without following the default or recommended settings of the official game. You can adjust the graphics quality, sound effects, controls, sensitivity, frame rate, and more. You can also enable or disable the features of the modded version according to your needs and wishes. You can tailor your game experience to suit your device specifications and personal taste.

-

How to Download PUBG Mobile Mod APK?

-

You need to find a reliable and safe source to download the modded APK file

-

The first step to download PUBG Mobile Mod APK is to find a reliable and safe source to download the modded APK file. There are many websites and platforms that claim to offer PUBG Mobile Mod APK for free, but not all of them are trustworthy or secure. Some of them may contain malware or viruses that can harm your device or data. Some of them may also provide fake or outdated versions of the modded APK file that may not work properly or at all. Therefore, you need to do some research and check the reviews and ratings of the source before downloading anything from it.

-

You need to enable the installation of unknown sources on your device

-

The next step to download PUBG Mobile Mod APK is to enable the installation of unknown sources on your device. This is because PUBG Mobile Mod APK is not an official app from Google Play Store or App Store, and it is considered as an unknown or third-party app by your device. Therefore, you need to allow your device to install apps from sources other than the official ones. To do this, you need to go to your device settings, security settings, and enable the option of unknown sources or allow from this source.

-

You need to uninstall the original PUBG Mobile game from your device

-

The third step to download PUBG Mobile Mod APK is to uninstall the original PUBG Mobile game from your device. This is because PUBG Mobile Mod APK cannot coexist with the official game on the same device, as they have the same package name and signature. Therefore, you need to remove the original game from your device before installing the modded version. To do this, you need to go to your device settings, apps settings, find PUBG Mobile app, and uninstall it.

-

download pubg mobile mod apk latest version
-download pubg mobile mod menu
-download pubg mobile mod esp
-download pubg mobile mod unlimited uc
-download pubg mobile mod aimbot
-download pubg mobile mod anti ban
-download pubg mobile mod no recoil
-download pubg mobile mod obb
-download pubg mobile mod global
-download pubg mobile mod kr
-download pubg mobile mod data
-download pubg mobile mod free fire
-download pubg mobile mod god mode
-download pubg mobile mod hack
-download pubg mobile mod injector
-download pubg mobile mod ios
-download pubg mobile mod magic bullet
-download pubg mobile mod new era
-download pubg mobile mod offline
-download pubg mobile mod online
-download pubg mobile mod plus
-download pubg mobile mod root
-download pubg mobile mod script
-download pubg mobile mod speed hack
-download pubg mobile mod vip
-download pubg mobile mod wallhack
-download pubg mobile lite mod apk
-download pubg mobile lite mod menu
-download pubg mobile lite mod esp
-download pubg mobile lite mod unlimited bc
-download pubg mobile lite mod aimbot
-download pubg mobile lite mod anti ban
-download pubg mobile lite mod no recoil
-download pubg mobile lite mod obb
-download pubg mobile lite mod global
-download pubg mobile lite mod data
-download pubg mobile lite mod free fire
-download pubg mobile lite mod god mode
-download pubg mobile lite mod hack
-download pubg mobile lite mod injector
-download pubg mobile lite mod ios
-download pubg mobile lite mod magic bullet
-download pubg mobile lite mod new era
-download pubg mobile lite mod offline
-download pubg mobile lite mod online
-download pubg mobile lite mod plus
-download pubg mobile lite mod root
-download pubg mobile lite mod script
-download pubg mobile lite mod speed hack

You need to install the PUBG Mobile Mod APK file and grant the required permissions

-

The fourth step to download PUBG Mobile Mod APK is to install the PUBG Mobile Mod APK file and grant the required permissions. To do this, you need to locate the downloaded file on your device storage, tap on it, and follow the installation instructions. You may also need to grant some permissions to the app, such as storage, camera, microphone, location, and more. These permissions are necessary for the app to function properly and access the features of the modded version.

-

You need to launch the game and enjoy its features

-

The final step to download PUBG Mobile Mod APK is to launch the game and enjoy its features. To do this, you need to open the app icon on your device screen, sign in with your account or create a new one, and start playing the game. You can access the features of the modded version from the game menu or settings. You can also use some hotkeys or commands to activate or deactivate some features during the game. You can now enjoy the game with more features and advantages than before.

-

What are the Risks of Downloading PUBG Mobile Mod APK?

-

PUBG Mobile Mod APK is not an official product of Krafton or Level Infinite, and it violates their terms of service

-

One of the risks of downloading PUBG Mobile Mod APK is that it is not an official product of Krafton or Level Infinite, and it violates their terms of service. PUBG Mobile Mod APK is created by unauthorized developers or hackers, who have no affiliation or permission from the original game developers or publishers. By downloading and using PUBG Mobile Mod APK, you are breaking the rules and regulations of the official game, and you may face legal consequences or penalties for doing so.

-

You may face legal issues or penalties for using unauthorized software or cheating in the game

-

Another risk of downloading PUBG Mobile Mod APK is that you may face legal issues or penalties for using unauthorized software or cheating in the game. PUBG Mobile Mod APK is considered as a form of software piracy or intellectual property theft, as it infringes on the rights and interests of the original game developers and publishers. By downloading and using PUBG Mobile Mod APK, you are committing a crime and you may be sued or fined for doing so. Moreover, PUBG Mobile Mod APK is also considered as a form of cheating or hacking in the game, as it gives an unfair advantage to the players who use it. By downloading and using PUBG Mobile Mod APK, you are violating the fair play and sportsmanship of the game, and you may be banned or suspended from the game for doing so.

You may expose your device to malware or viruses that can harm your data or privacy

-

A third risk of downloading PUBG Mobile Mod APK is that you may expose your device to malware or viruses that can harm your data or privacy. PUBG Mobile Mod APK is not a verified or tested app, and it may contain malicious code or software that can infect your device or steal your information. By downloading and installing PUBG Mobile Mod APK, you are risking your device security and performance, and you may lose your data or compromise your privacy. You may also face identity theft, fraud, or phishing attacks from hackers or scammers who may use your data for illegal purposes.

-

Conclusion

-

PUBG Mobile Mod APK is a tempting option for players who want to enjoy the game with more features and advantages

-

PUBG Mobile Mod APK is a tempting option for players who want to enjoy the game with more features and advantages than the official game. It offers various features that are not available in the original game, such as ESP, aimbot, wallhack, speed hack, jump hack, and more. It also allows you to access all the premium items, skins, weapons, and vehicles for free. It also lets you customize your game settings and preferences according to your liking.

-

However, it also comes with many risks and drawbacks that can ruin your gaming experience and reputation

-

However, PUBG Mobile Mod APK also comes with many risks and drawbacks that can ruin your gaming experience and reputation. It is not an official product of Krafton or Level Infinite, and it violates their terms of service. You may face legal issues or penalties for using unauthorized software or cheating in the game. You may get banned or suspended from the game for using hacks or exploits. You may expose your device to malware or viruses that can harm your data or privacy.

-

It is advisable to play the game fairly and ethically, and avoid using any cheats or hacks that can harm yourself or others

-

Therefore, it is advisable to play the game fairly and ethically, and avoid using any cheats or hacks that can harm yourself or others. PUBG Mobile is a fun and challenging game that requires skill, strategy, and teamwork. It is more rewarding and satisfying to play the game without any unfair advantages or shortcuts. It is also more respectful and honorable to play the game without any dishonesty or deception. It is also safer and smarter to play the game without any risks or threats to your device or data.

-

FAQs

-

Is PUBG Mobile Mod APK legal?

-

No, PUBG Mobile Mod APK is not legal, as it is a modified version of the official PUBG Mobile game, which is developed by Krafton and Level Infinite. The modded version is created by unauthorized developers or hackers, who have no affiliation or permission from the original game developers or publishers. By downloading and using PUBG Mobile Mod APK, you are breaking the rules and regulations of the official game, and you may face legal consequences or penalties for doing so.

-

How can I avoid getting banned for using PUBG Mobile Mod APK?

-

The best way to avoid getting banned for using PUBG Mobile Mod APK is to not use it at all. PUBG Mobile has a strict anti-cheat system that can detect any abnormal activities or behaviors in the game. If you are caught using any hacks or cheats in the game, you will be banned or suspended from the game immediately. There is no guarantee that any PUBG Mobile Mod APK can bypass the anti-cheat system or protect you from getting banned. Therefore, it is better to play the game normally and fairly, without using any cheats or hacks.

-

What are some of the best features of PUBG Mobile Mod APK?

-

Some of the best features of PUBG Mobile Mod APK are: - ESP (Extra Sensory Perception): This feature allows you to see your enemies' location, health, name, distance, weapons, items, and more on your screen. - Aimbot (auto-aiming): This feature allows you to automatically aim at your enemies' head or body, and shoot them with high accuracy and precision. - Wallhack (seeing through walls): This feature allows you to see through walls and other obstacles, and spot your enemies behind them. - Speed hack (increasing movement speed): This feature allows you to increase your movement speed, and run faster than normal. - Jump hack (increasing jump height): This feature allows you to increase your jump height, and jump higher than normal.

-

Where can I download PUBG Mobile Mod APK safely?

-

There is no safe source to download PUBG Mobile Mod APK, as it is an unofficial and unverified app that may contain malware or viruses that can harm your device or data. PUBG Mobile Mod APK is also illegal and unethical, and it may get you banned or penalized from the game. Therefore, it is not recommended to download PUBG Mobile Mod APK from any source. The only safe and legal way to play PUBG Mobile is to download the official game from Google Play Store or App Store, and play it without any cheats or hacks.

-

How can I update PUBG Mobile Mod APK?

-

You cannot update PUBG Mobile Mod APK from the official game, as they are not compatible or synchronized with each other. If you want to update PUBG Mobile Mod APK, you need to find a new version of the modded APK file from the source where you downloaded it, and install it on your device. However, this may not be easy or safe, as the source may not provide regular updates or may provide fake or harmful updates. Therefore, it is better to avoid using PUBG Mobile Mod APK, and stick to the official game that provides frequent and secure updates.

-
-
\ No newline at end of file diff --git a/spaces/2ndelement/voicevox/voicevox_engine/utility/__init__.py b/spaces/2ndelement/voicevox/voicevox_engine/utility/__init__.py deleted file mode 100644 index d40fea3e6c22f8bcb960ca12cf626e1f3a40afef..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/voicevox_engine/utility/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from .connect_base64_waves import ( - ConnectBase64WavesException, - connect_base64_waves, - decode_base64_waves, -) -from .core_version_utility import get_latest_core_version, parse_core_version -from .mutex_utility import mutex_wrapper -from .path_utility import delete_file, engine_root, get_save_dir - -__all__ = [ - "ConnectBase64WavesException", - "connect_base64_waves", - "decode_base64_waves", - "get_latest_core_version", - "parse_core_version", - "delete_file", - "engine_root", - "get_save_dir", - "mutex_wrapper", -] diff --git a/spaces/AIConsultant/MusicGen/tests/common_utils/temp_utils.py b/spaces/AIConsultant/MusicGen/tests/common_utils/temp_utils.py deleted file mode 100644 index b45d896836799edcf1fee271409b390b3b6e4127..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittently. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. - pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/openai.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/openai.py deleted file mode 100644 index 9911b6e135e51970177fcac067c12192b0b57c1c..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/openai.py +++ /dev/null @@ -1,129 +0,0 @@ -""" OpenAI pretrained model functions - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. 
-""" - -import os -import warnings -from typing import Union, List - -import torch - -from .model import build_model_from_openai_state_dict -from .pretrained import get_pretrained_url, list_pretrained_tag_models, download_pretrained - -__all__ = ["list_openai_models", "load_openai_model"] - - -def list_openai_models() -> List[str]: - """Returns the names of available CLIP models""" - return list_pretrained_tag_models('openai') - - -def load_openai_model( - name: str, - model_cfg, - device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", - jit=True, - cache_dir=os.path.expanduser("~/.cache/clip"), - enable_fusion: bool = False, - fusion_type: str = 'None' -): - """Load a CLIP model, preserve its text pretrained part, and set in the CLAP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - device : Union[str, torch.device] - The device to put the loaded model - jit : bool - Whether to load the optimized JIT model (default) or more hackable non-JIT model. - - Returns - ------- - model : torch.nn.Module - The CLAP model - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if get_pretrained_url(name, 'openai'): - model_path = download_pretrained(get_pretrained_url(name, 'openai'), root=cache_dir) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {list_openai_models()}") - - try: - # loading JIT archive - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn(f"File {model_path} is not a JIT archive. 
Loading as a state dict instead") - jit = False - state_dict = torch.load(model_path, map_location="cpu") - - if not jit: - try: - model = build_model_from_openai_state_dict(state_dict or model.state_dict(), model_cfg, enable_fusion, fusion_type).to(device) - except KeyError: - sd = {k[7:]: v for k, v in state_dict["state_dict"].items()} - model = build_model_from_openai_state_dict(sd, model_cfg, enable_fusion, fusion_type).to(device) - - if str(device) == "cpu": - model.float() - return model - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_audio) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_audio) - patch_float(model.encode_text) - model.float() - - model.audio_branch.audio_length = model.audio_cfg.audio_length - return model diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/contperceptual.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/contperceptual.py deleted file mode 100644 index 3e3018da79c5c24d85af1687f6f0875530dcc7c6..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/contperceptual.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import sys - -sys.path.insert(0, '.') # nopep8 -from ldm.modules.losses_audio.vqperceptual import * - - -class LPAPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPAPS().eval()# LPIPS用于日常图像,而LPAPS用于梅尔谱图 - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm, - ).apply(weights_init) - 
self.discriminator_iter_start = disc_start - if disc_loss == "hinge": - self.disc_loss = hinge_d_loss - elif disc_loss == "vanilla": - self.disc_loss = vanilla_d_loss - else: - raise ValueError(f"Unknown GAN loss '{disc_loss}'.") - print(f"LPAPSWithDiscriminator running with {disc_loss} loss.") - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", weights=None): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - # print(f"p_loss {p_loss}") - rec_loss = rec_loss + self.perceptual_weight * p_loss - else: - p_loss = torch.tensor([0.0]) - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), - "{}/logvar".format(split): self.logvar.detach(), - "{}/kl_loss".format(split): kl_loss.detach().mean(), - "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, 
global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log - - diff --git a/spaces/AgentVerse/agentVerse/agentverse_command/__init__.py b/spaces/AgentVerse/agentVerse/agentverse_command/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/scroller.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/scroller.js deleted file mode 100644 index 5f6bd2f00d882e55735af0a8592bfb6a9a694b0e..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/scroller.js +++ /dev/null @@ -1,2 +0,0 @@ -import Scroller from './input/scroller/Scroller.js'; -export default Scroller; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-components.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-components.js deleted file mode 100644 index f059f3eb560f8debddacfb5db161c0080274dcfa..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-components.js +++ /dev/null @@ -1,39 +0,0 @@ -import Audio from './audio/Audio.js'; -import Ball from './ball/Ball.js'; -import Bars from './bars/Bars.js'; -import Box from './box/Box.js'; -import Clock from './clock/Clock.js'; -import Cube from './cube/Cube.js'; -import Custom from './custom/Custom.js'; -import Dots from './dots/Dots.js'; -import Facebook from './facebook/Facebook.js'; -import Grid from './grid/Grid.js'; -import Los from './los/Los.js'; -import Orbit from './orbit/Orbit.js'; -import Oval from './oval/Oval.js'; -import Pie from './pie/Pie.js'; -import Puff from './puff/Puff.js'; -import Radio from './radio/Radio.js'; -import Rings from './rings/Rings.js'; -import Spinner from './spinner/Spinner.js'; - -export { - Audio, - Ball, - Bars, - Box, - Clock, - Cube, - Custom, - Dots, - Facebook, - Grid, - Los, - Orbit, - Oval, - Pie, - Puff, - Radio, - Rings, - Spinner -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.d.ts deleted file mode 100644 index 202d5f243587f166dbc733d2da38313ce3aa7607..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.d.ts +++ /dev/null @@ -1,65 +0,0 @@ -// import * as Phaser from 'phaser'; -import Sizer from '../sizer/Sizer'; -import OpenCloseTransition from '../../../plugins/behaviors/openclosetransition/OpenCloseTransition'; - -export default Folder; - -declare namespace Folder { - - interface IConfig extends Sizer.IConfig { - background?: Phaser.GameObjects.GameObject, - - title: Phaser.GameObjects.GameObject, - - child: Phaser.GameObjects.GameObject, - customChildOrigin?: boolean, - - toggleByTarget?: Phaser.GameObjects.GameObject, - toggleClickConfig?: { - mode?: 0 | 1 | 'pointerdown' | 'pointerup' | 'press' | 'release', - clickInterval?: number, - threshold?: number, - }, - - align?: { - title?: Sizer.AlignTypes, - child?: 
Sizer.AlignTypes, - }, - - expand?: { - title?: boolean, - child?: boolean, - }, - - transition?: { - duration?: number, - expandCallback?: OpenCloseTransition.TransitCallbackType, - collapseCallback?: OpenCloseTransition.TransitCallbackType, - }, - - reLayoutTarget?: Phaser.GameObjects.GameObject, - - onExpandStart?: (folder: this) => void, - onExpandComplete?: (folder: this) => void, - onCollapseStart?: (folder: this) => void, - onCollapseComplete?: (folder: this) => void, - } -} - -declare class Folder extends Sizer { - constructor( - scene: Phaser.Scene, - config?: Folder.IConfig - ); - - setTransitionDuration(duration?: number): this; - transitionDuration: number; - - setExpandCallback(callback?: OpenCloseTransition.TransitCallbackType): this; - setCollapseCallback(callback?: OpenCloseTransition.TransitCallbackType): this; - - expand(duration?: number): this; - collapse(duration?: number): this; - toggle(duration?: number): this; - readonly expanded: boolean; -} \ No newline at end of file diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/english.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/english.py deleted file mode 100644 index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = 
mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/Alpaca233/SadTalker/src/facerender/animate.py b/spaces/Alpaca233/SadTalker/src/facerender/animate.py deleted file mode 100644 index 781f5a3318a086049cc6b74393073ddda7001d5e..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/facerender/animate.py +++ /dev/null @@ -1,257 +0,0 @@ -import os -import cv2 -import yaml -import numpy as np -import warnings -from skimage import img_as_ubyte -import safetensors -import safetensors.torch -warnings.filterwarnings('ignore') - - -import imageio -import torch -import torchvision - - -from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector -from src.facerender.modules.mapping import MappingNet -from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator -from src.facerender.modules.make_animation import make_animation - -from pydub import AudioSegment -from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list -from src.utils.paste_pic import paste_pic -from src.utils.videoio import save_video_with_watermark - -try: - import webui # in webui - in_webui = True -except: - in_webui = False - -class AnimateFromCoeff(): - - def __init__(self, sadtalker_path, device): - - with open(sadtalker_path['facerender_yaml']) as f: - config = yaml.safe_load(f) - - generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'], - **config['model_params']['common_params']) - kp_extractor = KPDetector(**config['model_params']['kp_detector_params'], - **config['model_params']['common_params']) - he_estimator = HEEstimator(**config['model_params']['he_estimator_params'], - **config['model_params']['common_params']) - mapping = MappingNet(**config['model_params']['mapping_params']) - - generator.to(device) - kp_extractor.to(device) - he_estimator.to(device) - mapping.to(device) - for param in generator.parameters(): - param.requires_grad = False - for param in kp_extractor.parameters(): - param.requires_grad = False - for param in he_estimator.parameters(): - param.requires_grad = False - for param in mapping.parameters(): - param.requires_grad = False - - if sadtalker_path is not None: - if 'checkpoint' in sadtalker_path: # use safe tensor - self.load_cpk_facevid2vid_safetensor(sadtalker_path['checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=None) - else: - self.load_cpk_facevid2vid(sadtalker_path['free_view_checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator) - else: - raise AttributeError("Checkpoint should be specified for video head pose estimator.") - - if sadtalker_path['mappingnet_checkpoint'] is not None: - self.load_cpk_mapping(sadtalker_path['mappingnet_checkpoint'], mapping=mapping) - else: - raise AttributeError("Checkpoint should be specified for video head pose estimator.") - - self.kp_extractor = kp_extractor - self.generator = generator - self.he_estimator = he_estimator - self.mapping = mapping - - self.kp_extractor.eval() - self.generator.eval() - self.he_estimator.eval() - self.mapping.eval() - - self.device = device - - def load_cpk_facevid2vid_safetensor(self, checkpoint_path, generator=None, - kp_detector=None, he_estimator=None, - device="cpu"): - - 
checkpoint = safetensors.torch.load_file(checkpoint_path) - - if generator is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'generator' in k: - x_generator[k.replace('generator.', '')] = v - generator.load_state_dict(x_generator) - if kp_detector is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'kp_extractor' in k: - x_generator[k.replace('kp_extractor.', '')] = v - kp_detector.load_state_dict(x_generator) - if he_estimator is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'he_estimator' in k: - x_generator[k.replace('he_estimator.', '')] = v - he_estimator.load_state_dict(x_generator) - - return None - - def load_cpk_facevid2vid(self, checkpoint_path, generator=None, discriminator=None, - kp_detector=None, he_estimator=None, optimizer_generator=None, - optimizer_discriminator=None, optimizer_kp_detector=None, - optimizer_he_estimator=None, device="cpu"): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if generator is not None: - generator.load_state_dict(checkpoint['generator']) - if kp_detector is not None: - kp_detector.load_state_dict(checkpoint['kp_detector']) - if he_estimator is not None: - he_estimator.load_state_dict(checkpoint['he_estimator']) - if discriminator is not None: - try: - discriminator.load_state_dict(checkpoint['discriminator']) - except: - print ('No discriminator in the state-dict. Dicriminator will be randomly initialized') - if optimizer_generator is not None: - optimizer_generator.load_state_dict(checkpoint['optimizer_generator']) - if optimizer_discriminator is not None: - try: - optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator']) - except RuntimeError as e: - print ('No discriminator optimizer in the state-dict. 
Optimizer will be not initialized') - if optimizer_kp_detector is not None: - optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector']) - if optimizer_he_estimator is not None: - optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator']) - - return checkpoint['epoch'] - - def load_cpk_mapping(self, checkpoint_path, mapping=None, discriminator=None, - optimizer_mapping=None, optimizer_discriminator=None, device='cpu'): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if mapping is not None: - mapping.load_state_dict(checkpoint['mapping']) - if discriminator is not None: - discriminator.load_state_dict(checkpoint['discriminator']) - if optimizer_mapping is not None: - optimizer_mapping.load_state_dict(checkpoint['optimizer_mapping']) - if optimizer_discriminator is not None: - optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator']) - - return checkpoint['epoch'] - - def generate(self, x, video_save_dir, pic_path, crop_info, enhancer=None, background_enhancer=None, preprocess='crop', img_size=256): - - source_image=x['source_image'].type(torch.FloatTensor) - source_semantics=x['source_semantics'].type(torch.FloatTensor) - target_semantics=x['target_semantics_list'].type(torch.FloatTensor) - source_image=source_image.to(self.device) - source_semantics=source_semantics.to(self.device) - target_semantics=target_semantics.to(self.device) - if 'yaw_c_seq' in x: - yaw_c_seq = x['yaw_c_seq'].type(torch.FloatTensor) - yaw_c_seq = x['yaw_c_seq'].to(self.device) - else: - yaw_c_seq = None - if 'pitch_c_seq' in x: - pitch_c_seq = x['pitch_c_seq'].type(torch.FloatTensor) - pitch_c_seq = x['pitch_c_seq'].to(self.device) - else: - pitch_c_seq = None - if 'roll_c_seq' in x: - roll_c_seq = x['roll_c_seq'].type(torch.FloatTensor) - roll_c_seq = x['roll_c_seq'].to(self.device) - else: - roll_c_seq = None - - frame_num = x['frame_num'] - - predictions_video = make_animation(source_image, source_semantics, target_semantics, - self.generator, self.kp_extractor, self.he_estimator, self.mapping, - yaw_c_seq, pitch_c_seq, roll_c_seq, use_exp = True) - - predictions_video = predictions_video.reshape((-1,)+predictions_video.shape[2:]) - predictions_video = predictions_video[:frame_num] - - video = [] - for idx in range(predictions_video.shape[0]): - image = predictions_video[idx] - image = np.transpose(image.data.cpu().numpy(), [1, 2, 0]).astype(np.float32) - video.append(image) - result = img_as_ubyte(video) - - ### the generated video is 256x256, so we keep the aspect ratio, - original_size = crop_info[0] - if original_size: - result = [ cv2.resize(result_i,(img_size, int(img_size * original_size[1]/original_size[0]) )) for result_i in result ] - - video_name = x['video_name'] + '.mp4' - path = os.path.join(video_save_dir, 'temp_'+video_name) - - imageio.mimsave(path, result, fps=float(25)) - - av_path = os.path.join(video_save_dir, video_name) - return_path = av_path - - audio_path = x['audio_path'] - audio_name = os.path.splitext(os.path.split(audio_path)[-1])[0] - new_audio_path = os.path.join(video_save_dir, audio_name+'.wav') - start_time = 0 - # cog will not keep the .mp3 filename - sound = AudioSegment.from_file(audio_path) - frames = frame_num - end_time = start_time + frames*1/25*1000 - word1=sound.set_frame_rate(16000) - word = word1[start_time:end_time] - word.export(new_audio_path, format="wav") - - save_video_with_watermark(path, new_audio_path, av_path, watermark= False) - print(f'The generated video is named 
{video_save_dir}/{video_name}') - - if 'full' in preprocess.lower(): - # only add watermark to the full image. - video_name_full = x['video_name'] + '_full.mp4' - full_video_path = os.path.join(video_save_dir, video_name_full) - return_path = full_video_path - paste_pic(path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop= True if 'ext' in preprocess.lower() else False) - print(f'The generated video is named {video_save_dir}/{video_name_full}') - else: - full_video_path = av_path - - #### paste back then enhancers - if enhancer: - video_name_enhancer = x['video_name'] + '_enhanced.mp4' - enhanced_path = os.path.join(video_save_dir, 'temp_'+video_name_enhancer) - av_path_enhancer = os.path.join(video_save_dir, video_name_enhancer) - return_path = av_path_enhancer - - try: - enhanced_images_gen_with_len = enhancer_generator_with_len(full_video_path, method=enhancer, bg_upsampler=background_enhancer) - imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25)) - except: - enhanced_images_gen_with_len = enhancer_list(full_video_path, method=enhancer, bg_upsampler=background_enhancer) - imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25)) - - save_video_with_watermark(enhanced_path, new_audio_path, av_path_enhancer, watermark= False) - print(f'The generated video is named {video_save_dir}/{video_name_enhancer}') - os.remove(enhanced_path) - - os.remove(path) - os.remove(new_audio_path) - - return return_path - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md deleted file mode 100644 index 8d09602d860554f847f2936fe2198deb871c7382..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.md +++ /dev/null @@ -1,59 +0,0 @@ - - -# Text-to-image - -The Stable Diffusion model was created by researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [Runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionPipeline`] is capable of generating photorealistic images given any text input. It's trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. - -The abstract from the paper is: - -*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. 
To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.* - - - -Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! - -If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations! - - - -## StableDiffusionPipeline - -[[autodoc]] StableDiffusionPipeline - - all - - __call__ - - enable_attention_slicing - - disable_attention_slicing - - enable_vae_slicing - - disable_vae_slicing - - enable_xformers_memory_efficient_attention - - disable_xformers_memory_efficient_attention - - enable_vae_tiling - - disable_vae_tiling - - load_textual_inversion - - from_single_file - - load_lora_weights - - save_lora_weights - -## StableDiffusionPipelineOutput - -[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput - -## FlaxStableDiffusionPipeline - -[[autodoc]] FlaxStableDiffusionPipeline - - all - - __call__ - -## FlaxStableDiffusionPipelineOutput - -[[autodoc]] pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_controlnet_reference.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_controlnet_reference.py deleted file mode 100644 index f52da6f5a193e4a3b311a11778174fa3417105e3..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_controlnet_reference.py +++ /dev/null @@ -1,834 +0,0 @@ -# Inspired by: https://github.com/Mikubill/sd-webui-controlnet/discussions/1236 and https://github.com/Mikubill/sd-webui-controlnet/discussions/1280 -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -import numpy as np -import PIL.Image -import torch - -from diffusers import StableDiffusionControlNetPipeline -from diffusers.models import ControlNetModel -from diffusers.models.attention import BasicTransformerBlock -from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, CrossAttnUpBlock2D, DownBlock2D, UpBlock2D -from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel -from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput -from diffusers.utils import is_compiled_module, 
logging, randn_tensor - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import cv2 - >>> import torch - >>> import numpy as np - >>> from PIL import Image - >>> from diffusers import UniPCMultistepScheduler - >>> from diffusers.utils import load_image - - >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png") - - >>> # get canny image - >>> image = cv2.Canny(np.array(input_image), 100, 200) - >>> image = image[:, :, None] - >>> image = np.concatenate([image, image, image], axis=2) - >>> canny_image = Image.fromarray(image) - - >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) - >>> pipe = StableDiffusionControlNetReferencePipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16 - ).to('cuda:0') - - >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe_controlnet.scheduler.config) - - >>> result_img = pipe(ref_image=input_image, - prompt="1girl", - image=canny_image, - num_inference_steps=20, - reference_attn=True, - reference_adain=True).images[0] - - >>> result_img.show() - ``` -""" - - -def torch_dfs(model: torch.nn.Module): - result = [model] - for child in model.children(): - result += torch_dfs(child) - return result - - -class StableDiffusionControlNetReferencePipeline(StableDiffusionControlNetPipeline): - def prepare_ref_latents(self, refimage, batch_size, dtype, device, generator, do_classifier_free_guidance): - refimage = refimage.to(device=device, dtype=dtype) - - # encode the mask image into latents space so we can concatenate it to the latents - if isinstance(generator, list): - ref_image_latents = [ - self.vae.encode(refimage[i : i + 1]).latent_dist.sample(generator=generator[i]) - for i in range(batch_size) - ] - ref_image_latents = torch.cat(ref_image_latents, dim=0) - else: - ref_image_latents = self.vae.encode(refimage).latent_dist.sample(generator=generator) - ref_image_latents = self.vae.config.scaling_factor * ref_image_latents - - # duplicate mask and ref_image_latents for each generation per prompt, using mps friendly method - if ref_image_latents.shape[0] < batch_size: - if not batch_size % ref_image_latents.shape[0] == 0: - raise ValueError( - "The passed images and the required batch size don't match. Images are supposed to be duplicated" - f" to a total batch size of {batch_size}, but {ref_image_latents.shape[0]} images were passed." - " Make sure the number of images that you pass is divisible by the total requested batch size." 
- ) - ref_image_latents = ref_image_latents.repeat(batch_size // ref_image_latents.shape[0], 1, 1, 1) - - ref_image_latents = torch.cat([ref_image_latents] * 2) if do_classifier_free_guidance else ref_image_latents - - # aligning device to prevent device errors when concating it with the latent model input - ref_image_latents = ref_image_latents.to(device=device, dtype=dtype) - return ref_image_latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[ - torch.FloatTensor, - PIL.Image.Image, - np.ndarray, - List[torch.FloatTensor], - List[PIL.Image.Image], - List[np.ndarray], - ] = None, - ref_image: Union[torch.FloatTensor, PIL.Image.Image] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - controlnet_conditioning_scale: Union[float, List[float]] = 1.0, - guess_mode: bool = False, - attention_auto_machine_weight: float = 1.0, - gn_auto_machine_weight: float = 1.0, - style_fidelity: float = 0.5, - reference_attn: bool = True, - reference_adain: bool = True, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,: - `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`): - The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If - the type is specified as `Torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can - also be accepted as an image. The dimensions of the output image defaults to `image`'s dimensions. If - height and/or width are passed, `image` is resized according to them. If multiple ControlNets are - specified in init, images must be passed as a list such that each element of the list can be correctly - batched for input to a single controlnet. - ref_image (`torch.FloatTensor`, `PIL.Image.Image`): - The Reference Control input condition. Reference Control uses this input condition to generate guidance to Unet. If - the type is specified as `Torch.FloatTensor`, it is passed to Reference Control as is. `PIL.Image.Image` can - also be accepted as an image. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. 
- guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - controlnet_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0): - The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added - to the residual in the original unet. If multiple ControlNets are specified in init, you can set the - corresponding scale as a list. 
- guess_mode (`bool`, *optional*, defaults to `False`): - In this mode, the ControlNet encoder will try best to recognize the content of the input image even if - you remove all prompts. The `guidance_scale` between 3.0 and 5.0 is recommended. - attention_auto_machine_weight (`float`): - Weight of using reference query for self attention's context. - If attention_auto_machine_weight=1.0, use reference query for all self attention's context. - gn_auto_machine_weight (`float`): - Weight of using reference adain. If gn_auto_machine_weight=2.0, use all reference adain plugins. - style_fidelity (`float`): - style fidelity of ref_uncond_xt. If style_fidelity=1.0, control more important, - elif style_fidelity=0.0, prompt more important, else balanced. - reference_attn (`bool`): - Whether to use reference query for self attention's context. - reference_adain (`bool`): - Whether to use reference adain. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - assert reference_attn or reference_adain, "`reference_attn` or `reference_adain` must be True." - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - image, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - controlnet_conditioning_scale, - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - controlnet = self.controlnet._orig_mod if is_compiled_module(self.controlnet) else self.controlnet - - if isinstance(controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float): - controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(controlnet.nets) - - global_pool_conditions = ( - controlnet.config.global_pool_conditions - if isinstance(controlnet, ControlNetModel) - else controlnet.nets[0].config.global_pool_conditions - ) - guess_mode = guess_mode or global_pool_conditions - - # 3. Encode input prompt - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - lora_scale=text_encoder_lora_scale, - ) - - # 4. 
Prepare image - if isinstance(controlnet, ControlNetModel): - image = self.prepare_image( - image=image, - width=width, - height=height, - batch_size=batch_size * num_images_per_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - dtype=controlnet.dtype, - do_classifier_free_guidance=do_classifier_free_guidance, - guess_mode=guess_mode, - ) - height, width = image.shape[-2:] - elif isinstance(controlnet, MultiControlNetModel): - images = [] - - for image_ in image: - image_ = self.prepare_image( - image=image_, - width=width, - height=height, - batch_size=batch_size * num_images_per_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - dtype=controlnet.dtype, - do_classifier_free_guidance=do_classifier_free_guidance, - guess_mode=guess_mode, - ) - - images.append(image_) - - image = images - height, width = image[0].shape[-2:] - else: - assert False - - # 5. Preprocess reference image - ref_image = self.prepare_image( - image=ref_image, - width=width, - height=height, - batch_size=batch_size * num_images_per_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - dtype=prompt_embeds.dtype, - ) - - # 6. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 7. Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 8. Prepare reference latent variables - ref_image_latents = self.prepare_ref_latents( - ref_image, - batch_size * num_images_per_prompt, - prompt_embeds.dtype, - device, - generator, - do_classifier_free_guidance, - ) - - # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 9. Modify self attention and group norm - MODE = "write" - uc_mask = ( - torch.Tensor([1] * batch_size * num_images_per_prompt + [0] * batch_size * num_images_per_prompt) - .type_as(ref_image_latents) - .bool() - ) - - def hacked_basic_transformer_inner_forward( - self, - hidden_states: torch.FloatTensor, - attention_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - timestep: Optional[torch.LongTensor] = None, - cross_attention_kwargs: Dict[str, Any] = None, - class_labels: Optional[torch.LongTensor] = None, - ): - if self.use_ada_layer_norm: - norm_hidden_states = self.norm1(hidden_states, timestep) - elif self.use_ada_layer_norm_zero: - norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1( - hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype - ) - else: - norm_hidden_states = self.norm1(hidden_states) - - # 1. 
Self-Attention - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - if self.only_cross_attention: - attn_output = self.attn1( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - else: - if MODE == "write": - self.bank.append(norm_hidden_states.detach().clone()) - attn_output = self.attn1( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - if MODE == "read": - if attention_auto_machine_weight > self.attn_weight: - attn_output_uc = self.attn1( - norm_hidden_states, - encoder_hidden_states=torch.cat([norm_hidden_states] + self.bank, dim=1), - # attention_mask=attention_mask, - **cross_attention_kwargs, - ) - attn_output_c = attn_output_uc.clone() - if do_classifier_free_guidance and style_fidelity > 0: - attn_output_c[uc_mask] = self.attn1( - norm_hidden_states[uc_mask], - encoder_hidden_states=norm_hidden_states[uc_mask], - **cross_attention_kwargs, - ) - attn_output = style_fidelity * attn_output_c + (1.0 - style_fidelity) * attn_output_uc - self.bank.clear() - else: - attn_output = self.attn1( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - if self.use_ada_layer_norm_zero: - attn_output = gate_msa.unsqueeze(1) * attn_output - hidden_states = attn_output + hidden_states - - if self.attn2 is not None: - norm_hidden_states = ( - self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states) - ) - - # 2. Cross-Attention - attn_output = self.attn2( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=encoder_attention_mask, - **cross_attention_kwargs, - ) - hidden_states = attn_output + hidden_states - - # 3. 
Feed-forward - norm_hidden_states = self.norm3(hidden_states) - - if self.use_ada_layer_norm_zero: - norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None] - - ff_output = self.ff(norm_hidden_states) - - if self.use_ada_layer_norm_zero: - ff_output = gate_mlp.unsqueeze(1) * ff_output - - hidden_states = ff_output + hidden_states - - return hidden_states - - def hacked_mid_forward(self, *args, **kwargs): - eps = 1e-6 - x = self.original_forward(*args, **kwargs) - if MODE == "write": - if gn_auto_machine_weight >= self.gn_weight: - var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0) - self.mean_bank.append(mean) - self.var_bank.append(var) - if MODE == "read": - if len(self.mean_bank) > 0 and len(self.var_bank) > 0: - var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0) - std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 - mean_acc = sum(self.mean_bank) / float(len(self.mean_bank)) - var_acc = sum(self.var_bank) / float(len(self.var_bank)) - std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 - x_uc = (((x - mean) / std) * std_acc) + mean_acc - x_c = x_uc.clone() - if do_classifier_free_guidance and style_fidelity > 0: - x_c[uc_mask] = x[uc_mask] - x = style_fidelity * x_c + (1.0 - style_fidelity) * x_uc - self.mean_bank = [] - self.var_bank = [] - return x - - def hack_CrossAttnDownBlock2D_forward( - self, - hidden_states: torch.FloatTensor, - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ): - eps = 1e-6 - - # TODO(Patrick, William) - attention mask is not used - output_states = () - - for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - return_dict=False, - )[0] - if MODE == "write": - if gn_auto_machine_weight >= self.gn_weight: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - self.mean_bank.append([mean]) - self.var_bank.append([var]) - if MODE == "read": - if len(self.mean_bank) > 0 and len(self.var_bank) > 0: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 - mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) - var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) - std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 - hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc - hidden_states_c = hidden_states_uc.clone() - if do_classifier_free_guidance and style_fidelity > 0: - hidden_states_c[uc_mask] = hidden_states[uc_mask] - hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc - - output_states = output_states + (hidden_states,) - - if MODE == "read": - self.mean_bank = [] - self.var_bank = [] - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states = output_states + (hidden_states,) - - return hidden_states, output_states - - def 
hacked_DownBlock2D_forward(self, hidden_states, temb=None): - eps = 1e-6 - - output_states = () - - for i, resnet in enumerate(self.resnets): - hidden_states = resnet(hidden_states, temb) - - if MODE == "write": - if gn_auto_machine_weight >= self.gn_weight: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - self.mean_bank.append([mean]) - self.var_bank.append([var]) - if MODE == "read": - if len(self.mean_bank) > 0 and len(self.var_bank) > 0: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 - mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) - var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) - std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 - hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc - hidden_states_c = hidden_states_uc.clone() - if do_classifier_free_guidance and style_fidelity > 0: - hidden_states_c[uc_mask] = hidden_states[uc_mask] - hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc - - output_states = output_states + (hidden_states,) - - if MODE == "read": - self.mean_bank = [] - self.var_bank = [] - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states = output_states + (hidden_states,) - - return hidden_states, output_states - - def hacked_CrossAttnUpBlock2D_forward( - self, - hidden_states: torch.FloatTensor, - res_hidden_states_tuple: Tuple[torch.FloatTensor, ...], - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - upsample_size: Optional[int] = None, - attention_mask: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ): - eps = 1e-6 - # TODO(Patrick, William) - attention mask is not used - for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - return_dict=False, - )[0] - - if MODE == "write": - if gn_auto_machine_weight >= self.gn_weight: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - self.mean_bank.append([mean]) - self.var_bank.append([var]) - if MODE == "read": - if len(self.mean_bank) > 0 and len(self.var_bank) > 0: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 - mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) - var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) - std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 - hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc - hidden_states_c = hidden_states_uc.clone() - if do_classifier_free_guidance and style_fidelity > 0: - hidden_states_c[uc_mask] = hidden_states[uc_mask] - hidden_states = style_fidelity * hidden_states_c + (1.0 - 
style_fidelity) * hidden_states_uc - - if MODE == "read": - self.mean_bank = [] - self.var_bank = [] - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - def hacked_UpBlock2D_forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - eps = 1e-6 - for i, resnet in enumerate(self.resnets): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - hidden_states = resnet(hidden_states, temb) - - if MODE == "write": - if gn_auto_machine_weight >= self.gn_weight: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - self.mean_bank.append([mean]) - self.var_bank.append([var]) - if MODE == "read": - if len(self.mean_bank) > 0 and len(self.var_bank) > 0: - var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) - std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 - mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) - var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) - std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 - hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc - hidden_states_c = hidden_states_uc.clone() - if do_classifier_free_guidance and style_fidelity > 0: - hidden_states_c[uc_mask] = hidden_states[uc_mask] - hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc - - if MODE == "read": - self.mean_bank = [] - self.var_bank = [] - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - if reference_attn: - attn_modules = [module for module in torch_dfs(self.unet) if isinstance(module, BasicTransformerBlock)] - attn_modules = sorted(attn_modules, key=lambda x: -x.norm1.normalized_shape[0]) - - for i, module in enumerate(attn_modules): - module._original_inner_forward = module.forward - module.forward = hacked_basic_transformer_inner_forward.__get__(module, BasicTransformerBlock) - module.bank = [] - module.attn_weight = float(i) / float(len(attn_modules)) - - if reference_adain: - gn_modules = [self.unet.mid_block] - self.unet.mid_block.gn_weight = 0 - - down_blocks = self.unet.down_blocks - for w, module in enumerate(down_blocks): - module.gn_weight = 1.0 - float(w) / float(len(down_blocks)) - gn_modules.append(module) - - up_blocks = self.unet.up_blocks - for w, module in enumerate(up_blocks): - module.gn_weight = float(w) / float(len(up_blocks)) - gn_modules.append(module) - - for i, module in enumerate(gn_modules): - if getattr(module, "original_forward", None) is None: - module.original_forward = module.forward - if i == 0: - # mid_block - module.forward = hacked_mid_forward.__get__(module, torch.nn.Module) - elif isinstance(module, CrossAttnDownBlock2D): - module.forward = hack_CrossAttnDownBlock2D_forward.__get__(module, CrossAttnDownBlock2D) - elif isinstance(module, DownBlock2D): - module.forward = hacked_DownBlock2D_forward.__get__(module, DownBlock2D) - elif isinstance(module, CrossAttnUpBlock2D): - module.forward = hacked_CrossAttnUpBlock2D_forward.__get__(module, CrossAttnUpBlock2D) - elif isinstance(module, UpBlock2D): - module.forward = hacked_UpBlock2D_forward.__get__(module, UpBlock2D) - module.mean_bank = [] - 
module.var_bank = [] - module.gn_weight *= 2 - - # 11. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # controlnet(s) inference - if guess_mode and do_classifier_free_guidance: - # Infer ControlNet only for the conditional batch. - control_model_input = latents - control_model_input = self.scheduler.scale_model_input(control_model_input, t) - controlnet_prompt_embeds = prompt_embeds.chunk(2)[1] - else: - control_model_input = latent_model_input - controlnet_prompt_embeds = prompt_embeds - - down_block_res_samples, mid_block_res_sample = self.controlnet( - control_model_input, - t, - encoder_hidden_states=controlnet_prompt_embeds, - controlnet_cond=image, - conditioning_scale=controlnet_conditioning_scale, - guess_mode=guess_mode, - return_dict=False, - ) - - if guess_mode and do_classifier_free_guidance: - # Infered ControlNet only for the conditional batch. - # To apply the output of ControlNet to both the unconditional and conditional batches, - # add 0 to the unconditional batch to keep it unchanged. - down_block_res_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_block_res_samples] - mid_block_res_sample = torch.cat([torch.zeros_like(mid_block_res_sample), mid_block_res_sample]) - - # ref only part - noise = randn_tensor( - ref_image_latents.shape, generator=generator, device=device, dtype=ref_image_latents.dtype - ) - ref_xt = self.scheduler.add_noise( - ref_image_latents, - noise, - t.reshape( - 1, - ), - ) - ref_xt = self.scheduler.scale_model_input(ref_xt, t) - - MODE = "write" - self.unet( - ref_xt, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - ) - - # predict the noise residual - MODE = "read" - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - down_block_additional_residuals=down_block_res_samples, - mid_block_additional_residual=mid_block_res_sample, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # If we do sequential model offloading, let's offload unet and controlnet - # manually for max memory savings - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.unet.to("cpu") - self.controlnet.to("cpu") - torch.cuda.empty_cache() - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept 
is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/__init__.py deleted file mode 100644 index 4997a2e4056bb291c557deef65957fc873ae9aa1..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -from ...utils import ( - OptionalDependencyNotAvailable, - is_torch_available, - is_transformers_available, -) - - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import * -else: - from .pipeline_kandinsky2_2 import KandinskyV22Pipeline - from .pipeline_kandinsky2_2_combined import ( - KandinskyV22CombinedPipeline, - KandinskyV22Img2ImgCombinedPipeline, - KandinskyV22InpaintCombinedPipeline, - ) - from .pipeline_kandinsky2_2_controlnet import KandinskyV22ControlnetPipeline - from .pipeline_kandinsky2_2_controlnet_img2img import KandinskyV22ControlnetImg2ImgPipeline - from .pipeline_kandinsky2_2_img2img import KandinskyV22Img2ImgPipeline - from .pipeline_kandinsky2_2_inpainting import KandinskyV22InpaintPipeline - from .pipeline_kandinsky2_2_prior import KandinskyV22PriorPipeline - from .pipeline_kandinsky2_2_prior_emb2emb import KandinskyV22PriorEmb2EmbPipeline diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py deleted file mode 100644 index bb3922e77fd1c59a91180d3dc1d67faedf3a1e0c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py +++ /dev/null @@ -1,267 +0,0 @@ -# Copyright 2022 The Music Spectrogram Diffusion Authors. -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import math -from typing import Any, Callable, List, Optional, Tuple, Union - -import numpy as np -import torch - -from ...models import T5FilmDecoder -from ...schedulers import DDPMScheduler -from ...utils import is_onnx_available, logging, randn_tensor - - -if is_onnx_available(): - from ..onnx_utils import OnnxRuntimeModel - -from ..pipeline_utils import AudioPipelineOutput, DiffusionPipeline -from .continous_encoder import SpectrogramContEncoder -from .notes_encoder import SpectrogramNotesEncoder - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -TARGET_FEATURE_LENGTH = 256 - - -class SpectrogramDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for unconditional audio generation. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - notes_encoder ([`SpectrogramNotesEncoder`]): - continuous_encoder ([`SpectrogramContEncoder`]): - decoder ([`T5FilmDecoder`]): - A [`T5FilmDecoder`] to denoise the encoded audio latents. - scheduler ([`DDPMScheduler`]): - A scheduler to be used in combination with `decoder` to denoise the encoded audio latents. - melgan ([`OnnxRuntimeModel`]): - """ - _optional_components = ["melgan"] - - def __init__( - self, - notes_encoder: SpectrogramNotesEncoder, - continuous_encoder: SpectrogramContEncoder, - decoder: T5FilmDecoder, - scheduler: DDPMScheduler, - melgan: OnnxRuntimeModel if is_onnx_available() else Any, - ) -> None: - super().__init__() - - # From MELGAN - self.min_value = math.log(1e-5) # Matches MelGAN training. - self.max_value = 4.0 # Largest value for most examples - self.n_dims = 128 - - self.register_modules( - notes_encoder=notes_encoder, - continuous_encoder=continuous_encoder, - decoder=decoder, - scheduler=scheduler, - melgan=melgan, - ) - - def scale_features(self, features, output_range=(-1.0, 1.0), clip=False): - """Linearly scale features to network outputs range.""" - min_out, max_out = output_range - if clip: - features = torch.clip(features, self.min_value, self.max_value) - # Scale to [0, 1]. - zero_one = (features - self.min_value) / (self.max_value - self.min_value) - # Scale to [min_out, max_out]. - return zero_one * (max_out - min_out) + min_out - - def scale_to_features(self, outputs, input_range=(-1.0, 1.0), clip=False): - """Invert by linearly scaling network outputs to features range.""" - min_out, max_out = input_range - outputs = torch.clip(outputs, min_out, max_out) if clip else outputs - # Scale to [0, 1]. - zero_one = (outputs - min_out) / (max_out - min_out) - # Scale to [self.min_value, self.max_value]. 
- return zero_one * (self.max_value - self.min_value) + self.min_value - - def encode(self, input_tokens, continuous_inputs, continuous_mask): - tokens_mask = input_tokens > 0 - tokens_encoded, tokens_mask = self.notes_encoder( - encoder_input_tokens=input_tokens, encoder_inputs_mask=tokens_mask - ) - - continuous_encoded, continuous_mask = self.continuous_encoder( - encoder_inputs=continuous_inputs, encoder_inputs_mask=continuous_mask - ) - - return [(tokens_encoded, tokens_mask), (continuous_encoded, continuous_mask)] - - def decode(self, encodings_and_masks, input_tokens, noise_time): - timesteps = noise_time - if not torch.is_tensor(timesteps): - timesteps = torch.tensor([timesteps], dtype=torch.long, device=input_tokens.device) - elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps[None].to(input_tokens.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps * torch.ones(input_tokens.shape[0], dtype=timesteps.dtype, device=timesteps.device) - - logits = self.decoder( - encodings_and_masks=encodings_and_masks, decoder_input_tokens=input_tokens, decoder_noise_time=timesteps - ) - return logits - - @torch.no_grad() - def __call__( - self, - input_tokens: List[List[int]], - generator: Optional[torch.Generator] = None, - num_inference_steps: int = 100, - return_dict: bool = True, - output_type: str = "numpy", - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - ) -> Union[AudioPipelineOutput, Tuple]: - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - r""" - The call function to the pipeline for generation. - - Args: - input_tokens (`List[List[int]]`): - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality audio at the - expense of slower inference. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.AudioPipelineOutput`] instead of a plain tuple. - output_type (`str`, *optional*, defaults to `"numpy"`): - The output format of the generated audio. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. 
- - Example: - - ```py - >>> from diffusers import SpectrogramDiffusionPipeline, MidiProcessor - - >>> pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion") - >>> pipe = pipe.to("cuda") - >>> processor = MidiProcessor() - - >>> # Download MIDI from: wget http://www.piano-midi.de/midis/beethoven/beethoven_hammerklavier_2.mid - >>> output = pipe(processor("beethoven_hammerklavier_2.mid")) - - >>> audio = output.audios[0] - ``` - - Returns: - [`pipelines.AudioPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`pipelines.AudioPipelineOutput`] is returned, otherwise a `tuple` is - returned where the first element is a list with the generated audio. - """ - - pred_mel = np.zeros([1, TARGET_FEATURE_LENGTH, self.n_dims], dtype=np.float32) - full_pred_mel = np.zeros([1, 0, self.n_dims], np.float32) - ones = torch.ones((1, TARGET_FEATURE_LENGTH), dtype=bool, device=self.device) - - for i, encoder_input_tokens in enumerate(input_tokens): - if i == 0: - encoder_continuous_inputs = torch.from_numpy(pred_mel[:1].copy()).to( - device=self.device, dtype=self.decoder.dtype - ) - # The first chunk has no previous context. - encoder_continuous_mask = torch.zeros((1, TARGET_FEATURE_LENGTH), dtype=bool, device=self.device) - else: - # The full song pipeline does not feed in a context feature, so the mask - # will be all 0s after the feature converter. Because we know we're - # feeding in a full context chunk from the previous prediction, set it - # to all 1s. - encoder_continuous_mask = ones - - encoder_continuous_inputs = self.scale_features( - encoder_continuous_inputs, output_range=[-1.0, 1.0], clip=True - ) - - encodings_and_masks = self.encode( - input_tokens=torch.IntTensor([encoder_input_tokens]).to(device=self.device), - continuous_inputs=encoder_continuous_inputs, - continuous_mask=encoder_continuous_mask, - ) - - # Sample encoder_continuous_inputs shaped gaussian noise to begin loop - x = randn_tensor( - shape=encoder_continuous_inputs.shape, - generator=generator, - device=self.device, - dtype=self.decoder.dtype, - ) - - # set step values - self.scheduler.set_timesteps(num_inference_steps) - - # Denoising diffusion loop - for j, t in enumerate(self.progress_bar(self.scheduler.timesteps)): - output = self.decode( - encodings_and_masks=encodings_and_masks, - input_tokens=x, - noise_time=t / self.scheduler.config.num_train_timesteps, # rescale to [0, 1) - ) - - # Compute previous output: x_t -> x_t-1 - x = self.scheduler.step(output, t, x, generator=generator).prev_sample - - mel = self.scale_to_features(x, input_range=[-1.0, 1.0]) - encoder_continuous_inputs = mel[:1] - pred_mel = mel.cpu().float().numpy() - - full_pred_mel = np.concatenate([full_pred_mel, pred_mel[:1]], axis=1) - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, full_pred_mel) - - logger.info("Generated segment", i) - - if output_type == "numpy" and not is_onnx_available(): - raise ValueError( - "Cannot return output in 'np' format if ONNX is not available. Make sure to have ONNX installed or set 'output_type' to 'mel'." - ) - elif output_type == "numpy" and self.melgan is None: - raise ValueError( - "Cannot return output in 'np' format if melgan component is not defined. Make sure to define `self.melgan` or set 'output_type' to 'mel'." 
- ) - - if output_type == "numpy": - output = self.melgan(input_features=full_pred_mel.astype(np.float32)) - else: - output = full_pred_mel - - if not return_dict: - return (output,) - - return AudioPipelineOutput(audios=output) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py deleted file mode 100644 index 99edce7ef8575ea8b945905eb4bc176c264fb2d6..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py +++ /dev/null @@ -1,645 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import torch -from transformers import CLIPTextModel, CLIPTokenizer - -from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet3DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import ( - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline -from . import TextToVideoSDPipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import TextToVideoSDPipeline - >>> from diffusers.utils import export_to_video - - >>> pipe = TextToVideoSDPipeline.from_pretrained( - ... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" - ... 
) - >>> pipe.enable_model_cpu_offload() - - >>> prompt = "Spiderman is surfing" - >>> video_frames = pipe(prompt).frames - >>> video_path = export_to_video(video_frames) - >>> video_path - ``` -""" - - -def tensor2vid(video: torch.Tensor, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -> List[np.ndarray]: - # This code is copied from https://github.com/modelscope/modelscope/blob/1509fdb973e5871f37148a4b5e5964cafd43e64d/modelscope/pipelines/multi_modal/text_to_video_synthesis_pipeline.py#L78 - # reshape to ncfhw - mean = torch.tensor(mean, device=video.device).reshape(1, -1, 1, 1, 1) - std = torch.tensor(std, device=video.device).reshape(1, -1, 1, 1, 1) - # unnormalize back to [0,1] - video = video.mul_(std).add_(mean) - video.clamp_(0, 1) - # prepare the final outputs - i, c, f, h, w = video.shape - images = video.permute(2, 3, 0, 4, 1).reshape( - f, h, i * w, c - ) # 1st (frames, h, batch_size, w, c) 2nd (frames, h, batch_size * w, c) - images = images.unbind(dim=0) # prepare a list of indvidual (consecutive frames) - images = [(image.cpu().numpy() * 255).astype("uint8") for image in images] # f h w c - return images - - -class TextToVideoSDPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin): - r""" - Pipeline for text-to-video generation. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). - tokenizer (`CLIPTokenizer`): - A [`~transformers.CLIPTokenizer`] to tokenize text. - unet ([`UNet3DConditionModel`]): - A [`UNet3DConditionModel`] to denoise the encoded video latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet3DConditionModel, - scheduler: KarrasDiffusionSchedulers, - ): - super().__init__() - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling - def enable_vae_tiling(self): - r""" - Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to - compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow - processing larger images. - """ - self.vae.enable_tiling() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling - def disable_vae_tiling(self): - r""" - Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_tiling() - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a - time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs. - Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the - iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - lora_scale: Optional[float] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - lora_scale (`float`, *optional*): - A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. 
- """ - # set lora scale so that monkey patched LoRA - # function of text encoder can correctly access it - if lora_scale is not None and isinstance(self, LoraLoaderMixin): - self._lora_scale = lora_scale - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif prompt is not None and type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - - batch_size, channels, num_frames, height, width = latents.shape - latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width) - - image = self.vae.decode(latents).sample - video = ( - image[None, :] - .reshape( - ( - batch_size, - num_frames, - -1, - ) - + image.shape[2:] - ) - .permute(0, 2, 1, 3, 4) - ) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - video = video.float() - return video - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - def prepare_latents( - self, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None - ): - shape = ( - batch_size, - num_channels_latents, - num_frames, - height // self.vae_scale_factor, - width // self.vae_scale_factor, - ) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_frames: int = 16, - num_inference_steps: int = 50, - guidance_scale: float = 9.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "np", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. - height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The height in pixels of the generated video. - width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The width in pixels of the generated video. - num_frames (`int`, *optional*, defaults to 16): - The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds - amounts to 2 seconds of video. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality videos at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. Latents should be of shape - `(batch_size, num_channel, num_frames, height, width)`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. 
Can be used to easily tweak text inputs (prompt weighting). If not - provided, text embeddings are generated from the `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If - not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. - output_type (`str`, *optional*, defaults to `"np"`): - The output format of the generated video. Choose between `torch.FloatTensor` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] instead - of a plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in - [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Examples: - - Returns: - [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] is - returned, otherwise a `tuple` is returned where the first element is a list with the generated frames. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - num_images_per_prompt = 1 - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - lora_scale=text_encoder_lora_scale, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - num_frames, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. 
TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # reshape latents - bsz, channel, frames, width, height = latents.shape - latents = latents.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height) - noise_pred = noise_pred.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # reshape latents back - latents = latents[None, :].reshape(bsz, frames, channel, width, height).permute(0, 2, 1, 3, 4) - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if output_type == "latent": - return TextToVideoSDPipelineOutput(frames=latents) - - video_tensor = self.decode_latents(latents) - - if output_type == "pt": - video = video_tensor - else: - video = tensor2vid(video_tensor) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (video,) - - return TextToVideoSDPipelineOutput(frames=video) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/torch_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/torch_utils.py deleted file mode 100644 index 99ea4d8cf1d0b04b8f43d8d7a331247822374bcf..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/torch_utils.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -PyTorch utilities: Utilities related to PyTorch -""" -from typing import List, Optional, Tuple, Union - -from . 
import logging -from .import_utils import is_torch_available, is_torch_version - - -if is_torch_available(): - import torch - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -try: - from torch._dynamo import allow_in_graph as maybe_allow_in_graph -except (ImportError, ModuleNotFoundError): - - def maybe_allow_in_graph(cls): - return cls - - -def randn_tensor( - shape: Union[Tuple, List], - generator: Optional[Union[List["torch.Generator"], "torch.Generator"]] = None, - device: Optional["torch.device"] = None, - dtype: Optional["torch.dtype"] = None, - layout: Optional["torch.layout"] = None, -): - """A helper function to create random tensors on the desired `device` with the desired `dtype`. When - passing a list of generators, you can seed each batch size individually. If CPU generators are passed, the tensor - is always created on the CPU. - """ - # device on which tensor is created defaults to device - rand_device = device - batch_size = shape[0] - - layout = layout or torch.strided - device = device or torch.device("cpu") - - if generator is not None: - gen_device_type = generator.device.type if not isinstance(generator, list) else generator[0].device.type - if gen_device_type != device.type and gen_device_type == "cpu": - rand_device = "cpu" - if device != "mps": - logger.info( - f"The passed generator was created on 'cpu' even though a tensor on {device} was expected." - f" Tensors will be created on 'cpu' and then moved to {device}. Note that one can probably" - f" slighly speed up this function by passing a generator that was created on the {device} device." - ) - elif gen_device_type != device.type and gen_device_type == "cuda": - raise ValueError(f"Cannot generate a {device} tensor from a generator of type {gen_device_type}.") - - # make sure generator list of length 1 is treated like a non-list - if isinstance(generator, list) and len(generator) == 1: - generator = generator[0] - - if isinstance(generator, list): - shape = (1,) + shape[1:] - latents = [ - torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype, layout=layout) - for i in range(batch_size) - ] - latents = torch.cat(latents, dim=0).to(device) - else: - latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device) - - return latents - - -def is_compiled_module(module): - """Check whether the module was compiled with torch.compile()""" - if is_torch_version("<", "2.0.0") or not hasattr(torch, "_dynamo"): - return False - return isinstance(module, torch._dynamo.eval_frame.OptimizedModule) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_config.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_config.py deleted file mode 100644 index 246dd3bf9e537f341bfdae04d83dea400d3cafb9..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_config.py +++ /dev/null @@ -1,288 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -import tempfile -import unittest - -from diffusers import ( - DDIMScheduler, - DDPMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - PNDMScheduler, - logging, -) -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.utils.testing_utils import CaptureLogger - - -class SampleObject(ConfigMixin): - config_name = "config.json" - - @register_to_config - def __init__( - self, - a=2, - b=5, - c=(2, 5), - d="for diffusion", - e=[1, 3], - ): - pass - - -class SampleObject2(ConfigMixin): - config_name = "config.json" - - @register_to_config - def __init__( - self, - a=2, - b=5, - c=(2, 5), - d="for diffusion", - f=[1, 3], - ): - pass - - -class SampleObject3(ConfigMixin): - config_name = "config.json" - - @register_to_config - def __init__( - self, - a=2, - b=5, - c=(2, 5), - d="for diffusion", - e=[1, 3], - f=[1, 3], - ): - pass - - -class SampleObject4(ConfigMixin): - config_name = "config.json" - - @register_to_config - def __init__( - self, - a=2, - b=5, - c=(2, 5), - d="for diffusion", - e=[1, 5], - f=[5, 4], - ): - pass - - -class ConfigTester(unittest.TestCase): - def test_load_not_from_mixin(self): - with self.assertRaises(ValueError): - ConfigMixin.load_config("dummy_path") - - def test_register_to_config(self): - obj = SampleObject() - config = obj.config - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == (2, 5) - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - # init ignore private arguments - obj = SampleObject(_name_or_path="lalala") - config = obj.config - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == (2, 5) - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - # can override default - obj = SampleObject(c=6) - config = obj.config - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == 6 - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - # can use positional arguments. 
- obj = SampleObject(1, c=6) - config = obj.config - assert config["a"] == 1 - assert config["b"] == 5 - assert config["c"] == 6 - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - def test_save_load(self): - obj = SampleObject() - config = obj.config - - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == (2, 5) - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - with tempfile.TemporaryDirectory() as tmpdirname: - obj.save_config(tmpdirname) - new_obj = SampleObject.from_config(SampleObject.load_config(tmpdirname)) - new_config = new_obj.config - - # unfreeze configs - config = dict(config) - new_config = dict(new_config) - - assert config.pop("c") == (2, 5) # instantiated as tuple - assert new_config.pop("c") == [2, 5] # saved & loaded as list because of json - config.pop("_use_default_values") - assert config == new_config - - def test_load_ddim_from_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - # 30 for warning - logger.setLevel(30) - - with CaptureLogger(logger) as cap_logger: - ddim = DDIMScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert ddim.__class__ == DDIMScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_load_euler_from_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - # 30 for warning - logger.setLevel(30) - - with CaptureLogger(logger) as cap_logger: - euler = EulerDiscreteScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert euler.__class__ == EulerDiscreteScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_load_euler_ancestral_from_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - # 30 for warning - logger.setLevel(30) - - with CaptureLogger(logger) as cap_logger: - euler = EulerAncestralDiscreteScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert euler.__class__ == EulerAncestralDiscreteScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_load_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - # 30 for warning - logger.setLevel(30) - - with CaptureLogger(logger) as cap_logger: - pndm = PNDMScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert pndm.__class__ == PNDMScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_overwrite_config_on_load(self): - logger = logging.get_logger("diffusers.configuration_utils") - # 30 for warning - logger.setLevel(30) - - with CaptureLogger(logger) as cap_logger: - ddpm = DDPMScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", - subfolder="scheduler", - prediction_type="sample", - beta_end=8, - ) - - with CaptureLogger(logger) as cap_logger_2: - ddpm_2 = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256", beta_start=88) - - assert ddpm.__class__ == DDPMScheduler - assert ddpm.config.prediction_type == "sample" - assert ddpm.config.beta_end == 8 - assert ddpm_2.config.beta_start == 88 - - # no warning should be thrown - assert cap_logger.out == "" - assert cap_logger_2.out == "" - - def test_load_dpmsolver(self): - logger = logging.get_logger("diffusers.configuration_utils") - # 30 for warning - logger.setLevel(30) - - with CaptureLogger(logger) as 
cap_logger: - dpm = DPMSolverMultistepScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert dpm.__class__ == DPMSolverMultistepScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_use_default_values(self): - # let's first save a config that should be in the form - # a=2, - # b=5, - # c=(2, 5), - # d="for diffusion", - # e=[1, 3], - - config = SampleObject() - - config_dict = {k: v for k, v in config.config.items() if not k.startswith("_")} - - # make sure that default config has all keys in `_use_default_values` - assert set(config_dict.keys()) == set(config.config._use_default_values) - - with tempfile.TemporaryDirectory() as tmpdirname: - config.save_config(tmpdirname) - - # now loading it with SampleObject2 should put f into `_use_default_values` - config = SampleObject2.from_config(tmpdirname) - - assert "f" in config._use_default_values - assert config.f == [1, 3] - - # now loading the config, should **NOT** use [1, 3] for `f`, but the default [1, 4] value - # **BECAUSE** it is part of `config._use_default_values` - new_config = SampleObject4.from_config(config.config) - assert new_config.f == [5, 4] - - config.config._use_default_values.pop() - new_config_2 = SampleObject4.from_config(config.config) - assert new_config_2.f == [1, 3] - - # Nevertheless "e" should still be correctly loaded to [1, 3] from SampleObject2 instead of defaulting to [1, 5] - assert new_config_2.e == [1, 3] diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py deleted file mode 100644 index baa4a5affc9b3ead0080d993b14f0d00392c2de5..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = 'mask_rcnn_r50_fpg_crop640_50e_coco.py' - -model = dict( - neck=dict(out_channels=128, inter_channels=128), - rpn_head=dict(in_channels=128), - roi_head=dict( - bbox_roi_extractor=dict(out_channels=128), - bbox_head=dict(in_channels=128), - mask_roi_extractor=dict(out_channels=128), - mask_head=dict(in_channels=128))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py deleted file mode 100644 index 6e1c5d0cadfb9fb3a4f8645e28a8e67fc499e900..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict( - norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/README.md deleted file mode 100644 index 66f3dc286f066c50ef54e98de036ef0f5056e246..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/README.md +++ /dev/null @@ -1,69 +0,0 @@ -# Pyramid Scene Parsing Network - -## Introduction - - - -```latex -@inproceedings{zhao2017pspnet, - title={Pyramid Scene Parsing Network}, - author={Zhao, Hengshuang and Shi, Jianping and Qi, Xiaojuan and Wang, Xiaogang and Jia, Jiaya}, - 
booktitle={CVPR}, - year={2017} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | --------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| PSPNet | R-50-D8 | 512x1024 | 40000 | 6.1 | 4.07 | 77.85 | 79.18 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338.log.json) | -| PSPNet | R-101-D8 | 512x1024 | 40000 | 9.6 | 2.68 | 78.34 | 79.74 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_40k_cityscapes/pspnet_r101-d8_512x1024_40k_cityscapes_20200604_232751-467e7cf4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_40k_cityscapes/pspnet_r101-d8_512x1024_40k_cityscapes_20200604_232751.log.json) | -| PSPNet | R-50-D8 | 769x769 | 40000 | 6.9 | 1.76 | 78.26 | 79.88 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_40k_cityscapes/pspnet_r50-d8_769x769_40k_cityscapes_20200606_112725-86638686.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_40k_cityscapes/pspnet_r50-d8_769x769_40k_cityscapes_20200606_112725.log.json) | -| PSPNet | R-101-D8 | 769x769 | 40000 | 10.9 | 1.15 | 79.08 | 80.28 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_40k_cityscapes/pspnet_r101-d8_769x769_40k_cityscapes_20200606_112753-61c6f5be.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_40k_cityscapes/pspnet_r101-d8_769x769_40k_cityscapes_20200606_112753.log.json) | -| PSPNet | R-18-D8 | 512x1024 | 80000 | 1.7 | 15.71 | 74.87 | 76.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes/pspnet_r18-d8_512x1024_80k_cityscapes_20201225_021458-09ffa746.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes/pspnet_r18-d8_512x1024_80k_cityscapes-20201225_021458.log.json) | -| PSPNet | R-50-D8 | 512x1024 | 80000 | - | - | 78.55 | 79.79 | 
[config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_80k_cityscapes/pspnet_r50-d8_512x1024_80k_cityscapes_20200606_112131-2376f12b.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_80k_cityscapes/pspnet_r50-d8_512x1024_80k_cityscapes_20200606_112131.log.json) | -| PSPNet | R-101-D8 | 512x1024 | 80000 | - | - | 79.76 | 81.01 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_80k_cityscapes/pspnet_r101-d8_512x1024_80k_cityscapes_20200606_112211-e1e1100f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x1024_80k_cityscapes/pspnet_r101-d8_512x1024_80k_cityscapes_20200606_112211.log.json) | -| PSPNet | R-18-D8 | 769x769 | 80000 | 1.9 | 6.20 | 75.90 | 77.86 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_769x769_80k_cityscapes/pspnet_r18-d8_769x769_80k_cityscapes_20201225_021458-3deefc62.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18-d8_769x769_80k_cityscapes/pspnet_r18-d8_769x769_80k_cityscapes-20201225_021458.log.json) | -| PSPNet | R-50-D8 | 769x769 | 80000 | - | - | 79.59 | 80.69 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_80k_cityscapes/pspnet_r50-d8_769x769_80k_cityscapes_20200606_210121-5ccf03dd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_769x769_80k_cityscapes/pspnet_r50-d8_769x769_80k_cityscapes_20200606_210121.log.json) | -| PSPNet | R-101-D8 | 769x769 | 80000 | - | - | 79.77 | 81.06 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_80k_cityscapes/pspnet_r101-d8_769x769_80k_cityscapes_20200606_225055-dba412fa.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_769x769_80k_cityscapes/pspnet_r101-d8_769x769_80k_cityscapes_20200606_225055.log.json) | -| PSPNet | R-18b-D8 | 512x1024 | 80000 | 1.5 | 16.28 | 74.23 | 75.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_512x1024_80k_cityscapes/pspnet_r18b-d8_512x1024_80k_cityscapes_20201226_063116-26928a60.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_512x1024_80k_cityscapes/pspnet_r18b-d8_512x1024_80k_cityscapes-20201226_063116.log.json) | -| PSPNet | R-50b-D8 | 512x1024 | 80000 | 6.0 | 4.30 | 78.22 | 79.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_512x1024_80k_cityscapes/pspnet_r50b-d8_512x1024_80k_cityscapes_20201225_094315-6344287a.pth) | 
[log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_512x1024_80k_cityscapes/pspnet_r50b-d8_512x1024_80k_cityscapes-20201225_094315.log.json) | -| PSPNet | R-101b-D8 | 512x1024 | 80000 | 9.5 | 2.76 | 79.69 | 80.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes/pspnet_r101b-d8_512x1024_80k_cityscapes_20201226_170012-3a4d38ab.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes/pspnet_r101b-d8_512x1024_80k_cityscapes-20201226_170012.log.json) | -| PSPNet | R-18b-D8 | 769x769 | 80000 | 1.7 | 6.41 | 74.92 | 76.90 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes/pspnet_r18b-d8_769x769_80k_cityscapes_20201226_080942-bf98d186.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes/pspnet_r18b-d8_769x769_80k_cityscapes-20201226_080942.log.json) | -| PSPNet | R-50b-D8 | 769x769 | 80000 | 6.8 | 1.88 | 78.50 | 79.96 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_769x769_80k_cityscapes/pspnet_r50b-d8_769x769_80k_cityscapes_20201225_094316-4c643cf6.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50b-d8_769x769_80k_cityscapes/pspnet_r50b-d8_769x769_80k_cityscapes-20201225_094316.log.json) | -| PSPNet | R-101b-D8 | 769x769 | 80000 | 10.8 | 1.17 | 78.87 | 80.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes/pspnet_r101b-d8_769x769_80k_cityscapes_20201226_171823-f0e7c293.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes/pspnet_r101b-d8_769x769_80k_cityscapes-20201226_171823.log.json) | - -### ADE20K - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| PSPNet | R-50-D8 | 512x512 | 80000 | 8.5 | 23.53 | 41.13 | 41.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_80k_ade20k/pspnet_r50-d8_512x512_80k_ade20k_20200615_014128-15a8b914.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_80k_ade20k/pspnet_r50-d8_512x512_80k_ade20k_20200615_014128.log.json) | -| PSPNet | 
R-101-D8 | 512x512 | 80000 | 12 | 15.30 | 43.57 | 44.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_80k_ade20k/pspnet_r101-d8_512x512_80k_ade20k_20200614_031423-b6e782f0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_80k_ade20k/pspnet_r101-d8_512x512_80k_ade20k_20200614_031423.log.json) | -| PSPNet | R-50-D8 | 512x512 | 160000 | - | - | 42.48 | 43.44 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_160k_ade20k/pspnet_r50-d8_512x512_160k_ade20k_20200615_184358-1890b0bd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_160k_ade20k/pspnet_r50-d8_512x512_160k_ade20k_20200615_184358.log.json) | -| PSPNet | R-101-D8 | 512x512 | 160000 | - | - | 44.39 | 45.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_160k_ade20k/pspnet_r101-d8_512x512_160k_ade20k_20200615_100650-967c316f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_160k_ade20k/pspnet_r101-d8_512x512_160k_ade20k_20200615_100650.log.json) | - -### Pascal VOC 2012 + Aug - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| PSPNet | R-50-D8 | 512x512 | 20000 | 6.1 | 23.59 | 76.78 | 77.61 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_20k_voc12aug/pspnet_r50-d8_512x512_20k_voc12aug_20200617_101958-ed5dfbd9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_20k_voc12aug/pspnet_r50-d8_512x512_20k_voc12aug_20200617_101958.log.json) | -| PSPNet | R-101-D8 | 512x512 | 20000 | 9.6 | 15.02 | 78.47 | 79.25 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_20k_voc12aug/pspnet_r101-d8_512x512_20k_voc12aug_20200617_102003-4aef3c9a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_20k_voc12aug/pspnet_r101-d8_512x512_20k_voc12aug_20200617_102003.log.json) | -| PSPNet | R-50-D8 | 512x512 | 40000 | - | - | 77.29 | 78.48 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r50-d8_512x512_40k_voc12aug.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_40k_voc12aug/pspnet_r50-d8_512x512_40k_voc12aug_20200613_161222-ae9c1b8c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x512_40k_voc12aug/pspnet_r50-d8_512x512_40k_voc12aug_20200613_161222.log.json) | -| PSPNet | R-101-D8 | 512x512 | 40000 | - | - | 78.52 | 79.57 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_40k_voc12aug/pspnet_r101-d8_512x512_40k_voc12aug_20200613_161222-bc933b18.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_512x512_40k_voc12aug/pspnet_r101-d8_512x512_40k_voc12aug_20200613_161222.log.json) | - -### Pascal Context - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| PSPNet | R-101-D8 | 480x480 | 40000 | 8.8 | 9.68 | 46.60 | 47.78 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context/pspnet_r101-d8_480x480_40k_pascal_context_20200911_211210-bf0f5d7c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context/pspnet_r101-d8_480x480_40k_pascal_context-20200911_211210.log.json) | -| PSPNet | R-101-D8 | 480x480 | 80000 | - | - | 46.03 | 47.15 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_80k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context/pspnet_r101-d8_480x480_80k_pascal_context_20200911_190530-c86d6233.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context/pspnet_r101-d8_480x480_80k_pascal_context-20200911_190530.log.json) | - -### Pascal Context 59 - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| PSPNet | R-101-D8 | 480x480 | 40000 | - | - | 52.02 | 53.54 | 
[config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context_59/pspnet_r101-d8_480x480_40k_pascal_context_59_20210416_114524-86d44cd4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_40k_pascal_context_59/pspnet_r101-d8_480x480_40k_pascal_context_59-20210416_114524.log.json) | -| PSPNet | R-101-D8 | 480x480 | 80000 | - | - | 52.47 | 53.99 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet/pspnet_r101-d8_480x480_80k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context_59/pspnet_r101-d8_480x480_80k_pascal_context_59_20210416_114418-fa6caaa2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r101-d8_480x480_80k_pascal_context_59/pspnet_r101-d8_480x480_80k_pascal_context_59-20210416_114418.log.json) | diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddpm.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddpm.py deleted file mode 100644 index f71a44af48c8cba8e97849b7e6813b3e6f9fe83c..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,1797 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager, nullcontext -from functools import partial -import itertools -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only -from omegaconf import ListConfig - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler - - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - 
cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - make_it_fit=False, - ucg_training=None, - reset_ema=False, - reset_num_ema_updates=False, - ): - super().__init__() - assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? - self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - self.make_it_fit = make_it_fit - if reset_ema: assert exists(ckpt_path) - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - if reset_ema: - assert self.use_ema - print(f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - else: - self.register_buffer('logvar', logvar) - - self.ucg_training = ucg_training or dict() - if self.ucg_training: - self.ucg_prng = np.random.RandomState() - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. 
- betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. 
* 1 - torch.Tensor(alphas_cumprod)) - elif self.parameterization == "v": - lvlb_weights = torch.ones_like(self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))) - else: - raise NotImplementedError("mu not supported") - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - @torch.no_grad() - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - if self.make_it_fit: - n_params = len([name for name, _ in - itertools.chain(self.named_parameters(), - self.named_buffers())]) - for name, param in tqdm( - itertools.chain(self.named_parameters(), - self.named_buffers()), - desc="Fitting old weights to new weights", - total=n_params - ): - if not name in sd: - continue - old_shape = sd[name].shape - new_shape = param.shape - assert len(old_shape) == len(new_shape) - if len(new_shape) > 2: - # we only modify first two axes - assert new_shape[2:] == old_shape[2:] - # assumes first axis corresponds to output dim - if not new_shape == old_shape: - new_param = param.clone() - old_param = sd[name] - if len(new_shape) == 1: - for i in range(new_param.shape[0]): - new_param[i] = old_param[i % old_shape[0]] - elif len(new_shape) >= 2: - for i in range(new_param.shape[0]): - for j in range(new_param.shape[1]): - new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]] - - n_used_old = torch.ones(old_shape[1]) - for j in range(new_param.shape[1]): - n_used_old[j % old_shape[1]] += 1 - n_used_new = torch.zeros(new_shape[1]) - for j in range(new_param.shape[1]): - n_used_new[j] = n_used_old[j % old_shape[1]] - - n_used_new = n_used_new[None, :] - while len(n_used_new.shape) < len(new_shape): - n_used_new = n_used_new.unsqueeze(-1) - new_param /= n_used_new - - sd[name] = new_param - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys:\n {missing}") - if len(unexpected) > 0: - print(f"\nUnexpected Keys:\n {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. 
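- Concretely, mean = sqrt(alphas_cumprod[t]) * x_start and variance = 1 - alphas_cumprod[t],
- read from the sqrt_alphas_cumprod / alphas_cumprod buffers registered in register_schedule.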
- """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def predict_start_from_z_and_v(self, x_t, t, v): - # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v - ) - - def predict_eps_from_z_and_v(self, x_t, t, v): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) 
- - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def get_v(self, x, noise, t): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x - ) - - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t 
= torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - for k in self.ucg_training: - p = self.ucg_training[k]["p"] - val = self.ucg_training[k]["val"] - if val is None: - val = "" - for i in range(len(batch[k])): - if self.ucg_prng.choice(2, p=[1 - p, p]): - batch[k][i] = val - - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - - -class LatentDiffusion(DDPM): - """main class""" - - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - 
concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - force_null_conditioning=False, - *args, **kwargs): - self.force_null_conditioning = force_null_conditioning - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning: - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - reset_ema = kwargs.pop("reset_ema", False) - reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - if reset_ema: - assert self.use_ema - print( - f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. 
/ z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = 
torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None, return_x=False): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None and not self.force_null_conditioning: 
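- # decide which batch entry supplies the conditioning; fall back to cond_stage_key
- # when no explicit cond_key is passed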
- if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox', "txt"]: - xc = batch[cond_key] - elif cond_key in ['class_label', 'cls']: - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_x: - out.extend([x]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def apply_model(self, x_noisy, t, cond, return_ids=False): - if isinstance(cond, dict): - # hybrid case, cond is expected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. 
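- Concretely, this is KL(q(x_T | x_0) || N(0, I)) evaluated at t = num_timesteps - 1,
- averaged over dimensions and divided by log(2) to convert nats to bits.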
- """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None, **kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs): - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, - shape, cond, verbose=False, **kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True, **kwargs) - - return samples, intermediates - - @torch.no_grad() - def get_unconditional_conditioning(self, batch_size, null_label=None): - if null_label is not None: - xc = null_label - if isinstance(xc, ListConfig): - xc = list(xc) - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - if hasattr(xc, "to"): - xc = xc.to(self.device) - c = self.get_learned_conditioning(xc) - else: - if 
self.cond_stage_key in ["class_label", "cls"]: - xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device) - return self.get_learned_conditioning(xc) - else: - raise NotImplementedError("todo") - if isinstance(c, list): # in case the encoder gives us a list - for i in range(len(c)): - c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device) - else: - c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device) - return c - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', "cls"]: - try: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - except KeyError: - # probably no "human_label" in batch - pass - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - quantize_denoised=True) - # samples, 
z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if unconditional_guidance_scale > 1.0: - uc = self.get_unconditional_conditioning(N, unconditional_guidance_label) - if self.model.conditioning_key == "crossattn-adm": - uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] - with ema_scope("Plotting Inpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - mask = 1. - mask - with ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
- return x - - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False) - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None): - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - if not self.sequential_cross_attn: - cc = torch.cat(c_crossattn, 1) - else: - cc = c_crossattn - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'hybrid-adm': - assert c_adm is not None - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc, y=c_adm) - elif self.conditioning_key == 'crossattn-adm': - assert c_adm is not None - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc, y=c_adm) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class LatentUpscaleDiffusion(LatentDiffusion): - def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs): - super().__init__(*args, **kwargs) - # assumes that neither the cond_stage nor the low_scale_model contain trainable params - assert not self.cond_stage_trainable - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - self.noise_level_key = noise_level_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False): - if not log_mode: - z, c = super().get_input(batch, k, force_c_encode=True, bs=bs) - else: - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - x_low = batch[self.low_scale_key][:bs] - x_low = rearrange(x_low, 'b h w c -> b c h w') - x_low = x_low.to(memory_format=torch.contiguous_format).float() - zx, noise_level = self.low_scale_model(x_low) - if self.noise_level_key is not None: - # get noise level from batch instead, e.g. 
when extracting a custom noise level for bsr - raise NotImplementedError('TODO') - - all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level} - if log_mode: - # TODO: maybe disable if too expensive - x_low_rec = self.low_scale_model.decode(zx) - return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level - return z, all_conds - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True, - unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N, - log_mode=True) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - log["x_lr"] = x_low - log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label) - # TODO explore better "unconditional" choices for the other keys - # maybe guide away from empty text label and highest noise level and maximally degraded zx? - uc = dict() - for k in c: - if k == "c_crossattn": - assert isinstance(c[k], list) and len(c[k]) == 1 - uc[k] = [uc_tmp] - elif k == "c_adm": # todo: only run with text-based guidance? 
- assert isinstance(c[k], torch.Tensor) - #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level - uc[k] = c[k] - elif isinstance(c[k], list): - uc[k] = [c[k][i] for i in range(len(c[k]))] - else: - uc[k] = c[k] - - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - return log - - -class LatentFinetuneDiffusion(LatentDiffusion): - """ - Basis for different finetunas, such as inpainting or depth2image - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys: tuple, - finetune_keys=("model.diffusion_model.input_blocks.0.0.weight", - "model_ema.diffusion_modelinput_blocks00weight" - ), - keep_finetune_dims=4, - # if model was trained without concat mode before and we would like to keep these channels - c_concat_log_start=None, # to log reconstruction of c_concat codes - c_concat_log_end=None, - *args, **kwargs - ): - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", list()) - super().__init__(*args, **kwargs) - self.finetune_keys = finetune_keys - self.concat_keys = concat_keys - self.keep_dims = keep_finetune_dims - self.c_concat_log_start = c_concat_log_start - self.c_concat_log_end = c_concat_log_end - if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint' - if exists(ckpt_path): - self.init_from_ckpt(ckpt_path, ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - - # make it explicit, finetune by including extra input channels - if exists(self.finetune_keys) and k in self.finetune_keys: - new_entry = None - for name, param in self.named_parameters(): - if name in self.finetune_keys: - print( - f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only") - new_entry = torch.zeros_like(param) # zero init - assert exists(new_entry), 'did not find matching parameter to modify' - new_entry[:, :self.keep_dims, ...] 
= sd[k] - sd[k] = new_entry - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True) - c_cat, c = c["c_concat"][0], c["c_crossattn"][0] - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if not (self.c_concat_log_start is None and self.c_concat_log_end is None): - log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end]) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label) - uc_cat = c_cat - uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": 
[c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc_full, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - return log - - -class LatentInpaintDiffusion(LatentFinetuneDiffusion): - """ - can either run as pure inpainting model (only concat mode) or with mixed conditionings, - e.g. mask as concat and text via cross-attn. - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys=("mask", "masked_image"), - masked_image_key="masked_image", - *args, **kwargs - ): - super().__init__(concat_keys, *args, **kwargs) - self.masked_image_key = masked_image_key - assert self.masked_image_key in concat_keys - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - c_cat = list() - for ck in self.concat_keys: - cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - bchw = z.shape - if ck != self.masked_image_key: - cc = torch.nn.functional.interpolate(cc, size=bchw[-2:]) - else: - cc = self.get_first_stage_encoding(self.encode_first_stage(cc)) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs) - log["masked_image"] = rearrange(args[0]["masked_image"], - 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - return log - - -class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion): - """ - condition on monocular depth estimation - """ - - def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.depth_model = instantiate_from_config(depth_stage_config) - self.depth_stage_key = concat_keys[0] - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - c_cat = list() - for ck in self.concat_keys: - cc = batch[ck] - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - cc = self.depth_model(cc) - cc = torch.nn.functional.interpolate( - cc, - size=z.shape[2:], - mode="bicubic", - align_corners=False, - ) - - depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3], - keepdim=True) - cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1. 
- c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - depth = self.depth_model(args[0][self.depth_stage_key]) - depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \ - torch.amax(depth, dim=[1, 2, 3], keepdim=True) - log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1. - return log - - -class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion): - """ - condition on low-res image (and optionally on some spatial noise augmentation) - """ - def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None, - low_scale_config=None, low_scale_key=None, *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.reshuffle_patch_size = reshuffle_patch_size - self.low_scale_model = None - if low_scale_config is not None: - print("Initializing a low-scale model") - assert exists(low_scale_key) - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - # optionally make spatial noise_level here - c_cat = list() - noise_level = None - for ck in self.concat_keys: - cc = batch[ck] - cc = rearrange(cc, 'b h w c -> b c h w') - if exists(self.reshuffle_patch_size): - assert isinstance(self.reshuffle_patch_size, int) - cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w', - p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size) - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - if exists(self.low_scale_model) and ck == self.low_scale_key: - cc, noise_level = self.low_scale_model(cc) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - if exists(noise_level): - all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level} - else: - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w') - return log diff --git a/spaces/Arthur678/vits-uma-genshin-honkai/README.md b/spaces/Arthur678/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 1c0aa069bfd980b6b45bb2bf62ff74bd9b0b61c2..0000000000000000000000000000000000000000 --- a/spaces/Arthur678/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: ikechan8370/vits-uma-genshin-honkai ---- diff --git 
a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/ansitowin32_test.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/ansitowin32_test.py deleted file mode 100644 index 91ca551f97b4576c680711e826a1855fb944c872..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/ansitowin32_test.py +++ /dev/null @@ -1,294 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -from io import StringIO, TextIOWrapper -from unittest import TestCase, main -try: - from contextlib import ExitStack -except ImportError: - # python 2 - from contextlib2 import ExitStack - -try: - from unittest.mock import MagicMock, Mock, patch -except ImportError: - from mock import MagicMock, Mock, patch - -from ..ansitowin32 import AnsiToWin32, StreamWrapper -from ..win32 import ENABLE_VIRTUAL_TERMINAL_PROCESSING -from .utils import osname - - -class StreamWrapperTest(TestCase): - - def testIsAProxy(self): - mockStream = Mock() - wrapper = StreamWrapper(mockStream, None) - self.assertTrue( wrapper.random_attr is mockStream.random_attr ) - - def testDelegatesWrite(self): - mockStream = Mock() - mockConverter = Mock() - wrapper = StreamWrapper(mockStream, mockConverter) - wrapper.write('hello') - self.assertTrue(mockConverter.write.call_args, (('hello',), {})) - - def testDelegatesContext(self): - mockConverter = Mock() - s = StringIO() - with StreamWrapper(s, mockConverter) as fp: - fp.write(u'hello') - self.assertTrue(s.closed) - - def testProxyNoContextManager(self): - mockStream = MagicMock() - mockStream.__enter__.side_effect = AttributeError() - mockConverter = Mock() - with self.assertRaises(AttributeError) as excinfo: - with StreamWrapper(mockStream, mockConverter) as wrapper: - wrapper.write('hello') - - def test_closed_shouldnt_raise_on_closed_stream(self): - stream = StringIO() - stream.close() - wrapper = StreamWrapper(stream, None) - self.assertEqual(wrapper.closed, True) - - def test_closed_shouldnt_raise_on_detached_stream(self): - stream = TextIOWrapper(StringIO()) - stream.detach() - wrapper = StreamWrapper(stream, None) - self.assertEqual(wrapper.closed, True) - -class AnsiToWin32Test(TestCase): - - def testInit(self): - mockStdout = Mock() - auto = Mock() - stream = AnsiToWin32(mockStdout, autoreset=auto) - self.assertEqual(stream.wrapped, mockStdout) - self.assertEqual(stream.autoreset, auto) - - @patch('colorama.ansitowin32.winterm', None) - @patch('colorama.ansitowin32.winapi_test', lambda *_: True) - def testStripIsTrueOnWindows(self): - with osname('nt'): - mockStdout = Mock() - stream = AnsiToWin32(mockStdout) - self.assertTrue(stream.strip) - - def testStripIsFalseOffWindows(self): - with osname('posix'): - mockStdout = Mock(closed=False) - stream = AnsiToWin32(mockStdout) - self.assertFalse(stream.strip) - - def testWriteStripsAnsi(self): - mockStdout = Mock() - stream = AnsiToWin32(mockStdout) - stream.wrapped = Mock() - stream.write_and_convert = Mock() - stream.strip = True - - stream.write('abc') - - self.assertFalse(stream.wrapped.write.called) - self.assertEqual(stream.write_and_convert.call_args, (('abc',), {})) - - def testWriteDoesNotStripAnsi(self): - mockStdout = Mock() - stream = AnsiToWin32(mockStdout) - stream.wrapped = Mock() - stream.write_and_convert = Mock() - stream.strip = False - stream.convert = False - - stream.write('abc') - - 
self.assertFalse(stream.write_and_convert.called) - self.assertEqual(stream.wrapped.write.call_args, (('abc',), {})) - - def assert_autoresets(self, convert, autoreset=True): - stream = AnsiToWin32(Mock()) - stream.convert = convert - stream.reset_all = Mock() - stream.autoreset = autoreset - stream.winterm = Mock() - - stream.write('abc') - - self.assertEqual(stream.reset_all.called, autoreset) - - def testWriteAutoresets(self): - self.assert_autoresets(convert=True) - self.assert_autoresets(convert=False) - self.assert_autoresets(convert=True, autoreset=False) - self.assert_autoresets(convert=False, autoreset=False) - - def testWriteAndConvertWritesPlainText(self): - stream = AnsiToWin32(Mock()) - stream.write_and_convert( 'abc' ) - self.assertEqual( stream.wrapped.write.call_args, (('abc',), {}) ) - - def testWriteAndConvertStripsAllValidAnsi(self): - stream = AnsiToWin32(Mock()) - stream.call_win32 = Mock() - data = [ - 'abc\033[mdef', - 'abc\033[0mdef', - 'abc\033[2mdef', - 'abc\033[02mdef', - 'abc\033[002mdef', - 'abc\033[40mdef', - 'abc\033[040mdef', - 'abc\033[0;1mdef', - 'abc\033[40;50mdef', - 'abc\033[50;30;40mdef', - 'abc\033[Adef', - 'abc\033[0Gdef', - 'abc\033[1;20;128Hdef', - ] - for datum in data: - stream.wrapped.write.reset_mock() - stream.write_and_convert( datum ) - self.assertEqual( - [args[0] for args in stream.wrapped.write.call_args_list], - [ ('abc',), ('def',) ] - ) - - def testWriteAndConvertSkipsEmptySnippets(self): - stream = AnsiToWin32(Mock()) - stream.call_win32 = Mock() - stream.write_and_convert( '\033[40m\033[41m' ) - self.assertFalse( stream.wrapped.write.called ) - - def testWriteAndConvertCallsWin32WithParamsAndCommand(self): - stream = AnsiToWin32(Mock()) - stream.convert = True - stream.call_win32 = Mock() - stream.extract_params = Mock(return_value='params') - data = { - 'abc\033[adef': ('a', 'params'), - 'abc\033[;;bdef': ('b', 'params'), - 'abc\033[0cdef': ('c', 'params'), - 'abc\033[;;0;;Gdef': ('G', 'params'), - 'abc\033[1;20;128Hdef': ('H', 'params'), - } - for datum, expected in data.items(): - stream.call_win32.reset_mock() - stream.write_and_convert( datum ) - self.assertEqual( stream.call_win32.call_args[0], expected ) - - def test_reset_all_shouldnt_raise_on_closed_orig_stdout(self): - stream = StringIO() - converter = AnsiToWin32(stream) - stream.close() - - converter.reset_all() - - def test_wrap_shouldnt_raise_on_closed_orig_stdout(self): - stream = StringIO() - stream.close() - with \ - patch("colorama.ansitowin32.os.name", "nt"), \ - patch("colorama.ansitowin32.winapi_test", lambda: True): - converter = AnsiToWin32(stream) - self.assertTrue(converter.strip) - self.assertFalse(converter.convert) - - def test_wrap_shouldnt_raise_on_missing_closed_attr(self): - with \ - patch("colorama.ansitowin32.os.name", "nt"), \ - patch("colorama.ansitowin32.winapi_test", lambda: True): - converter = AnsiToWin32(object()) - self.assertTrue(converter.strip) - self.assertFalse(converter.convert) - - def testExtractParams(self): - stream = AnsiToWin32(Mock()) - data = { - '': (0,), - ';;': (0,), - '2': (2,), - ';;002;;': (2,), - '0;1': (0, 1), - ';;003;;456;;': (3, 456), - '11;22;33;44;55': (11, 22, 33, 44, 55), - } - for datum, expected in data.items(): - self.assertEqual(stream.extract_params('m', datum), expected) - - def testCallWin32UsesLookup(self): - listener = Mock() - stream = AnsiToWin32(listener) - stream.win32_calls = { - 1: (lambda *_, **__: listener(11),), - 2: (lambda *_, **__: listener(22),), - 3: (lambda *_, **__: listener(33),), - } - 
stream.call_win32('m', (3, 1, 99, 2)) - self.assertEqual( - [a[0][0] for a in listener.call_args_list], - [33, 11, 22] ) - - def test_osc_codes(self): - mockStdout = Mock() - stream = AnsiToWin32(mockStdout, convert=True) - with patch('colorama.ansitowin32.winterm') as winterm: - data = [ - '\033]0\x07', # missing arguments - '\033]0;foo\x08', # wrong OSC command - '\033]0;colorama_test_title\x07', # should work - '\033]1;colorama_test_title\x07', # wrong set command - '\033]2;colorama_test_title\x07', # should work - '\033]' + ';' * 64 + '\x08', # see issue #247 - ] - for code in data: - stream.write(code) - self.assertEqual(winterm.set_title.call_count, 2) - - def test_native_windows_ansi(self): - with ExitStack() as stack: - def p(a, b): - stack.enter_context(patch(a, b, create=True)) - # Pretend to be on Windows - p("colorama.ansitowin32.os.name", "nt") - p("colorama.ansitowin32.winapi_test", lambda: True) - p("colorama.win32.winapi_test", lambda: True) - p("colorama.winterm.win32.windll", "non-None") - p("colorama.winterm.get_osfhandle", lambda _: 1234) - - # Pretend that our mock stream has native ANSI support - p( - "colorama.winterm.win32.GetConsoleMode", - lambda _: ENABLE_VIRTUAL_TERMINAL_PROCESSING, - ) - SetConsoleMode = Mock() - p("colorama.winterm.win32.SetConsoleMode", SetConsoleMode) - - stdout = Mock() - stdout.closed = False - stdout.isatty.return_value = True - stdout.fileno.return_value = 1 - - # Our fake console says it has native vt support, so AnsiToWin32 should - # enable that support and do nothing else. - stream = AnsiToWin32(stdout) - SetConsoleMode.assert_called_with(1234, ENABLE_VIRTUAL_TERMINAL_PROCESSING) - self.assertFalse(stream.strip) - self.assertFalse(stream.convert) - self.assertFalse(stream.should_wrap()) - - # Now let's pretend we're on an old Windows console, that doesn't have - # native ANSI support. - p("colorama.winterm.win32.GetConsoleMode", lambda _: 0) - SetConsoleMode = Mock() - p("colorama.winterm.win32.SetConsoleMode", SetConsoleMode) - - stream = AnsiToWin32(stdout) - SetConsoleMode.assert_called_with(1234, ENABLE_VIRTUAL_TERMINAL_PROCESSING) - self.assertTrue(stream.strip) - self.assertTrue(stream.convert) - self.assertTrue(stream.should_wrap()) - - -if __name__ == '__main__': - main() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/repr.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/repr.py deleted file mode 100644 index f284bcafa6ab2e1c9ae51be54107836e68cfb0d3..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/repr.py +++ /dev/null @@ -1,149 +0,0 @@ -import inspect -from functools import partial -from typing import ( - Any, - Callable, - Iterable, - List, - Optional, - Tuple, - Type, - TypeVar, - Union, - overload, -) - -T = TypeVar("T") - - -Result = Iterable[Union[Any, Tuple[Any], Tuple[str, Any], Tuple[str, Any, Any]]] -RichReprResult = Result - - -class ReprError(Exception): - """An error occurred when attempting to build a repr.""" - - -@overload -def auto(cls: Optional[Type[T]]) -> Type[T]: - ... - - -@overload -def auto(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]: - ... 
- - -def auto( - cls: Optional[Type[T]] = None, *, angular: Optional[bool] = None -) -> Union[Type[T], Callable[[Type[T]], Type[T]]]: - """Class decorator to create __repr__ from __rich_repr__""" - - def do_replace(cls: Type[T], angular: Optional[bool] = None) -> Type[T]: - def auto_repr(self: T) -> str: - """Create repr string from __rich_repr__""" - repr_str: List[str] = [] - append = repr_str.append - - angular: bool = getattr(self.__rich_repr__, "angular", False) # type: ignore[attr-defined] - for arg in self.__rich_repr__(): # type: ignore[attr-defined] - if isinstance(arg, tuple): - if len(arg) == 1: - append(repr(arg[0])) - else: - key, value, *default = arg - if key is None: - append(repr(value)) - else: - if default and default[0] == value: - continue - append(f"{key}={value!r}") - else: - append(repr(arg)) - if angular: - return f"<{self.__class__.__name__} {' '.join(repr_str)}>" - else: - return f"{self.__class__.__name__}({', '.join(repr_str)})" - - def auto_rich_repr(self: Type[T]) -> Result: - """Auto generate __rich_rep__ from signature of __init__""" - try: - signature = inspect.signature(self.__init__) - for name, param in signature.parameters.items(): - if param.kind == param.POSITIONAL_ONLY: - yield getattr(self, name) - elif param.kind in ( - param.POSITIONAL_OR_KEYWORD, - param.KEYWORD_ONLY, - ): - if param.default == param.empty: - yield getattr(self, param.name) - else: - yield param.name, getattr(self, param.name), param.default - except Exception as error: - raise ReprError( - f"Failed to auto generate __rich_repr__; {error}" - ) from None - - if not hasattr(cls, "__rich_repr__"): - auto_rich_repr.__doc__ = "Build a rich repr" - cls.__rich_repr__ = auto_rich_repr # type: ignore[attr-defined] - - auto_repr.__doc__ = "Return repr(self)" - cls.__repr__ = auto_repr # type: ignore[assignment] - if angular is not None: - cls.__rich_repr__.angular = angular # type: ignore[attr-defined] - return cls - - if cls is None: - return partial(do_replace, angular=angular) - else: - return do_replace(cls, angular=angular) - - -@overload -def rich_repr(cls: Optional[Type[T]]) -> Type[T]: - ... - - -@overload -def rich_repr(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]: - ... 
- - -def rich_repr( - cls: Optional[Type[T]] = None, *, angular: bool = False -) -> Union[Type[T], Callable[[Type[T]], Type[T]]]: - if cls is None: - return auto(angular=angular) - else: - return auto(cls) - - -if __name__ == "__main__": - - @auto - class Foo: - def __rich_repr__(self) -> Result: - yield "foo" - yield "bar", {"shopping": ["eggs", "ham", "pineapple"]} - yield "buy", "hand sanitizer" - - foo = Foo() - from pip._vendor.rich.console import Console - - console = Console() - - console.rule("Standard repr") - console.print(foo) - - console.print(foo, width=60) - console.print(foo, width=30) - - console.rule("Angular repr") - Foo.__rich_repr__.angular = True # type: ignore[attr-defined] - - console.print(foo) - - console.print(foo, width=60) - console.print(foo, width=30) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py deleted file mode 100644 index 729c2dd5217528d7b3f9220cc2c7981f95c6f6e1..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py +++ /dev/null @@ -1,572 +0,0 @@ -"""distutils._msvccompiler - -Contains MSVCCompiler, an implementation of the abstract CCompiler class -for Microsoft Visual Studio 2015. - -The module is compatible with VS 2015 and later. You can find legacy support -for older versions in distutils.msvc9compiler and distutils.msvccompiler. -""" - -# Written by Perry Stoll -# hacked by Robin Becker and Thomas Heller to do a better job of -# finding DevStudio (through the registry) -# ported to VS 2005 and VS 2008 by Christian Heimes -# ported to VS 2015 by Steve Dower - -import os -import subprocess -import contextlib -import warnings -import unittest.mock as mock - -with contextlib.suppress(ImportError): - import winreg - -from distutils.errors import ( - DistutilsExecError, - DistutilsPlatformError, - CompileError, - LibError, - LinkError, -) -from distutils.ccompiler import CCompiler, gen_lib_options -from distutils import log -from distutils.util import get_platform - -from itertools import count - - -def _find_vc2015(): - try: - key = winreg.OpenKeyEx( - winreg.HKEY_LOCAL_MACHINE, - r"Software\Microsoft\VisualStudio\SxS\VC7", - access=winreg.KEY_READ | winreg.KEY_WOW64_32KEY, - ) - except OSError: - log.debug("Visual C++ is not registered") - return None, None - - best_version = 0 - best_dir = None - with key: - for i in count(): - try: - v, vc_dir, vt = winreg.EnumValue(key, i) - except OSError: - break - if v and vt == winreg.REG_SZ and os.path.isdir(vc_dir): - try: - version = int(float(v)) - except (ValueError, TypeError): - continue - if version >= 14 and version > best_version: - best_version, best_dir = version, vc_dir - return best_version, best_dir - - -def _find_vc2017(): - """Returns "15, path" based on the result of invoking vswhere.exe - If no install is found, returns "None, None" - - The version is returned to avoid unnecessarily changing the function - result. It may be ignored when the path is not None. - - If vswhere.exe is not available, by definition, VS 2017 is not - installed. 
- """ - root = os.environ.get("ProgramFiles(x86)") or os.environ.get("ProgramFiles") - if not root: - return None, None - - try: - path = subprocess.check_output( - [ - os.path.join( - root, "Microsoft Visual Studio", "Installer", "vswhere.exe" - ), - "-latest", - "-prerelease", - "-requires", - "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", - "-property", - "installationPath", - "-products", - "*", - ], - encoding="mbcs", - errors="strict", - ).strip() - except (subprocess.CalledProcessError, OSError, UnicodeDecodeError): - return None, None - - path = os.path.join(path, "VC", "Auxiliary", "Build") - if os.path.isdir(path): - return 15, path - - return None, None - - -PLAT_SPEC_TO_RUNTIME = { - 'x86': 'x86', - 'x86_amd64': 'x64', - 'x86_arm': 'arm', - 'x86_arm64': 'arm64', -} - - -def _find_vcvarsall(plat_spec): - # bpo-38597: Removed vcruntime return value - _, best_dir = _find_vc2017() - - if not best_dir: - best_version, best_dir = _find_vc2015() - - if not best_dir: - log.debug("No suitable Visual C++ version found") - return None, None - - vcvarsall = os.path.join(best_dir, "vcvarsall.bat") - if not os.path.isfile(vcvarsall): - log.debug("%s cannot be found", vcvarsall) - return None, None - - return vcvarsall, None - - -def _get_vc_env(plat_spec): - if os.getenv("DISTUTILS_USE_SDK"): - return {key.lower(): value for key, value in os.environ.items()} - - vcvarsall, _ = _find_vcvarsall(plat_spec) - if not vcvarsall: - raise DistutilsPlatformError("Unable to find vcvarsall.bat") - - try: - out = subprocess.check_output( - f'cmd /u /c "{vcvarsall}" {plat_spec} && set', - stderr=subprocess.STDOUT, - ).decode('utf-16le', errors='replace') - except subprocess.CalledProcessError as exc: - log.error(exc.output) - raise DistutilsPlatformError(f"Error executing {exc.cmd}") - - env = { - key.lower(): value - for key, _, value in (line.partition('=') for line in out.splitlines()) - if key and value - } - - return env - - -def _find_exe(exe, paths=None): - """Return path to an MSVC executable program. - - Tries to find the program in several places: first, one of the - MSVC program search paths from the registry; next, the directories - in the PATH environment variable. If any of those work, return an - absolute path that is known to exist. If none of them work, just - return the original program name, 'exe'. - """ - if not paths: - paths = os.getenv('path').split(os.pathsep) - for p in paths: - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - return exe - - -# A map keyed by get_platform() return values to values accepted by -# 'vcvarsall.bat'. Always cross-compile from x86 to work with the -# lighter-weight MSVC installs that do not include native 64-bit tools. -PLAT_TO_VCVARS = { - 'win32': 'x86', - 'win-amd64': 'x86_amd64', - 'win-arm32': 'x86_arm', - 'win-arm64': 'x86_arm64', -} - - -class MSVCCompiler(CCompiler): - """Concrete class that implements an interface to Microsoft Visual C++, - as defined by the CCompiler abstract class.""" - - compiler_type = 'msvc' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. 
- executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - _rc_extensions = ['.rc'] - _mc_extensions = ['.mc'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. - src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions - res_extension = '.res' - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - super().__init__(verbose, dry_run, force) - # target platform (.plat_name is consistent with 'bdist') - self.plat_name = None - self.initialized = False - - @classmethod - def _configure(cls, vc_env): - """ - Set class-level include/lib dirs. - """ - cls.include_dirs = cls._parse_path(vc_env.get('include', '')) - cls.library_dirs = cls._parse_path(vc_env.get('lib', '')) - - @staticmethod - def _parse_path(val): - return [dir.rstrip(os.sep) for dir in val.split(os.pathsep) if dir] - - def initialize(self, plat_name=None): - # multi-init means we would need to check platform same each time... - assert not self.initialized, "don't init multiple times" - if plat_name is None: - plat_name = get_platform() - # sanity check for platforms to prevent obscure errors later. - if plat_name not in PLAT_TO_VCVARS: - raise DistutilsPlatformError( - f"--plat-name must be one of {tuple(PLAT_TO_VCVARS)}" - ) - - # Get the vcvarsall.bat spec for the requested platform. - plat_spec = PLAT_TO_VCVARS[plat_name] - - vc_env = _get_vc_env(plat_spec) - if not vc_env: - raise DistutilsPlatformError( - "Unable to find a compatible " "Visual Studio installation." - ) - self._configure(vc_env) - - self._paths = vc_env.get('path', '') - paths = self._paths.split(os.pathsep) - self.cc = _find_exe("cl.exe", paths) - self.linker = _find_exe("link.exe", paths) - self.lib = _find_exe("lib.exe", paths) - self.rc = _find_exe("rc.exe", paths) # resource compiler - self.mc = _find_exe("mc.exe", paths) # message compiler - self.mt = _find_exe("mt.exe", paths) # message compiler - - self.preprocess_options = None - # bpo-38597: Always compile with dynamic linking - # Future releases of Python 3.x will include all past - # versions of vcruntime*.dll for compatibility. 
- self.compile_options = ['/nologo', '/O2', '/W3', '/GL', '/DNDEBUG', '/MD'] - - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/Zi', - '/W3', - '/D_DEBUG', - ] - - ldflags = ['/nologo', '/INCREMENTAL:NO', '/LTCG'] - - ldflags_debug = ['/nologo', '/INCREMENTAL:NO', '/LTCG', '/DEBUG:FULL'] - - self.ldflags_exe = [*ldflags, '/MANIFEST:EMBED,ID=1'] - self.ldflags_exe_debug = [*ldflags_debug, '/MANIFEST:EMBED,ID=1'] - self.ldflags_shared = [ - *ldflags, - '/DLL', - '/MANIFEST:EMBED,ID=2', - '/MANIFESTUAC:NO', - ] - self.ldflags_shared_debug = [ - *ldflags_debug, - '/DLL', - '/MANIFEST:EMBED,ID=2', - '/MANIFESTUAC:NO', - ] - self.ldflags_static = [*ldflags] - self.ldflags_static_debug = [*ldflags_debug] - - self._ldflags = { - (CCompiler.EXECUTABLE, None): self.ldflags_exe, - (CCompiler.EXECUTABLE, False): self.ldflags_exe, - (CCompiler.EXECUTABLE, True): self.ldflags_exe_debug, - (CCompiler.SHARED_OBJECT, None): self.ldflags_shared, - (CCompiler.SHARED_OBJECT, False): self.ldflags_shared, - (CCompiler.SHARED_OBJECT, True): self.ldflags_shared_debug, - (CCompiler.SHARED_LIBRARY, None): self.ldflags_static, - (CCompiler.SHARED_LIBRARY, False): self.ldflags_static, - (CCompiler.SHARED_LIBRARY, True): self.ldflags_static_debug, - } - - self.initialized = True - - # -- Worker methods ------------------------------------------------ - - @property - def out_extensions(self): - return { - **super().out_extensions, - **{ - ext: self.res_extension - for ext in self._rc_extensions + self._mc_extensions - }, - } - - def compile( # noqa: C901 - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - - if not self.initialized: - self.initialize() - compile_info = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, extra_postargs - ) - macros, objects, extra_postargs, pp_opts, build = compile_info - - compile_opts = extra_preargs or [] - compile_opts.append('/c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - add_cpp_opts = False - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - if debug: - # pass the full pathname to MSVC in debug mode, - # this allows the debugger to find the source file - # without asking the user to browse for it - src = os.path.abspath(src) - - if ext in self._c_extensions: - input_opt = "/Tc" + src - elif ext in self._cpp_extensions: - input_opt = "/Tp" + src - add_cpp_opts = True - elif ext in self._rc_extensions: - # compile .RC to .RES file - input_opt = src - output_opt = "/fo" + obj - try: - self.spawn([self.rc] + pp_opts + [output_opt, input_opt]) - except DistutilsExecError as msg: - raise CompileError(msg) - continue - elif ext in self._mc_extensions: - # Compile .MC to .RC file to .RES file. - # * '-h dir' specifies the directory for the - # generated include file - # * '-r dir' specifies the target directory of the - # generated RC file and the binary message resource - # it includes - # - # For now (since there are no options to change this), - # we use the source-directory for the include file and - # the build directory for the RC file and message - # resources. This works at least for win32all. 
- h_dir = os.path.dirname(src) - rc_dir = os.path.dirname(obj) - try: - # first compile .MC to .RC and .H file - self.spawn([self.mc, '-h', h_dir, '-r', rc_dir, src]) - base, _ = os.path.splitext(os.path.basename(src)) - rc_file = os.path.join(rc_dir, base + '.rc') - # then compile .RC to .RES file - self.spawn([self.rc, "/fo" + obj, rc_file]) - - except DistutilsExecError as msg: - raise CompileError(msg) - continue - else: - # how to handle this file? - raise CompileError(f"Don't know how to compile {src} to {obj}") - - args = [self.cc] + compile_opts + pp_opts - if add_cpp_opts: - args.append('/EHsc') - args.append(input_opt) - args.append("/Fo" + obj) - args.extend(extra_postargs) - - try: - self.spawn(args) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - - if not self.initialized: - self.initialize() - objects, output_dir = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = objects + ['/OUT:' + output_filename] - if debug: - pass # XXX what goes here? - try: - log.debug('Executing "%s" %s', self.lib, ' '.join(lib_args)) - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - - if not self.initialized: - self.initialize() - objects, output_dir = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - libraries, library_dirs, runtime_library_dirs = fixed_args - - if runtime_library_dirs: - self.warn( - "I don't know what to do with 'runtime_library_dirs': " - + str(runtime_library_dirs) - ) - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - ldflags = self._ldflags[target_desc, debug] - - export_opts = ["/EXPORT:" + sym for sym in (export_symbols or [])] - - ld_args = ( - ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename] - ) - - # The MSVC linker generates .lib and .exp files, which cannot be - # suppressed by any linker switches. The .lib files may even be - # needed! Make sure they are generated in the temporary build - # directory. Since they have different names for debug and release - # builds, they can go into the same directory. 
- build_temp = os.path.dirname(objects[0]) - if export_symbols is not None: - (dll_name, dll_ext) = os.path.splitext( - os.path.basename(output_filename) - ) - implib_file = os.path.join(build_temp, self.library_filename(dll_name)) - ld_args.append('/IMPLIB:' + implib_file) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - output_dir = os.path.dirname(os.path.abspath(output_filename)) - self.mkpath(output_dir) - try: - log.debug('Executing "%s" %s', self.linker, ' '.join(ld_args)) - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def spawn(self, cmd): - env = dict(os.environ, PATH=self._paths) - with self._fallback_spawn(cmd, env) as fallback: - return super().spawn(cmd, env=env) - return fallback.value - - @contextlib.contextmanager - def _fallback_spawn(self, cmd, env): - """ - Discovered in pypa/distutils#15, some tools monkeypatch the compiler, - so the 'env' kwarg causes a TypeError. Detect this condition and - restore the legacy, unsafe behavior. - """ - bag = type('Bag', (), {})() - try: - yield bag - except TypeError as exc: - if "unexpected keyword argument 'env'" not in str(exc): - raise - else: - return - warnings.warn("Fallback spawn triggered. Please update distutils monkeypatch.") - with mock.patch.dict('os.environ', env): - bag.value = super().spawn(cmd) - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "/LIBPATH:" + dir - - def runtime_library_dir_option(self, dir): - raise DistutilsPlatformError( - "don't know how to set runtime library search path for MSVC" - ) - - def library_option(self, lib): - return self.library_filename(lib) - - def find_library_file(self, dirs, lib, debug=0): - # Prefer a debugging library if found (and requested), but deal - # with it if we don't have one. - if debug: - try_names = [lib + "_d", lib] - else: - try_names = [lib] - for dir in dirs: - for name in try_names: - libfile = os.path.join(dir, self.library_filename(name)) - if os.path.isfile(libfile): - return libfile - else: - # Oops, didn't find it in *any* of 'dirs' - return None diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/custom_solver.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/custom_solver.py deleted file mode 100644 index 87f7d61ed756acf9326b7ab4097a989a9e6c7532..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/custom_solver.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -# Modified by Jialian Wu from https://github.com/facebookresearch/Detic/blob/main/detic/custom_solver.py -import itertools -from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union -import torch - -from detectron2.config import CfgNode - -from detectron2.solver.build import maybe_add_gradient_clipping - - -def build_custom_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer: - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - optimizer_type = cfg.SOLVER.OPTIMIZER - - for key, value in model.named_parameters(recurse=True): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - lr = cfg.SOLVER.BASE_LR - weight_decay = cfg.SOLVER.WEIGHT_DECAY - - if cfg.SOLVER.VIT_LAYER_DECAY: - lr = lr * get_vit_lr_decay_rate(key, cfg.SOLVER.VIT_LAYER_DECAY_RATE, cfg.MODEL.VIT_LAYERS) - - param = {"params": [value], "lr": lr} - if optimizer_type != 'ADAMW': - param['weight_decay'] = weight_decay - params += [param] - - def maybe_add_full_model_gradient_clipping(optim): # optim: the optimizer class - # detectron2 doesn't have full model gradient clipping now - clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE - enable = ( - cfg.SOLVER.CLIP_GRADIENTS.ENABLED - and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model" - and clip_norm_val > 0.0 - ) - - class FullModelGradientClippingOptimizer(optim): - def step(self, closure=None): - all_params = itertools.chain(*[x["params"] for x in self.param_groups]) - torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val) - super().step(closure=closure) - - return FullModelGradientClippingOptimizer if enable else optim - - - if optimizer_type == 'SGD': - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.SGD)( - params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM, - nesterov=cfg.SOLVER.NESTEROV - ) - elif optimizer_type == 'ADAMW': - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)( - params, cfg.SOLVER.BASE_LR, - weight_decay=cfg.SOLVER.WEIGHT_DECAY - ) - else: - raise NotImplementedError(f"no optimizer type {optimizer_type}") - if not cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model": - optimizer = maybe_add_gradient_clipping(cfg, optimizer) - return optimizer - - -def get_vit_lr_decay_rate(name, lr_decay_rate=1.0, num_layers=12): - """ - Calculate lr decay rate for different ViT blocks. - Args: - name (string): parameter name. - lr_decay_rate (float): base lr decay rate. - num_layers (int): number of ViT blocks. - - Returns: - lr decay rate for the given parameter. - """ - layer_id = num_layers + 1 - if name.startswith("backbone"): - if ".pos_embed" in name or ".patch_embed" in name: - layer_id = 0 - elif ".blocks." in name and ".residual." not in name: - layer_id = int(name[name.find(".blocks.") :].split(".")[2]) + 1 - - return lr_decay_rate ** (num_layers + 1 - layer_id) \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py deleted file mode 100644 index 9fb3462b7f9abf6feaa499976bfed526ebd17e31..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import contextlib -import io -import itertools -import json -import logging -import numpy as np -import os -import tempfile -from collections import OrderedDict -from typing import Optional -from PIL import Image -from tabulate import tabulate - -from detectron2.data import MetadataCatalog -from detectron2.utils import comm -from detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - -logger = logging.getLogger(__name__) - - -class COCOPanopticEvaluator(DatasetEvaluator): - """ - Evaluate Panoptic Quality metrics on COCO using PanopticAPI. - It saves panoptic segmentation prediction in `output_dir` - - It contains a synchronize call and has to be called from all workers. - """ - - def __init__(self, dataset_name: str, output_dir: Optional[str] = None): - """ - Args: - dataset_name: name of the dataset - output_dir: output directory to save results for evaluation. - """ - self._metadata = MetadataCatalog.get(dataset_name) - self._thing_contiguous_id_to_dataset_id = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - self._stuff_contiguous_id_to_dataset_id = { - v: k for k, v in self._metadata.stuff_dataset_id_to_contiguous_id.items() - } - - self._output_dir = output_dir - if self._output_dir is not None: - PathManager.mkdirs(self._output_dir) - - def reset(self): - self._predictions = [] - - def _convert_category_id(self, segment_info): - isthing = segment_info.pop("isthing", None) - if isthing is None: - # the model produces panoptic category id directly. No more conversion needed - return segment_info - if isthing is True: - segment_info["category_id"] = self._thing_contiguous_id_to_dataset_id[ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = self._stuff_contiguous_id_to_dataset_id[ - segment_info["category_id"] - ] - return segment_info - - def process(self, inputs, outputs): - from panopticapi.utils import id2rgb - - for input, output in zip(inputs, outputs): - panoptic_img, segments_info = output["panoptic_seg"] - panoptic_img = panoptic_img.cpu().numpy() - if segments_info is None: - # If "segments_info" is None, we assume "panoptic_img" is a - # H*W int32 image storing the panoptic_id in the format of - # category_id * label_divisor + instance_id. We reserve -1 for - # VOID label, and add 1 to panoptic_img since the official - # evaluation script uses 0 for VOID label. - label_divisor = self._metadata.label_divisor - segments_info = [] - for panoptic_label in np.unique(panoptic_img): - if panoptic_label == -1: - # VOID region. - continue - pred_class = panoptic_label // label_divisor - isthing = ( - pred_class in self._metadata.thing_dataset_id_to_contiguous_id.values() - ) - segments_info.append( - { - "id": int(panoptic_label) + 1, - "category_id": int(pred_class), - "isthing": bool(isthing), - } - ) - # Official evaluation script uses 0 for VOID label. 
- panoptic_img += 1 - - file_name = os.path.basename(input["file_name"]) - file_name_png = os.path.splitext(file_name)[0] + ".png" - with io.BytesIO() as out: - Image.fromarray(id2rgb(panoptic_img)).save(out, format="PNG") - segments_info = [self._convert_category_id(x) for x in segments_info] - self._predictions.append( - { - "image_id": input["image_id"], - "file_name": file_name_png, - "png_string": out.getvalue(), - "segments_info": segments_info, - } - ) - - def evaluate(self): - comm.synchronize() - - self._predictions = comm.gather(self._predictions) - self._predictions = list(itertools.chain(*self._predictions)) - if not comm.is_main_process(): - return - - # PanopticApi requires local files - gt_json = PathManager.get_local_path(self._metadata.panoptic_json) - gt_folder = PathManager.get_local_path(self._metadata.panoptic_root) - - with tempfile.TemporaryDirectory(prefix="panoptic_eval") as pred_dir: - logger.info("Writing all panoptic predictions to {} ...".format(pred_dir)) - for p in self._predictions: - with open(os.path.join(pred_dir, p["file_name"]), "wb") as f: - f.write(p.pop("png_string")) - - with open(gt_json, "r") as f: - json_data = json.load(f) - json_data["annotations"] = self._predictions - - output_dir = self._output_dir or pred_dir - predictions_json = os.path.join(output_dir, "predictions.json") - with PathManager.open(predictions_json, "w") as f: - f.write(json.dumps(json_data)) - - from panopticapi.evaluation import pq_compute - - with contextlib.redirect_stdout(io.StringIO()): - pq_res = pq_compute( - gt_json, - PathManager.get_local_path(predictions_json), - gt_folder=gt_folder, - pred_folder=pred_dir, - ) - - res = {} - res["PQ"] = 100 * pq_res["All"]["pq"] - res["SQ"] = 100 * pq_res["All"]["sq"] - res["RQ"] = 100 * pq_res["All"]["rq"] - res["PQ_th"] = 100 * pq_res["Things"]["pq"] - res["SQ_th"] = 100 * pq_res["Things"]["sq"] - res["RQ_th"] = 100 * pq_res["Things"]["rq"] - res["PQ_st"] = 100 * pq_res["Stuff"]["pq"] - res["SQ_st"] = 100 * pq_res["Stuff"]["sq"] - res["RQ_st"] = 100 * pq_res["Stuff"]["rq"] - - results = OrderedDict({"panoptic_seg": res}) - _print_panoptic_results(pq_res) - - return results - - -def _print_panoptic_results(pq_res): - headers = ["", "PQ", "SQ", "RQ", "#categories"] - data = [] - for name in ["All", "Things", "Stuff"]: - row = [name] + [pq_res[name][k] * 100 for k in ["pq", "sq", "rq"]] + [pq_res[name]["n"]] - data.append(row) - table = tabulate( - data, headers=headers, tablefmt="pipe", floatfmt=".3f", stralign="center", numalign="center" - ) - logger.info("Panoptic Evaluation Results:\n" + table) - - -if __name__ == "__main__": - from detectron2.utils.logger import setup_logger - - logger = setup_logger() - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--gt-json") - parser.add_argument("--gt-dir") - parser.add_argument("--pred-json") - parser.add_argument("--pred-dir") - args = parser.parse_args() - - from panopticapi.evaluation import pq_compute - - with contextlib.redirect_stdout(io.StringIO()): - pq_res = pq_compute( - args.gt_json, args.pred_json, gt_folder=args.gt_dir, pred_folder=args.pred_dir - ) - _print_panoptic_results(pq_res) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py deleted file mode 100644 index 382048e533708dec3fabf89528564ebc2ad4c83f..0000000000000000000000000000000000000000 --- 
a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py +++ /dev/null @@ -1,268 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -import unittest -from unittest import mock -import torch -from PIL import Image, ImageOps -from torch.nn import functional as F - -from detectron2.config import get_cfg -from detectron2.data import detection_utils -from detectron2.data import transforms as T -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger(__name__) - - -def polygon_allclose(poly1, poly2): - """ - Test whether two polygons are the same. - Both arguments are nx2 numpy arrays. - """ - # ABCD and CDAB are the same polygon. So it's important to check after rolling - for k in range(len(poly1)): - rolled_poly1 = np.roll(poly1, k, axis=0) - if np.allclose(rolled_poly1, poly2): - return True - return False - - -class TestTransforms(unittest.TestCase): - def setUp(self): - setup_logger() - - def test_apply_rotated_boxes(self): - np.random.seed(125) - cfg = get_cfg() - is_train = True - augs = detection_utils.build_augmentation(cfg, is_train) - image = np.random.rand(200, 300) - image, transforms = T.apply_augmentations(augs, image) - image_shape = image.shape[:2] # h, w - assert image_shape == (800, 1200) - annotation = {"bbox": [179, 97, 62, 40, -56]} - - boxes = np.array([annotation["bbox"]], dtype=np.float64) # boxes.shape = (1, 5) - transformed_bbox = transforms.apply_rotated_box(boxes)[0] - - expected_bbox = np.array([484, 388, 248, 160, 56], dtype=np.float64) - err_msg = "transformed_bbox = {}, expected {}".format(transformed_bbox, expected_bbox) - assert np.allclose(transformed_bbox, expected_bbox), err_msg - - def test_resize_and_crop(self): - np.random.seed(125) - min_scale = 0.2 - max_scale = 2.0 - target_height = 1100 - target_width = 1000 - resize_aug = T.ResizeScale(min_scale, max_scale, target_height, target_width) - fixed_size_crop_aug = T.FixedSizeCrop((target_height, target_width)) - hflip_aug = T.RandomFlip() - augs = [resize_aug, fixed_size_crop_aug, hflip_aug] - original_image = np.random.rand(900, 800) - image, transforms = T.apply_augmentations(augs, original_image) - image_shape = image.shape[:2] # h, w - self.assertEqual((1100, 1000), image_shape) - - boxes = np.array( - [[91, 46, 144, 111], [523, 251, 614, 295]], - dtype=np.float64, - ) - transformed_bboxs = transforms.apply_box(boxes) - expected_bboxs = np.array( - [ - [895.42, 33.42666667, 933.91125, 80.66], - [554.0825, 182.39333333, 620.17125, 214.36666667], - ], - dtype=np.float64, - ) - err_msg = "transformed_bbox = {}, expected {}".format(transformed_bboxs, expected_bboxs) - self.assertTrue(np.allclose(transformed_bboxs, expected_bboxs), err_msg) - - polygon = np.array([[91, 46], [144, 46], [144, 111], [91, 111]]) - transformed_polygons = transforms.apply_polygons([polygon]) - expected_polygon = np.array([[934.0, 33.0], [934.0, 80.0], [896.0, 80.0], [896.0, 33.0]]) - self.assertEqual(1, len(transformed_polygons)) - err_msg = "transformed_polygon = {}, expected {}".format( - transformed_polygons[0], expected_polygon - ) - self.assertTrue(polygon_allclose(transformed_polygons[0], expected_polygon), err_msg) - - def test_apply_rotated_boxes_unequal_scaling_factor(self): - np.random.seed(125) - h, w = 400, 200 - newh, neww = 800, 800 - image = np.random.rand(h, w) - augs = [] - augs.append(T.Resize(shape=(newh, neww))) - image, transforms = T.apply_augmentations(augs, image) - 
image_shape = image.shape[:2] # h, w - assert image_shape == (newh, neww) - - boxes = np.array( - [ - [150, 100, 40, 20, 0], - [150, 100, 40, 20, 30], - [150, 100, 40, 20, 90], - [150, 100, 40, 20, -90], - ], - dtype=np.float64, - ) - transformed_boxes = transforms.apply_rotated_box(boxes) - - expected_bboxes = np.array( - [ - [600, 200, 160, 40, 0], - [600, 200, 144.22205102, 52.91502622, 49.10660535], - [600, 200, 80, 80, 90], - [600, 200, 80, 80, -90], - ], - dtype=np.float64, - ) - err_msg = "transformed_boxes = {}, expected {}".format(transformed_boxes, expected_bboxes) - assert np.allclose(transformed_boxes, expected_bboxes), err_msg - - def test_print_augmentation(self): - t = T.RandomCrop("relative", (100, 100)) - self.assertEqual(str(t), "RandomCrop(crop_type='relative', crop_size=(100, 100))") - - t0 = T.RandomFlip(prob=0.5) - self.assertEqual(str(t0), "RandomFlip(prob=0.5)") - - t1 = T.RandomFlip() - self.assertEqual(str(t1), "RandomFlip()") - - t = T.AugmentationList([t0, t1]) - self.assertEqual(str(t), f"AugmentationList[{t0}, {t1}]") - - def test_random_apply_prob_out_of_range_check(self): - test_probabilities = {0.0: True, 0.5: True, 1.0: True, -0.01: False, 1.01: False} - - for given_probability, is_valid in test_probabilities.items(): - if not is_valid: - self.assertRaises(AssertionError, T.RandomApply, None, prob=given_probability) - else: - T.RandomApply(T.NoOpTransform(), prob=given_probability) - - def test_random_apply_wrapping_aug_probability_occured_evaluation(self): - transform_mock = mock.MagicMock(name="MockTransform", spec=T.Augmentation) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - with mock.patch.object(random_apply, "_rand_range", return_value=0.0001): - transform = random_apply.get_transform(image_mock) - transform_mock.get_transform.assert_called_once_with(image_mock) - self.assertIsNot(transform, transform_mock) - - def test_random_apply_wrapping_std_transform_probability_occured_evaluation(self): - transform_mock = mock.MagicMock(name="MockTransform", spec=T.Transform) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - with mock.patch.object(random_apply, "_rand_range", return_value=0.0001): - transform = random_apply.get_transform(image_mock) - self.assertIs(transform, transform_mock) - - def test_random_apply_probability_not_occured_evaluation(self): - transform_mock = mock.MagicMock(name="MockTransform", spec=T.Augmentation) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - with mock.patch.object(random_apply, "_rand_range", return_value=0.9): - transform = random_apply.get_transform(image_mock) - transform_mock.get_transform.assert_not_called() - self.assertIsInstance(transform, T.NoOpTransform) - - def test_augmentation_input_args(self): - input_shape = (100, 100) - output_shape = (50, 50) - - # define two augmentations with different args - class TG1(T.Augmentation): - def get_transform(self, image, sem_seg): - return T.ResizeTransform( - input_shape[0], input_shape[1], output_shape[0], output_shape[1] - ) - - class TG2(T.Augmentation): - def get_transform(self, image): - assert image.shape[:2] == output_shape # check that TG1 is applied - return T.HFlipTransform(output_shape[1]) - - image = np.random.rand(*input_shape).astype("float32") - sem_seg = (np.random.rand(*input_shape) < 0.5).astype("uint8") - inputs = T.AugInput(image, sem_seg=sem_seg) # provide two args 
- tfms = inputs.apply_augmentations([TG1(), TG2()]) - self.assertIsInstance(tfms[0], T.ResizeTransform) - self.assertIsInstance(tfms[1], T.HFlipTransform) - self.assertTrue(inputs.image.shape[:2] == output_shape) - self.assertTrue(inputs.sem_seg.shape[:2] == output_shape) - - class TG3(T.Augmentation): - def get_transform(self, image, nonexist): - pass - - with self.assertRaises(AttributeError): - inputs.apply_augmentations([TG3()]) - - def test_augmentation_list(self): - input_shape = (100, 100) - image = np.random.rand(*input_shape).astype("float32") - sem_seg = (np.random.rand(*input_shape) < 0.5).astype("uint8") - inputs = T.AugInput(image, sem_seg=sem_seg) # provide two args - - augs = T.AugmentationList([T.RandomFlip(), T.Resize(20)]) - _ = T.AugmentationList([augs, T.Resize(30)])(inputs) - # 3 in latest fvcore (flattened transformlist), 2 in older - # self.assertEqual(len(tfms), 3) - - def test_color_transforms(self): - rand_img = np.random.random((100, 100, 3)) * 255 - rand_img = rand_img.astype("uint8") - - # Test no-op - noop_transform = T.ColorTransform(lambda img: img) - self.assertTrue(np.array_equal(rand_img, noop_transform.apply_image(rand_img))) - - # Test a ImageOps operation - magnitude = np.random.randint(0, 256) - solarize_transform = T.PILColorTransform(lambda img: ImageOps.solarize(img, magnitude)) - expected_img = ImageOps.solarize(Image.fromarray(rand_img), magnitude) - self.assertTrue(np.array_equal(expected_img, solarize_transform.apply_image(rand_img))) - - def test_resize_transform(self): - input_shapes = [(100, 100), (100, 100, 1), (100, 100, 3)] - output_shapes = [(200, 200), (200, 200, 1), (200, 200, 3)] - for in_shape, out_shape in zip(input_shapes, output_shapes): - in_img = np.random.randint(0, 255, size=in_shape, dtype=np.uint8) - tfm = T.ResizeTransform(in_shape[0], in_shape[1], out_shape[0], out_shape[1]) - out_img = tfm.apply_image(in_img) - self.assertEqual(out_img.shape, out_shape) - - def test_resize_shorted_edge_scriptable(self): - def f(image): - newh, neww = T.ResizeShortestEdge.get_output_shape( - image.shape[-2], image.shape[-1], 80, 133 - ) - return F.interpolate(image.unsqueeze(0), size=(newh, neww)) - - input = torch.randn(3, 10, 10) - script_f = torch.jit.script(f) - self.assertTrue(torch.allclose(f(input), script_f(input))) - - # generalize to new shapes - input = torch.randn(3, 8, 100) - self.assertTrue(torch.allclose(f(input), script_f(input))) - - def test_extent_transform(self): - input_shapes = [(100, 100), (100, 100, 1), (100, 100, 3)] - src_rect = (20, 20, 80, 80) - output_shapes = [(200, 200), (200, 200, 1), (200, 200, 3)] - for in_shape, out_shape in zip(input_shapes, output_shapes): - in_img = np.random.randint(0, 255, size=in_shape, dtype=np.uint8) - tfm = T.ExtentTransform(src_rect, out_shape[:2]) - out_img = tfm.apply_image(in_img) - self.assertTrue(out_img.shape == out_shape) diff --git a/spaces/Bart92/RVC_HF/infer/modules/uvr5/preprocess.py b/spaces/Bart92/RVC_HF/infer/modules/uvr5/preprocess.py deleted file mode 100644 index 19f11110ea822eeb140fb885c600536290a1adff..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/modules/uvr5/preprocess.py +++ /dev/null @@ -1,346 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch - -from infer.lib.uvr5_pack.lib_v5 import nets_61968KB as Nets -from infer.lib.uvr5_pack.lib_v5 import spec_utils -from infer.lib.uvr5_pack.lib_v5.model_param_init import ModelParameters 
-from infer.lib.uvr5_pack.lib_v5.nets_new import CascadedNet -from infer.lib.uvr5_pack.utils import inference - - -class AudioPre: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = Nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, 
self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class AudioPreDeEcho: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." 
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, 
"vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) diff --git a/spaces/Benson/text-generation/Examples/Descarga C.md b/spaces/Benson/text-generation/Examples/Descarga C.md deleted file mode 100644 index e9fdbc80e085176ab72680160addf28862a260d6..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descarga C.md +++ /dev/null @@ -1,91 +0,0 @@ - -

How to Download and Install C++ on Windows

-

C++ is a popular programming language that evolved from C and added object-oriented, generic, and functional features. It is designed for systems programming, embedded software, and large systems, with performance, efficiency, and flexibility as its goals. C++ supports object-oriented programming, which helps you modularize and maintain a program efficiently. C++ also has other features such as namespaces, operator overloading, error and exception handling, and a concepts library.

-

descarga c++


Downloadhttps://bltlly.com/2v6JC5



-

If you want to learn C++ or use it in your projects, you need a C++ compiler and an integrated development environment (IDE) installed on your computer. In this article, we will show you how to download and install C++ on Windows using Visual Studio, one of the most popular IDEs for C++ development. Visual Studio provides a complete set of tools for creating, debugging, testing, and deploying C++ applications.

-

Before you start installing Visual Studio and C++, make sure your computer meets the system requirements. You also need to apply the latest Windows updates, restart your computer, and free up disk space.

-

Step 1: Download the Visual Studio Installer

-

The first step is to download the Visual Studio Installer from the Microsoft website. The installer is a lightweight application that lets you choose and install the features you need for Visual Studio.

-

To download the installer, go to the Visual Studio download page and select the edition of Visual Studio you want. You can choose between the Community, Professional, and Enterprise editions. For this tutorial, we will use the Community edition, which is free for students, open-source contributors, and individual developers.

- -

Double-click the bootstrapper file to run it. If you get a User Account Control prompt, choose Yes to allow it. You will be asked to accept the Microsoft License Terms and the Microsoft Privacy Statement. Choose Continue.

-

Step 2: Choose Workloads for C++ Development

-

The installer will present you with a list of workloads, which are groups of related options for specific development areas. Support for C++ is now part of optional workloads that are not installed by default.

-

-

For C++ development, you need to select the Desktop development with C++ workload. This workload includes features such as:

- -

To select the Desktop development with C++ workload, check the box next to it. You can also expand the workload to see the optional components you can install or deselect. For example, you can choose to install support for Linux development with C++ or the Windows 10 SDK (10.0.19041.0).

-

After selecting the workload and components you want, click the Install button in the lower-right corner of the installer. The installer will show you the progress and status of the installation. Depending on your Internet speed and computer configuration, this may take some time.

-
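If you prefer an unattended setup, the same workload can also be added from the command line. This is a minimal sketch, assuming you have already downloaded the Community edition bootstrapper (vs_community.exe); the workload ID shown is the one Microsoft documents for Desktop development with C++, and the exact switches may vary between installer versions.

vs_community.exe --add Microsoft.VisualStudio.Workload.NativeDesktop --includeRecommended --passive --norestart

The --passive switch shows progress without prompting, while --includeRecommended pulls in the recommended optional components for the workload.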

Step 3: Install and Launch Visual Studio

-

When the installation is complete, you will see a message that says "Installation succeeded!" You can now launch Visual Studio by clicking the Launch button in the installer or by searching for it in the Start menu.

- -

After signing in, you will be asked to choose a color theme and a development settings profile. You can choose between the Light, Dark, and Blue themes, and between the General, C#, C++, Python, and Web development settings. For this tutorial, we will choose the Dark theme and the C++ development settings.

-

Visual Studio will open and show you a start page with several options. To create a new project, click the Create a new project button.

-

Step 4: Write and Run a Simple C++ Program

-

To write and run a simple C++ program, you need to create a project that contains your source files and other resources. A project also specifies how to build and run your program using various tools and settings.

-

To create a new project, follow these steps:

-
  1. In the Create a new project window, search for "C++" in the search box and select "Console App" from the list of templates. Click Next.
  2. In the Configure your new project window, enter a name for your project (such as HelloWorld) and choose a location to save it. You can also change other options such as the solution name, the target platform, and the language standard. Click Create.
  3. Visual Studio will create a new project and open it in the main window. You will see a Solution Explorer pane on the right side that shows the files and folders in your project. You will also see an editor pane that shows the source code of your main.cpp file.
-

The main.cpp file contains a simple C++ program that prints "Hello World!" to the console. The code looks like this:

-
#include <iostream>
using namespace std;

int main() {
    cout << "Hello World!\n";
}
-

To build and run your program, follow these steps:

-
  1. Click the Build menu and select Build Solution (or press Ctrl+Shift+B). This compiles your source code into an executable file using the MSVC compiler toolset.
  2. Click the Debug menu and select Start Without Debugging (or press Ctrl+F5). This runs the executable in a console window.
  3. You should see a message that says "Hello World!" in the console window. Press any key to close it.
-
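As a quick sanity check outside the IDE, the same file can be compiled from a Developer Command Prompt for VS using the MSVC command-line driver. This is a minimal sketch, assuming main.cpp is in the current directory:

cl /EHsc main.cpp
main.exe

The /EHsc switch enables the standard C++ exception-handling model, and cl produces main.exe next to the source file.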

Conclusion

-

In this article, we have shown you how to download and install C++ on Windows using Visual Studio. We have also shown you how to create, build, and run a simple C++ program using the Visual Studio tools.

-

C++ is a powerful and versatile programming language that can be used for many purposes. If you want to learn more about C++, you can check out some of these resources:

- -

What are some of the new features of C++20?

-

C++20 is the latest version of the C++ standard, published in 2020. It introduces many new features and improvements to the language, such as concepts, modules, coroutines, ranges, the three-way comparison (spaceship) operator, and std::format.

-
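As a small illustration of one of these features, here is a minimal sketch that uses a concept to constrain a template. It assumes a C++20-conforming compiler; with the MSVC toolset installed above you would build it with the /std:c++20 switch.

#include <concepts>
#include <iostream>

// Restrict the template to integral types; passing a double fails to compile.
template <std::integral T>
T twice(T value) {
    return value * 2;
}

int main() {
    std::cout << twice(21) << "\n";  // prints 42
    // std::cout << twice(1.5);      // error: double does not satisfy std::integral
}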

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk Mod De Netflix.md b/spaces/Benson/text-generation/Examples/Descargar Apk Mod De Netflix.md deleted file mode 100644 index e86b6ed682f816e158f4e07ea8cb1fc225a40960..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Apk Mod De Netflix.md +++ /dev/null @@ -1,154 +0,0 @@ - -

Download Netflix Mod APK: How to Watch Premium Content for Free

-

Netflix is one of the most popular streaming platforms in the world, offering a wide range of movies, TV shows, documentaries, and original content. However, not everyone can afford to pay for a Netflix subscription or access all the content available in different regions. That is why some people look for ways to download Netflix mod APKs, modified versions of the official app that let users watch premium content for free. But how do you download and install a Netflix mod APK on your device? And what are the risks and benefits of using such an app? In this article, we will answer these questions and more, so you can enjoy unlimited streaming without breaking the bank.

-

What is Netflix and why is it so popular?

-

Netflix is an American company that provides online streaming services for various types of media, such as movies, TV shows, documentaries, anime, and original productions. Netflix was founded in 1997 as a DVD rental service and later expanded into online streaming in 2007. Since then, it has grown into one of the largest and most influential entertainment companies in the world, with more than 200 million subscribers in over 190 countries.

-

descargar apk mod de netflix


Download Ziphttps://bltlly.com/2v6Luo



-

Features and benefits of Netflix

-

Some of the features and benefits that make Netflix so popular among users are:

- -

Netflix subscription plans and pricing

-

To access Netflix content, users must sign up for a subscription plan that suits their needs and budget. Netflix offers four subscription plans: Basic, Standard, Premium, and Ultra. The main differences between these plans are the number of screens that can be used simultaneously, the video quality (SD, HD, or 4K), and the availability of HDR and Dolby Vision. The following table shows the details of each plan:

| Plan     | Screens | Quality | HDR/Dolby Vision | Price (USD)  |
|----------|---------|---------|------------------|--------------|
| Basic    | 1       | SD      | No               | $8.99/month  |
| Standard | 2       | HD      | No               | $13.99/month |
| Premium  | 4       | 4K      | Yes              | $17.99/month |
| Ultra    | 4       | 4K+     | Yes              | $19.99/month |

Note that prices may vary depending on the user's country and region. Users can also opt for a free trial for a limited period before committing to a plan.

-

What is a mod APK and why would you need one?

-

A mod APK is a modified version of an Android application package (APK) file, the format used to distribute and install apps on Android devices. A mod APK can have features and functions that differ from the original app, such as removing ads, unlocking premium content, adding extra options, or improving performance. A mod APK is usually created by third-party developers or hackers who modify the source code of the original app.

-

Mod APK definition and advantages

- - -

Risks and challenges of using mod APKs

-

However, using a mod APK also comes with some risks and challenges, such as:

- -

Therefore, users should be careful and cautious when downloading and installing mod APKs, and only use them from trusted and reputable sources.

-

How to download and install a Netflix mod APK on your device

-

If you want to download and install a Netflix mod APK on your device, you will need to follow these steps:

-

Step 1: Find a reliable source for the mod APK file

- -

Step 2: Enable unknown sources in your device settings

-

The next step is to enable unknown sources in your device settings, which allows you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security or privacy, and turn on the option for unknown sources. You may also need to grant your browser or file manager permission to install apps from unknown sources.

-

-

Step 3: Download and install the mod APK file

-

The third step is to download and install the mod APK file on your device. You can do this by clicking the download link or button from the source you have chosen and waiting for the file to download to your device. Once the file has downloaded, open it with your file manager or browser, then tap Install. You may need to accept some permissions or warnings before installing the app.

-

Step 4: Launch the app and enjoy unlimited streaming

-

The final step is to launch the app and enjoy unlimited streaming of Netflix content for free. Open the app from your app drawer or home screen, then sign in with your email or Facebook account, or create a new account if you do not have one. You can then browse the categories and genres of content available on Netflix, or search for specific titles you want to watch. You can also adjust some settings and preferences within the app, such as video quality, language, and subtitles.

-

Comparing the Netflix mod APK and the official Netflix app

-

Comparison of functions and features

- -

However, a Netflix mod APK also has some drawbacks and limitations compared to the official Netflix app. For example, it may not have all the features and functions the official app has, such as downloading content for offline viewing, creating multiple profiles, or getting personalized recommendations. A Netflix mod APK may also contain bugs or errors that affect the app's performance or functionality. In addition, it may not be updated or supported regularly, which can cause compatibility problems or security risks.

-

Comparison of pros and cons

-

To sum up, here are some of the pros and cons of using a Netflix mod APK versus the official Netflix app:

| Netflix mod APK | Official Netflix app |
|-----------------|----------------------|
| Pros:           | Pros:                |
| Cons:           | Cons:                |

Conclusion and FAQs

- -

If you have any questions about Netflix mod APKs, you may find the answers in the following frequently asked questions:

-

Q: Is a Netflix mod APK legal?

-

A: No, a Netflix mod APK is not legal, as it violates the terms and conditions of the original app and its developer. Using a Netflix mod APK can result in legal action or penalties from Netflix or other authorities.

-

Q: Is a Netflix mod APK safe?

-

A: Not necessarily. A Netflix mod APK may not be safe, as it can expose the user's device and data to malware, viruses, spyware, or other malicious attacks that can damage the device or compromise the user's privacy and security. Users should always scan the mod APK file with antivirus software before installing it on their device.

-

Q: How can I update a Netflix mod APK?

-

A: To update a Netflix mod APK, users need to find a newer version of the mod APK file from a reliable source, then download and install it on their device. Users should also uninstall the old version of the mod APK before installing the new one.

-

Q: How can I uninstall a Netflix mod APK?

-

A: To uninstall a Netflix mod APK, users need to go to their device settings, then apps or applications, find and select the Netflix mod APK app, and tap Uninstall. Users should also delete any leftover files or folders related to the app from their device storage.

-

Q: Can I use a Netflix mod APK on other devices?

-

A: Yes, users can use a Netflix mod APK on other devices that support the Android operating system, such as smartphones, tablets, laptops, smart TVs, game consoles, and streaming devices. However, users should make sure the mod APK file is compatible with their device model and version before installing it.

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Ekelebe De J Martins.md b/spaces/Benson/text-generation/Examples/Descargar Ekelebe De J Martins.md deleted file mode 100644 index 9776fbf0eeba70b149a600c7989fd34e8e241c76..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Ekelebe De J Martins.md +++ /dev/null @@ -1,64 +0,0 @@ - -

How to Download Tommy J Pisa's Music

-

If you are a fan of Indonesian pop and dangdut music, you may have heard of Tommy J Pisa, a singer who rose to fame in the 1980s and 1990s. He is known for his melodious voice and romantic songs, such as "Dibatas Kota Ini", "Surat Untuk Kekasih", and "Biarkan Aku Menangis". His music has touched the hearts of many listeners and has become part of Indonesia's musical heritage.

-

descargar ekelebe de j martins


Download Ziphttps://bltlly.com/2v6K2z



-

But how can you download Tommy J Pisa's music and enjoy it on your devices? In this article, we will show you three ways to do it legally and ethically, without violating any copyright laws or harming the artist. We will also answer some frequently asked questions about Tommy J Pisa and his music.

-

Option 1: Buy his albums or songs from online music stores

-

The simplest way to download Tommy J Pisa's music is to buy his albums or songs from online music stores, such as iTunes, Amazon, or Google Play. By doing this, you will support the artist financially and get high-quality MP3 files that you can play on any device. You will also get access to the album artwork, lyrics, and other information.

-

To buy Tommy J Pisa's music online, you will need a credit card or a digital wallet, such as PayPal. You will also need to create an account with the online music store of your choice and download its app or software. Once you have done that, you can browse its catalog and search for Tommy J Pisa's albums or songs. You can preview the songs before buying them and then click the purchase button to complete the transaction. The songs will download to your device or cloud storage, and you can listen to them at any time.

-

Option 2: Stream his music from online platforms that allow offline listening

-

To stream Tommy J Pisa's music online, you will need an Internet connection and a subscription to the platform of your choice. Some platforms offer free trials or ad-supported plans, while others require a monthly or yearly fee. You will also need to download the platform's app or software and create an account. Once you have done that, you can search for Tommy J Pisa's music and add it to your library or playlist. You can then listen to it online or download it for offline listening by toggling the download button. The songs will be stored on your device or cloud storage, and you can listen to them at any time.

-

Option 3: Download his music from free and legal websites that offer his songs with his permission

-

The third way to download Tommy J Pisa's music is from free and legal websites that offer his songs with his permission. These websites are usually run by fans or independent labels that have obtained the rights to distribute his music for free. They may also offer other content related to Tommy J Pisa, such as videos, photos, or news.

-

To download Tommy J Pisa's music from these websites, you will need an Internet connection and a web browser. You will also need to find these websites by searching online or following links from social media or other sources. Some examples of these websites are:

| Website | Description |
|---------|-------------|
| [Akurama Records] | A record label based in Jakarta that has uploaded several Tommy J Pisa albums to YouTube. You can listen to them online or download them as MP3 files using a YouTube download tool. |
| [Tommy J Pisa Fans Club] | A website dedicated to Tommy J Pisa that has a collection of his songs, videos, photos, and news. You can listen to his songs online or download them as MP3 files by clicking the download link. |

However, you should be careful when downloading music from these websites, as some of them may contain viruses, malware, or spyware that can damage your device or compromise your privacy. You should also respect the artist's wishes and not share his music without his permission or use it for commercial purposes.

-

Conclusión

-

En conclusión, hay tres maneras de descargar música de Tommy J Pisa legal y éticamente: comprar sus álbumes o canciones de tiendas de música en línea, streaming de su música desde plataformas en línea que permiten escuchar fuera de línea, y descargar su música de sitios web libres y legales que ofrecen sus canciones con su permiso. Al hacerlo, podrás disfrutar de su música en tus dispositivos y apreciar su talento y contribución a la escena musical indonesia.

-

Here are some tips on how to enjoy his music:

- -

Frequently Asked Questions

-

Who is Tommy J Pisa?

-

Tommy J Pisa is an Indonesian singer who specializes in pop and dangdut music. He was born in Jakarta on December 22, 1960. He began his career as a street singer and later joined several bands before going solo. He has released more than 20 albums and has won several awards and honors for his music.

-

What is dangdut?

-

Dangdut is a genre of Indonesian popular music that blends Malay, Hindustani, and Arabic influences, typically driven by the rhythm of the gendang drum and the flute. Tommy J Pisa is known for singing both pop and dangdut songs.

-

What are some of Tommy J Pisa's most popular songs?

-

Some of Tommy J Pisa's most popular songs are:

-
1. "Dibatas Kota Ini" (At the Edge of This City), a song about a long-distance relationship that ends in tragedy.
2. "Surat Untuk Kekasih" (Letter to a Lover), a song about a man who writes a letter to his lover, who has left him for another man.
3. "Biarkan Aku Menangis" (Let Me Cry), a song about a man expressing his sadness and regret after losing his lover.
4. "Disini Dibatas Kota Ini" (Here at the Edge of This City), a sequel to "Dibatas Kota Ini" that tells the story of the lover returning to the city after years of separation.
5. "Nasib Pengamen" (The Fate of Street Singers), a song that reflects Tommy J Pisa's own experience as a street singer struggling to make ends meet.
-

Where can I find more information about Tommy J Pisa?

-

You can find more information about Tommy J Pisa in the following sources:

- -

How can I contact Tommy J Pisa?

-

If you want to contact Tommy J Pisa for any reason, such as booking him for a show, collaborating with him, or sending him fan mail, you can do so using the following methods:

-
-
-
\ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/buildPrompt.ts b/spaces/BetterAPI/BetterChat_new/src/lib/buildPrompt.ts deleted file mode 100644 index bd48390ba13ed8e68b790cfd475c32f5824d907d..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/lib/buildPrompt.ts +++ /dev/null @@ -1,33 +0,0 @@ -import { - PUBLIC_ASSISTANT_MESSAGE_TOKEN, - PUBLIC_MAX_INPUT_TOKENS, - PUBLIC_PREPROMPT, - PUBLIC_SEP_TOKEN, - PUBLIC_USER_MESSAGE_TOKEN, -} from "$env/static/public"; -import type { Message } from "./types/Message"; - -/** - * Convert [{user: "assistant", content: "hi"}, {user: "user", content: "hello"}] to: - * - * <|assistant|>hi<|endoftext|><|prompter|>hello<|endoftext|><|assistant|> - */ -export function buildPrompt(messages: Message[]): string { - const prompt = - messages - .map( - (m) => - (m.from === "user" - ? PUBLIC_USER_MESSAGE_TOKEN + m.content - : PUBLIC_ASSISTANT_MESSAGE_TOKEN + m.content) + - (m.content.endsWith(PUBLIC_SEP_TOKEN) ? "" : PUBLIC_SEP_TOKEN) - ) - .join("") + PUBLIC_ASSISTANT_MESSAGE_TOKEN; - - // Not super precise, but it's truncated in the model's backend anyway - return ( - PUBLIC_PREPROMPT + - "\n-----\n" + - prompt.split(" ").slice(-parseInt(PUBLIC_MAX_INPUT_TOKENS)).join(" ") - ); -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/typing_extensions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/typing_extensions.py deleted file mode 100644 index 9cbf5b87b590c2d40fd3db2444339df85f71c611..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/typing_extensions.py +++ /dev/null @@ -1,2312 +0,0 @@ -import abc -import collections -import collections.abc -import functools -import inspect -import operator -import sys -import types as _types -import typing -import warnings - - -__all__ = [ - # Super-special typing primitives. - 'Any', - 'ClassVar', - 'Concatenate', - 'Final', - 'LiteralString', - 'ParamSpec', - 'ParamSpecArgs', - 'ParamSpecKwargs', - 'Self', - 'Type', - 'TypeVar', - 'TypeVarTuple', - 'Unpack', - - # ABCs (from collections.abc). - 'Awaitable', - 'AsyncIterator', - 'AsyncIterable', - 'Coroutine', - 'AsyncGenerator', - 'AsyncContextManager', - 'ChainMap', - - # Concrete collection types. - 'ContextManager', - 'Counter', - 'Deque', - 'DefaultDict', - 'NamedTuple', - 'OrderedDict', - 'TypedDict', - - # Structural checks, a.k.a. protocols. - 'SupportsIndex', - - # One-off things. - 'Annotated', - 'assert_never', - 'assert_type', - 'clear_overloads', - 'dataclass_transform', - 'deprecated', - 'get_overloads', - 'final', - 'get_args', - 'get_origin', - 'get_type_hints', - 'IntVar', - 'is_typeddict', - 'Literal', - 'NewType', - 'overload', - 'override', - 'Protocol', - 'reveal_type', - 'runtime', - 'runtime_checkable', - 'Text', - 'TypeAlias', - 'TypeGuard', - 'TYPE_CHECKING', - 'Never', - 'NoReturn', - 'Required', - 'NotRequired', -] - -# for backward compatibility -PEP_560 = True -GenericMeta = type - -# The functions below are modified copies of typing internal helpers. -# They are needed by _ProtocolMeta and they provide support for PEP 646. - -_marker = object() - - -def _check_generic(cls, parameters, elen=_marker): - """Check correct count for parameters of a generic cls (internal helper). - This gives a nice error message in case of count mismatch. 
- """ - if not elen: - raise TypeError(f"{cls} is not a generic class") - if elen is _marker: - if not hasattr(cls, "__parameters__") or not cls.__parameters__: - raise TypeError(f"{cls} is not a generic class") - elen = len(cls.__parameters__) - alen = len(parameters) - if alen != elen: - if hasattr(cls, "__parameters__"): - parameters = [p for p in cls.__parameters__ if not _is_unpack(p)] - num_tv_tuples = sum(isinstance(p, TypeVarTuple) for p in parameters) - if (num_tv_tuples > 0) and (alen >= elen - num_tv_tuples): - return - raise TypeError(f"Too {'many' if alen > elen else 'few'} parameters for {cls};" - f" actual {alen}, expected {elen}") - - -if sys.version_info >= (3, 10): - def _should_collect_from_parameters(t): - return isinstance( - t, (typing._GenericAlias, _types.GenericAlias, _types.UnionType) - ) -elif sys.version_info >= (3, 9): - def _should_collect_from_parameters(t): - return isinstance(t, (typing._GenericAlias, _types.GenericAlias)) -else: - def _should_collect_from_parameters(t): - return isinstance(t, typing._GenericAlias) and not t._special - - -def _collect_type_vars(types, typevar_types=None): - """Collect all type variable contained in types in order of - first appearance (lexicographic order). For example:: - - _collect_type_vars((T, List[S, T])) == (T, S) - """ - if typevar_types is None: - typevar_types = typing.TypeVar - tvars = [] - for t in types: - if ( - isinstance(t, typevar_types) and - t not in tvars and - not _is_unpack(t) - ): - tvars.append(t) - if _should_collect_from_parameters(t): - tvars.extend([t for t in t.__parameters__ if t not in tvars]) - return tuple(tvars) - - -NoReturn = typing.NoReturn - -# Some unconstrained type variables. These are used by the container types. -# (These are not for export.) -T = typing.TypeVar('T') # Any type. -KT = typing.TypeVar('KT') # Key type. -VT = typing.TypeVar('VT') # Value type. -T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers. -T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant. - - -if sys.version_info >= (3, 11): - from typing import Any -else: - - class _AnyMeta(type): - def __instancecheck__(self, obj): - if self is Any: - raise TypeError("typing_extensions.Any cannot be used with isinstance()") - return super().__instancecheck__(obj) - - def __repr__(self): - if self is Any: - return "typing_extensions.Any" - return super().__repr__() - - class Any(metaclass=_AnyMeta): - """Special type indicating an unconstrained type. - - Any is compatible with every type. - - Any assumed to have all methods. - - All values assumed to be instances of Any. - Note that all the above statements are true from the point of view of - static type checkers. At runtime, Any should not be used with instance - checks. - """ - def __new__(cls, *args, **kwargs): - if cls is Any: - raise TypeError("Any cannot be instantiated") - return super().__new__(cls, *args, **kwargs) - - -ClassVar = typing.ClassVar - -# On older versions of typing there is an internal class named "Final". -# 3.8+ -if hasattr(typing, 'Final') and sys.version_info[:2] >= (3, 7): - Final = typing.Final -# 3.7 -else: - class _FinalForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Final = _FinalForm('Final', - doc="""A special typing construct to indicate that a name - cannot be re-assigned or overridden in a subclass. - For example: - - MAX_SIZE: Final = 9000 - MAX_SIZE += 1 # Error reported by type checker - - class Connection: - TIMEOUT: Final[int] = 10 - class FastConnector(Connection): - TIMEOUT = 1 # Error reported by type checker - - There is no runtime checking of these properties.""") - -if sys.version_info >= (3, 11): - final = typing.final -else: - # @final exists in 3.8+, but we backport it for all versions - # before 3.11 to keep support for the __final__ attribute. - # See https://bugs.python.org/issue46342 - def final(f): - """This decorator can be used to indicate to type checkers that - the decorated method cannot be overridden, and decorated class - cannot be subclassed. For example: - - class Base: - @final - def done(self) -> None: - ... - class Sub(Base): - def done(self) -> None: # Error reported by type checker - ... - @final - class Leaf: - ... - class Other(Leaf): # Error reported by type checker - ... - - There is no runtime checking of these properties. The decorator - sets the ``__final__`` attribute to ``True`` on the decorated object - to allow runtime introspection. - """ - try: - f.__final__ = True - except (AttributeError, TypeError): - # Skip the attribute silently if it is not writable. - # AttributeError happens if the object has __slots__ or a - # read-only property, TypeError if it's a builtin class. - pass - return f - - -def IntVar(name): - return typing.TypeVar(name) - - -# 3.8+: -if hasattr(typing, 'Literal'): - Literal = typing.Literal -# 3.7: -else: - class _LiteralForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - return typing._GenericAlias(self, parameters) - - Literal = _LiteralForm('Literal', - doc="""A type that can be used to indicate to type checkers - that the corresponding value has a value literally equivalent - to the provided parameter. For example: - - var: Literal[4] = 4 - - The type checker understands that 'var' is literally equal to - the value 4 and no other value. - - Literal[...] cannot be subclassed. There is no runtime - checking verifying that the parameter is actually a value - instead of a type.""") - - -_overload_dummy = typing._overload_dummy # noqa - - -if hasattr(typing, "get_overloads"): # 3.11+ - overload = typing.overload - get_overloads = typing.get_overloads - clear_overloads = typing.clear_overloads -else: - # {module: {qualname: {firstlineno: func}}} - _overload_registry = collections.defaultdict( - functools.partial(collections.defaultdict, dict) - ) - - def overload(func): - """Decorator for overloaded functions/methods. - - In a stub file, place two or more stub definitions for the same - function in a row, each decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... - @overload - def utf8(value: str) -> bytes: ... - - In a non-stub file (i.e. a regular .py file), do the same but - follow it with an implementation. The implementation should *not* - be decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... 
- @overload - def utf8(value: str) -> bytes: ... - def utf8(value): - # implementation goes here - - The overloads for a function can be retrieved at runtime using the - get_overloads() function. - """ - # classmethod and staticmethod - f = getattr(func, "__func__", func) - try: - _overload_registry[f.__module__][f.__qualname__][ - f.__code__.co_firstlineno - ] = func - except AttributeError: - # Not a normal function; ignore. - pass - return _overload_dummy - - def get_overloads(func): - """Return all defined overloads for *func* as a sequence.""" - # classmethod and staticmethod - f = getattr(func, "__func__", func) - if f.__module__ not in _overload_registry: - return [] - mod_dict = _overload_registry[f.__module__] - if f.__qualname__ not in mod_dict: - return [] - return list(mod_dict[f.__qualname__].values()) - - def clear_overloads(): - """Clear all overloads in the registry.""" - _overload_registry.clear() - - -# This is not a real generic class. Don't use outside annotations. -Type = typing.Type - -# Various ABCs mimicking those in collections.abc. -# A few are simply re-exported for completeness. - - -Awaitable = typing.Awaitable -Coroutine = typing.Coroutine -AsyncIterable = typing.AsyncIterable -AsyncIterator = typing.AsyncIterator -Deque = typing.Deque -ContextManager = typing.ContextManager -AsyncContextManager = typing.AsyncContextManager -DefaultDict = typing.DefaultDict - -# 3.7.2+ -if hasattr(typing, 'OrderedDict'): - OrderedDict = typing.OrderedDict -# 3.7.0-3.7.2 -else: - OrderedDict = typing._alias(collections.OrderedDict, (KT, VT)) - -Counter = typing.Counter -ChainMap = typing.ChainMap -AsyncGenerator = typing.AsyncGenerator -NewType = typing.NewType -Text = typing.Text -TYPE_CHECKING = typing.TYPE_CHECKING - - -_PROTO_WHITELIST = ['Callable', 'Awaitable', - 'Iterable', 'Iterator', 'AsyncIterable', 'AsyncIterator', - 'Hashable', 'Sized', 'Container', 'Collection', 'Reversible', - 'ContextManager', 'AsyncContextManager'] - - -def _get_protocol_attrs(cls): - attrs = set() - for base in cls.__mro__[:-1]: # without object - if base.__name__ in ('Protocol', 'Generic'): - continue - annotations = getattr(base, '__annotations__', {}) - for attr in list(base.__dict__.keys()) + list(annotations.keys()): - if (not attr.startswith('_abc_') and attr not in ( - '__abstractmethods__', '__annotations__', '__weakref__', - '_is_protocol', '_is_runtime_protocol', '__dict__', - '__args__', '__slots__', - '__next_in_mro__', '__parameters__', '__origin__', - '__orig_bases__', '__extra__', '__tree_hash__', - '__doc__', '__subclasshook__', '__init__', '__new__', - '__module__', '_MutableMapping__marker', '_gorg')): - attrs.add(attr) - return attrs - - -def _is_callable_members_only(cls): - return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls)) - - -def _maybe_adjust_parameters(cls): - """Helper function used in Protocol.__init_subclass__ and _TypedDictMeta.__new__. - - The contents of this function are very similar - to logic found in typing.Generic.__init_subclass__ - on the CPython main branch. - """ - tvars = [] - if '__orig_bases__' in cls.__dict__: - tvars = typing._collect_type_vars(cls.__orig_bases__) - # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn]. - # If found, tvars must be a subset of it. - # If not found, tvars is it. - # Also check for and reject plain Generic, - # and reject multiple Generic[...] and/or Protocol[...]. 
- gvars = None - for base in cls.__orig_bases__: - if (isinstance(base, typing._GenericAlias) and - base.__origin__ in (typing.Generic, Protocol)): - # for error messages - the_base = base.__origin__.__name__ - if gvars is not None: - raise TypeError( - "Cannot inherit from Generic[...]" - " and/or Protocol[...] multiple types.") - gvars = base.__parameters__ - if gvars is None: - gvars = tvars - else: - tvarset = set(tvars) - gvarset = set(gvars) - if not tvarset <= gvarset: - s_vars = ', '.join(str(t) for t in tvars if t not in gvarset) - s_args = ', '.join(str(g) for g in gvars) - raise TypeError(f"Some type variables ({s_vars}) are" - f" not listed in {the_base}[{s_args}]") - tvars = gvars - cls.__parameters__ = tuple(tvars) - - -# 3.8+ -if hasattr(typing, 'Protocol'): - Protocol = typing.Protocol -# 3.7 -else: - - def _no_init(self, *args, **kwargs): - if type(self)._is_protocol: - raise TypeError('Protocols cannot be instantiated') - - class _ProtocolMeta(abc.ABCMeta): # noqa: B024 - # This metaclass is a bit unfortunate and exists only because of the lack - # of __instancehook__. - def __instancecheck__(cls, instance): - # We need this method for situations where attributes are - # assigned in __init__. - if ((not getattr(cls, '_is_protocol', False) or - _is_callable_members_only(cls)) and - issubclass(instance.__class__, cls)): - return True - if cls._is_protocol: - if all(hasattr(instance, attr) and - (not callable(getattr(cls, attr, None)) or - getattr(instance, attr) is not None) - for attr in _get_protocol_attrs(cls)): - return True - return super().__instancecheck__(instance) - - class Protocol(metaclass=_ProtocolMeta): - # There is quite a lot of overlapping code with typing.Generic. - # Unfortunately it is hard to avoid this while these live in two different - # modules. The duplicated code will be removed when Protocol is moved to typing. - """Base class for protocol classes. Protocol classes are defined as:: - - class Proto(Protocol): - def meth(self) -> int: - ... - - Such classes are primarily used with static type checkers that recognize - structural subtyping (static duck-typing), for example:: - - class C: - def meth(self) -> int: - return 0 - - def func(x: Proto) -> int: - return x.meth() - - func(C()) # Passes static type check - - See PEP 544 for details. Protocol classes decorated with - @typing_extensions.runtime act as simple-minded runtime protocol that checks - only the presence of given attributes, ignoring their type signatures. - - Protocol classes can be generic, they are defined as:: - - class GenProto(Protocol[T]): - def meth(self) -> T: - ... - """ - __slots__ = () - _is_protocol = True - - def __new__(cls, *args, **kwds): - if cls is Protocol: - raise TypeError("Type Protocol cannot be instantiated; " - "it can only be used as a base class") - return super().__new__(cls) - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple): - params = (params,) - if not params and cls is not typing.Tuple: - raise TypeError( - f"Parameter list to {cls.__qualname__}[...] cannot be empty") - msg = "Parameters to generic types must be types." - params = tuple(typing._type_check(p, msg) for p in params) # noqa - if cls is Protocol: - # Generic can only be subscripted with unique type variables. - if not all(isinstance(p, typing.TypeVar) for p in params): - i = 0 - while isinstance(params[i], typing.TypeVar): - i += 1 - raise TypeError( - "Parameters to Protocol[...] must all be type variables." 
- f" Parameter {i + 1} is {params[i]}") - if len(set(params)) != len(params): - raise TypeError( - "Parameters to Protocol[...] must all be unique") - else: - # Subscripting a regular Generic subclass. - _check_generic(cls, params, len(cls.__parameters__)) - return typing._GenericAlias(cls, params) - - def __init_subclass__(cls, *args, **kwargs): - if '__orig_bases__' in cls.__dict__: - error = typing.Generic in cls.__orig_bases__ - else: - error = typing.Generic in cls.__bases__ - if error: - raise TypeError("Cannot inherit from plain Generic") - _maybe_adjust_parameters(cls) - - # Determine if this is a protocol or a concrete subclass. - if not cls.__dict__.get('_is_protocol', None): - cls._is_protocol = any(b is Protocol for b in cls.__bases__) - - # Set (or override) the protocol subclass hook. - def _proto_hook(other): - if not cls.__dict__.get('_is_protocol', None): - return NotImplemented - if not getattr(cls, '_is_runtime_protocol', False): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Instance and class checks can only be used with" - " @runtime protocols") - if not _is_callable_members_only(cls): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Protocols with non-method members" - " don't support issubclass()") - if not isinstance(other, type): - # Same error as for issubclass(1, int) - raise TypeError('issubclass() arg 1 must be a class') - for attr in _get_protocol_attrs(cls): - for base in other.__mro__: - if attr in base.__dict__: - if base.__dict__[attr] is None: - return NotImplemented - break - annotations = getattr(base, '__annotations__', {}) - if (isinstance(annotations, typing.Mapping) and - attr in annotations and - isinstance(other, _ProtocolMeta) and - other._is_protocol): - break - else: - return NotImplemented - return True - if '__subclasshook__' not in cls.__dict__: - cls.__subclasshook__ = _proto_hook - - # We have nothing more to do for non-protocols. - if not cls._is_protocol: - return - - # Check consistency of bases. - for base in cls.__bases__: - if not (base in (object, typing.Generic) or - base.__module__ == 'collections.abc' and - base.__name__ in _PROTO_WHITELIST or - isinstance(base, _ProtocolMeta) and base._is_protocol): - raise TypeError('Protocols can only inherit from other' - f' protocols, got {repr(base)}') - cls.__init__ = _no_init - - -# 3.8+ -if hasattr(typing, 'runtime_checkable'): - runtime_checkable = typing.runtime_checkable -# 3.7 -else: - def runtime_checkable(cls): - """Mark a protocol class as a runtime protocol, so that it - can be used with isinstance() and issubclass(). Raise TypeError - if applied to a non-protocol class. - - This allows a simple-minded structural check very similar to the - one-offs in collections.abc such as Hashable. - """ - if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol: - raise TypeError('@runtime_checkable can be only applied to protocol classes,' - f' got {cls!r}') - cls._is_runtime_protocol = True - return cls - - -# Exists for backwards compatibility. 
-runtime = runtime_checkable - - -# 3.8+ -if hasattr(typing, 'SupportsIndex'): - SupportsIndex = typing.SupportsIndex -# 3.7 -else: - @runtime_checkable - class SupportsIndex(Protocol): - __slots__ = () - - @abc.abstractmethod - def __index__(self) -> int: - pass - - -if hasattr(typing, "Required"): - # The standard library TypedDict in Python 3.8 does not store runtime information - # about which (if any) keys are optional. See https://bugs.python.org/issue38834 - # The standard library TypedDict in Python 3.9.0/1 does not honour the "total" - # keyword with old-style TypedDict(). See https://bugs.python.org/issue42059 - # The standard library TypedDict below Python 3.11 does not store runtime - # information about optional and required keys when using Required or NotRequired. - # Generic TypedDicts are also impossible using typing.TypedDict on Python <3.11. - TypedDict = typing.TypedDict - _TypedDictMeta = typing._TypedDictMeta - is_typeddict = typing.is_typeddict -else: - def _check_fails(cls, other): - try: - if sys._getframe(1).f_globals['__name__'] not in ['abc', - 'functools', - 'typing']: - # Typed dicts are only for static structural subtyping. - raise TypeError('TypedDict does not support instance and class checks') - except (AttributeError, ValueError): - pass - return False - - def _dict_new(*args, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - return dict(*args, **kwargs) - - _dict_new.__text_signature__ = '($cls, _typename, _fields=None, /, **kwargs)' - - def _typeddict_new(*args, total=True, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - if args: - typename, args = args[0], args[1:] # allow the "_typename" keyword be passed - elif '_typename' in kwargs: - typename = kwargs.pop('_typename') - import warnings - warnings.warn("Passing '_typename' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - raise TypeError("TypedDict.__new__() missing 1 required positional " - "argument: '_typename'") - if args: - try: - fields, = args # allow the "_fields" keyword be passed - except ValueError: - raise TypeError('TypedDict.__new__() takes from 2 to 3 ' - f'positional arguments but {len(args) + 2} ' - 'were given') - elif '_fields' in kwargs and len(kwargs) == 1: - fields = kwargs.pop('_fields') - import warnings - warnings.warn("Passing '_fields' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - fields = None - - if fields is None: - fields = kwargs - elif kwargs: - raise TypeError("TypedDict takes either a dict or keyword arguments," - " but not both") - - ns = {'__annotations__': dict(fields)} - try: - # Setting correct module is necessary to make typed dict classes pickleable. - ns['__module__'] = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - pass - - return _TypedDictMeta(typename, (), ns, total=total) - - _typeddict_new.__text_signature__ = ('($cls, _typename, _fields=None,' - ' /, *, total=True, **kwargs)') - - _TAKES_MODULE = "module" in inspect.signature(typing._type_check).parameters - - class _TypedDictMeta(type): - def __init__(cls, name, bases, ns, total=True): - super().__init__(name, bases, ns) - - def __new__(cls, name, bases, ns, total=True): - # Create new typed dict class object. 
- # This method is called directly when TypedDict is subclassed, - # or via _typeddict_new when TypedDict is instantiated. This way - # TypedDict supports all three syntaxes described in its docstring. - # Subclasses and instances of TypedDict return actual dictionaries - # via _dict_new. - ns['__new__'] = _typeddict_new if name == 'TypedDict' else _dict_new - # Don't insert typing.Generic into __bases__ here, - # or Generic.__init_subclass__ will raise TypeError - # in the super().__new__() call. - # Instead, monkey-patch __bases__ onto the class after it's been created. - tp_dict = super().__new__(cls, name, (dict,), ns) - - if any(issubclass(base, typing.Generic) for base in bases): - tp_dict.__bases__ = (typing.Generic, dict) - _maybe_adjust_parameters(tp_dict) - - annotations = {} - own_annotations = ns.get('__annotations__', {}) - msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type" - kwds = {"module": tp_dict.__module__} if _TAKES_MODULE else {} - own_annotations = { - n: typing._type_check(tp, msg, **kwds) - for n, tp in own_annotations.items() - } - required_keys = set() - optional_keys = set() - - for base in bases: - annotations.update(base.__dict__.get('__annotations__', {})) - required_keys.update(base.__dict__.get('__required_keys__', ())) - optional_keys.update(base.__dict__.get('__optional_keys__', ())) - - annotations.update(own_annotations) - for annotation_key, annotation_type in own_annotations.items(): - annotation_origin = get_origin(annotation_type) - if annotation_origin is Annotated: - annotation_args = get_args(annotation_type) - if annotation_args: - annotation_type = annotation_args[0] - annotation_origin = get_origin(annotation_type) - - if annotation_origin is Required: - required_keys.add(annotation_key) - elif annotation_origin is NotRequired: - optional_keys.add(annotation_key) - elif total: - required_keys.add(annotation_key) - else: - optional_keys.add(annotation_key) - - tp_dict.__annotations__ = annotations - tp_dict.__required_keys__ = frozenset(required_keys) - tp_dict.__optional_keys__ = frozenset(optional_keys) - if not hasattr(tp_dict, '__total__'): - tp_dict.__total__ = total - return tp_dict - - __instancecheck__ = __subclasscheck__ = _check_fails - - TypedDict = _TypedDictMeta('TypedDict', (dict,), {}) - TypedDict.__module__ = __name__ - TypedDict.__doc__ = \ - """A simple typed name space. At runtime it is equivalent to a plain dict. - - TypedDict creates a dictionary type that expects all of its - instances to have a certain set of keys, with each key - associated with a value of a consistent type. This expectation - is not checked at runtime but is only enforced by type checkers. - Usage:: - - class Point2D(TypedDict): - x: int - y: int - label: str - - a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK - b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check - - assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first') - - The type info can be accessed via the Point2D.__annotations__ dict, and - the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets. 
- TypedDict supports two additional equivalent forms:: - - Point2D = TypedDict('Point2D', x=int, y=int, label=str) - Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str}) - - The class syntax is only supported in Python 3.6+, while two other - syntax forms work for Python 2.7 and 3.2+ - """ - - if hasattr(typing, "_TypedDictMeta"): - _TYPEDDICT_TYPES = (typing._TypedDictMeta, _TypedDictMeta) - else: - _TYPEDDICT_TYPES = (_TypedDictMeta,) - - def is_typeddict(tp): - """Check if an annotation is a TypedDict class - - For example:: - class Film(TypedDict): - title: str - year: int - - is_typeddict(Film) # => True - is_typeddict(Union[list, str]) # => False - """ - return isinstance(tp, tuple(_TYPEDDICT_TYPES)) - - -if hasattr(typing, "assert_type"): - assert_type = typing.assert_type - -else: - def assert_type(__val, __typ): - """Assert (to the type checker) that the value is of the given type. - - When the type checker encounters a call to assert_type(), it - emits an error if the value is not of the specified type:: - - def greet(name: str) -> None: - assert_type(name, str) # ok - assert_type(name, int) # type checker error - - At runtime this returns the first argument unchanged and otherwise - does nothing. - """ - return __val - - -if hasattr(typing, "Required"): - get_type_hints = typing.get_type_hints -else: - import functools - import types - - # replaces _strip_annotations() - def _strip_extras(t): - """Strips Annotated, Required and NotRequired from a given type.""" - if isinstance(t, _AnnotatedAlias): - return _strip_extras(t.__origin__) - if hasattr(t, "__origin__") and t.__origin__ in (Required, NotRequired): - return _strip_extras(t.__args__[0]) - if isinstance(t, typing._GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return t.copy_with(stripped_args) - if hasattr(types, "GenericAlias") and isinstance(t, types.GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return types.GenericAlias(t.__origin__, stripped_args) - if hasattr(types, "UnionType") and isinstance(t, types.UnionType): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return functools.reduce(operator.or_, stripped_args) - - return t - - def get_type_hints(obj, globalns=None, localns=None, include_extras=False): - """Return type hints for an object. - - This is often the same as obj.__annotations__, but it handles - forward references encoded as string literals, adds Optional[t] if a - default value equal to None is set and recursively replaces all - 'Annotated[T, ...]', 'Required[T]' or 'NotRequired[T]' with 'T' - (unless 'include_extras=True'). - - The argument may be a module, class, method, or function. The annotations - are returned as a dictionary. For classes, annotations include also - inherited members. - - TypeError is raised if the argument is not of a type that can contain - annotations, and an empty dictionary is returned if no annotations are - present. - - BEWARE -- the behavior of globalns and localns is counterintuitive - (unless you are familiar with how eval() and exec() work). The - search order is locals first, then globals. - - - If no dict arguments are passed, an attempt is made to use the - globals from obj (or the respective module's globals for classes), - and these are also used as the locals. If the object does not appear - to have globals, an empty dictionary is used. 
- - - If one dict argument is passed, it is used for both globals and - locals. - - - If two dict arguments are passed, they specify globals and - locals, respectively. - """ - if hasattr(typing, "Annotated"): - hint = typing.get_type_hints( - obj, globalns=globalns, localns=localns, include_extras=True - ) - else: - hint = typing.get_type_hints(obj, globalns=globalns, localns=localns) - if include_extras: - return hint - return {k: _strip_extras(t) for k, t in hint.items()} - - -# Python 3.9+ has PEP 593 (Annotated) -if hasattr(typing, 'Annotated'): - Annotated = typing.Annotated - # Not exported and not a public API, but needed for get_origin() and get_args() - # to work. - _AnnotatedAlias = typing._AnnotatedAlias -# 3.7-3.8 -else: - class _AnnotatedAlias(typing._GenericAlias, _root=True): - """Runtime representation of an annotated type. - - At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't' - with extra annotations. The alias behaves like a normal typing alias, - instantiating is the same as instantiating the underlying type, binding - it to types is also the same. - """ - def __init__(self, origin, metadata): - if isinstance(origin, _AnnotatedAlias): - metadata = origin.__metadata__ + metadata - origin = origin.__origin__ - super().__init__(origin, origin) - self.__metadata__ = metadata - - def copy_with(self, params): - assert len(params) == 1 - new_type = params[0] - return _AnnotatedAlias(new_type, self.__metadata__) - - def __repr__(self): - return (f"typing_extensions.Annotated[{typing._type_repr(self.__origin__)}, " - f"{', '.join(repr(a) for a in self.__metadata__)}]") - - def __reduce__(self): - return operator.getitem, ( - Annotated, (self.__origin__,) + self.__metadata__ - ) - - def __eq__(self, other): - if not isinstance(other, _AnnotatedAlias): - return NotImplemented - if self.__origin__ != other.__origin__: - return False - return self.__metadata__ == other.__metadata__ - - def __hash__(self): - return hash((self.__origin__, self.__metadata__)) - - class Annotated: - """Add context specific metadata to a type. - - Example: Annotated[int, runtime_check.Unsigned] indicates to the - hypothetical runtime_check module that this type is an unsigned int. - Every other consumer of this type can ignore this metadata and treat - this type as int. - - The first argument to Annotated must be a valid type (and will be in - the __origin__ field), the remaining arguments are kept as a tuple in - the __extra__ field. - - Details: - - - It's an error to call `Annotated` with less than two arguments. - - Nested Annotated are flattened:: - - Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3] - - - Instantiating an annotated type is equivalent to instantiating the - underlying type:: - - Annotated[C, Ann1](5) == C(5) - - - Annotated can be used as a generic type alias:: - - Optimized = Annotated[T, runtime.Optimize()] - Optimized[int] == Annotated[int, runtime.Optimize()] - - OptimizedList = Annotated[List[T], runtime.Optimize()] - OptimizedList[int] == Annotated[List[int], runtime.Optimize()] - """ - - __slots__ = () - - def __new__(cls, *args, **kwargs): - raise TypeError("Type Annotated cannot be instantiated.") - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple) or len(params) < 2: - raise TypeError("Annotated[...] 
should be used " - "with at least two arguments (a type and an " - "annotation).") - allowed_special_forms = (ClassVar, Final) - if get_origin(params[0]) in allowed_special_forms: - origin = params[0] - else: - msg = "Annotated[t, ...]: t must be a type." - origin = typing._type_check(params[0], msg) - metadata = tuple(params[1:]) - return _AnnotatedAlias(origin, metadata) - - def __init_subclass__(cls, *args, **kwargs): - raise TypeError( - f"Cannot subclass {cls.__module__}.Annotated" - ) - -# Python 3.8 has get_origin() and get_args() but those implementations aren't -# Annotated-aware, so we can't use those. Python 3.9's versions don't support -# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do. -if sys.version_info[:2] >= (3, 10): - get_origin = typing.get_origin - get_args = typing.get_args -# 3.7-3.9 -else: - try: - # 3.9+ - from typing import _BaseGenericAlias - except ImportError: - _BaseGenericAlias = typing._GenericAlias - try: - # 3.9+ - from typing import GenericAlias as _typing_GenericAlias - except ImportError: - _typing_GenericAlias = typing._GenericAlias - - def get_origin(tp): - """Get the unsubscripted version of a type. - - This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar - and Annotated. Return None for unsupported types. Examples:: - - get_origin(Literal[42]) is Literal - get_origin(int) is None - get_origin(ClassVar[int]) is ClassVar - get_origin(Generic) is Generic - get_origin(Generic[T]) is Generic - get_origin(Union[T, int]) is Union - get_origin(List[Tuple[T, T]][int]) == list - get_origin(P.args) is P - """ - if isinstance(tp, _AnnotatedAlias): - return Annotated - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias, _BaseGenericAlias, - ParamSpecArgs, ParamSpecKwargs)): - return tp.__origin__ - if tp is typing.Generic: - return typing.Generic - return None - - def get_args(tp): - """Get type arguments with all substitutions performed. - - For unions, basic simplifications used by Union constructor are performed. - Examples:: - get_args(Dict[str, int]) == (str, int) - get_args(int) == () - get_args(Union[int, Union[T, int], str][int]) == (int, str) - get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int]) - get_args(Callable[[], T][int]) == ([], int) - """ - if isinstance(tp, _AnnotatedAlias): - return (tp.__origin__,) + tp.__metadata__ - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias)): - if getattr(tp, "_special", False): - return () - res = tp.__args__ - if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis: - res = (list(res[:-1]), res[-1]) - return res - return () - - -# 3.10+ -if hasattr(typing, 'TypeAlias'): - TypeAlias = typing.TypeAlias -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeAliasForm - def TypeAlias(self, parameters): - """Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example above. - """ - raise TypeError(f"{self} is not subscriptable") -# 3.7-3.8 -else: - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - TypeAlias = _TypeAliasForm('TypeAlias', - doc="""Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example - above.""") - - -class _DefaultMixin: - """Mixin for TypeVarLike defaults.""" - - __slots__ = () - - def __init__(self, default): - if isinstance(default, (tuple, list)): - self.__default__ = tuple((typing._type_check(d, "Default must be a type") - for d in default)) - elif default != _marker: - self.__default__ = typing._type_check(default, "Default must be a type") - else: - self.__default__ = None - - -# Add default and infer_variance parameters from PEP 696 and 695 -class TypeVar(typing.TypeVar, _DefaultMixin, _root=True): - """Type variable.""" - - __module__ = 'typing' - - def __init__(self, name, *constraints, bound=None, - covariant=False, contravariant=False, - default=_marker, infer_variance=False): - super().__init__(name, *constraints, bound=bound, covariant=covariant, - contravariant=contravariant) - _DefaultMixin.__init__(self, default) - self.__infer_variance__ = infer_variance - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - -# Python 3.10+ has PEP 612 -if hasattr(typing, 'ParamSpecArgs'): - ParamSpecArgs = typing.ParamSpecArgs - ParamSpecKwargs = typing.ParamSpecKwargs -# 3.7-3.9 -else: - class _Immutable: - """Mixin to indicate that object should not be copied.""" - __slots__ = () - - def __copy__(self): - return self - - def __deepcopy__(self, memo): - return self - - class ParamSpecArgs(_Immutable): - """The args for a ParamSpec object. - - Given a ParamSpec object P, P.args is an instance of ParamSpecArgs. - - ParamSpecArgs objects have a reference back to their ParamSpec: - - P.args.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. - """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.args" - - def __eq__(self, other): - if not isinstance(other, ParamSpecArgs): - return NotImplemented - return self.__origin__ == other.__origin__ - - class ParamSpecKwargs(_Immutable): - """The kwargs for a ParamSpec object. - - Given a ParamSpec object P, P.kwargs is an instance of ParamSpecKwargs. - - ParamSpecKwargs objects have a reference back to their ParamSpec: - - P.kwargs.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. 
- """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.kwargs" - - def __eq__(self, other): - if not isinstance(other, ParamSpecKwargs): - return NotImplemented - return self.__origin__ == other.__origin__ - -# 3.10+ -if hasattr(typing, 'ParamSpec'): - - # Add default Parameter - PEP 696 - class ParamSpec(typing.ParamSpec, _DefaultMixin, _root=True): - """Parameter specification variable.""" - - __module__ = 'typing' - - def __init__(self, name, *, bound=None, covariant=False, contravariant=False, - default=_marker): - super().__init__(name, bound=bound, covariant=covariant, - contravariant=contravariant) - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - -# 3.7-3.9 -else: - - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class ParamSpec(list, _DefaultMixin): - """Parameter specification variable. - - Usage:: - - P = ParamSpec('P') - - Parameter specification variables exist primarily for the benefit of static - type checkers. They are used to forward the parameter types of one - callable to another callable, a pattern commonly found in higher order - functions and decorators. They are only valid when used in ``Concatenate``, - or s the first argument to ``Callable``. In Python 3.10 and higher, - they are also supported in user-defined Generics at runtime. - See class Generic for more information on generic types. An - example for annotating a decorator:: - - T = TypeVar('T') - P = ParamSpec('P') - - def add_logging(f: Callable[P, T]) -> Callable[P, T]: - '''A type-safe decorator to add logging to a function.''' - def inner(*args: P.args, **kwargs: P.kwargs) -> T: - logging.info(f'{f.__name__} was called') - return f(*args, **kwargs) - return inner - - @add_logging - def add_two(x: float, y: float) -> float: - '''Add two numbers together.''' - return x + y - - Parameter specification variables defined with covariant=True or - contravariant=True can be used to declare covariant or contravariant - generic types. These keyword arguments are valid, but their actual semantics - are yet to be decided. See PEP 612 for details. - - Parameter specification variables can be introspected. e.g.: - - P.__name__ == 'T' - P.__bound__ == None - P.__covariant__ == False - P.__contravariant__ == False - - Note that only parameter specification variables defined in global scope can - be pickled. - """ - - # Trick Generic __parameters__. 
- __class__ = typing.TypeVar - - @property - def args(self): - return ParamSpecArgs(self) - - @property - def kwargs(self): - return ParamSpecKwargs(self) - - def __init__(self, name, *, bound=None, covariant=False, contravariant=False, - default=_marker): - super().__init__([self]) - self.__name__ = name - self.__covariant__ = bool(covariant) - self.__contravariant__ = bool(contravariant) - if bound: - self.__bound__ = typing._type_check(bound, 'Bound must be a type.') - else: - self.__bound__ = None - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - def __repr__(self): - if self.__covariant__: - prefix = '+' - elif self.__contravariant__: - prefix = '-' - else: - prefix = '~' - return prefix + self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - # Hack to get typing._type_check to pass. - def __call__(self, *args, **kwargs): - pass - - -# 3.7-3.9 -if not hasattr(typing, 'Concatenate'): - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class _ConcatenateGenericAlias(list): - - # Trick Generic into looking into this for __parameters__. - __class__ = typing._GenericAlias - - # Flag in 3.8. - _special = False - - def __init__(self, origin, args): - super().__init__(args) - self.__origin__ = origin - self.__args__ = args - - def __repr__(self): - _type_repr = typing._type_repr - return (f'{_type_repr(self.__origin__)}' - f'[{", ".join(_type_repr(arg) for arg in self.__args__)}]') - - def __hash__(self): - return hash((self.__origin__, self.__args__)) - - # Hack to get typing._type_check to pass in Generic. - def __call__(self, *args, **kwargs): - pass - - @property - def __parameters__(self): - return tuple( - tp for tp in self.__args__ if isinstance(tp, (typing.TypeVar, ParamSpec)) - ) - - -# 3.7-3.9 -@typing._tp_cache -def _concatenate_getitem(self, parameters): - if parameters == (): - raise TypeError("Cannot take a Concatenate of no types.") - if not isinstance(parameters, tuple): - parameters = (parameters,) - if not isinstance(parameters[-1], ParamSpec): - raise TypeError("The last parameter to Concatenate should be a " - "ParamSpec variable.") - msg = "Concatenate[arg, ...]: each arg must be a type." - parameters = tuple(typing._type_check(p, msg) for p in parameters) - return _ConcatenateGenericAlias(self, parameters) - - -# 3.10+ -if hasattr(typing, 'Concatenate'): - Concatenate = typing.Concatenate - _ConcatenateGenericAlias = typing._ConcatenateGenericAlias # noqa -# 3.9 -elif sys.version_info[:2] >= (3, 9): - @_TypeAliasForm - def Concatenate(self, parameters): - """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """ - return _concatenate_getitem(self, parameters) -# 3.7-8 -else: - class _ConcatenateForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - def __getitem__(self, parameters): - return _concatenate_getitem(self, parameters) - - Concatenate = _ConcatenateForm( - 'Concatenate', - doc="""Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """) - -# 3.10+ -if hasattr(typing, 'TypeGuard'): - TypeGuard = typing.TypeGuard -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeGuardForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeGuardForm - def TypeGuard(self, parameters): - """Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """ - item = typing._type_check(parameters, f'{self} accepts only a single type.') - return typing._GenericAlias(self, (item,)) -# 3.7-3.8 -else: - class _TypeGuardForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type') - return typing._GenericAlias(self, (item,)) - - TypeGuard = _TypeGuardForm( - 'TypeGuard', - doc="""Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. 
The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """) - - -# Vendored from cpython typing._SpecialFrom -class _SpecialForm(typing._Final, _root=True): - __slots__ = ('_name', '__doc__', '_getitem') - - def __init__(self, getitem): - self._getitem = getitem - self._name = getitem.__name__ - self.__doc__ = getitem.__doc__ - - def __getattr__(self, item): - if item in {'__name__', '__qualname__'}: - return self._name - - raise AttributeError(item) - - def __mro_entries__(self, bases): - raise TypeError(f"Cannot subclass {self!r}") - - def __repr__(self): - return f'typing_extensions.{self._name}' - - def __reduce__(self): - return self._name - - def __call__(self, *args, **kwds): - raise TypeError(f"Cannot instantiate {self!r}") - - def __or__(self, other): - return typing.Union[self, other] - - def __ror__(self, other): - return typing.Union[other, self] - - def __instancecheck__(self, obj): - raise TypeError(f"{self} cannot be used with isinstance()") - - def __subclasscheck__(self, cls): - raise TypeError(f"{self} cannot be used with issubclass()") - - @typing._tp_cache - def __getitem__(self, parameters): - return self._getitem(self, parameters) - - -if hasattr(typing, "LiteralString"): - LiteralString = typing.LiteralString -else: - @_SpecialForm - def LiteralString(self, params): - """Represents an arbitrary literal string. - - Example:: - - from pip._vendor.typing_extensions import LiteralString - - def query(sql: LiteralString) -> ...: - ... - - query("SELECT * FROM table") # ok - query(f"SELECT * FROM {input()}") # not ok - - See PEP 675 for details. - - """ - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Self"): - Self = typing.Self -else: - @_SpecialForm - def Self(self, params): - """Used to spell the type of "self" in classes. - - Example:: - - from typing import Self - - class ReturnsSelf: - def parse(self, data: bytes) -> Self: - ... - return self - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Never"): - Never = typing.Never -else: - @_SpecialForm - def Never(self, params): - """The bottom type, a type that has no members. 
- - This can be used to define a function that should never be - called, or a function that never returns:: - - from pip._vendor.typing_extensions import Never - - def never_call_me(arg: Never) -> None: - pass - - def int_or_str(arg: int | str) -> None: - never_call_me(arg) # type checker error - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - never_call_me(arg) # ok, arg is of type Never - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, 'Required'): - Required = typing.Required - NotRequired = typing.NotRequired -elif sys.version_info[:2] >= (3, 9): - class _ExtensionsSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_ExtensionsSpecialForm - def Required(self, parameters): - """A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - @_ExtensionsSpecialForm - def NotRequired(self, parameters): - """A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - -else: - class _RequiredForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Required = _RequiredForm( - 'Required', - doc="""A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """) - NotRequired = _RequiredForm( - 'NotRequired', - doc="""A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """) - - -if hasattr(typing, "Unpack"): # 3.11+ - Unpack = typing.Unpack -elif sys.version_info[:2] >= (3, 9): - class _UnpackSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - @_UnpackSpecialForm - def Unpack(self, parameters): - """A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... 
- - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - -else: - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - class _UnpackForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - Unpack = _UnpackForm( - 'Unpack', - doc="""A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... - - """) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - - -if hasattr(typing, "TypeVarTuple"): # 3.11+ - - # Add default Parameter - PEP 696 - class TypeVarTuple(typing.TypeVarTuple, _DefaultMixin, _root=True): - """Type variable tuple.""" - - def __init__(self, name, *, default=_marker): - super().__init__(name) - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - -else: - class TypeVarTuple(_DefaultMixin): - """Type variable tuple. - - Usage:: - - Ts = TypeVarTuple('Ts') - - In the same way that a normal type variable is a stand-in for a single - type such as ``int``, a type variable *tuple* is a stand-in for a *tuple* - type such as ``Tuple[int, str]``. - - Type variable tuples can be used in ``Generic`` declarations. - Consider the following example:: - - class Array(Generic[*Ts]): ... - - The ``Ts`` type variable tuple here behaves like ``tuple[T1, T2]``, - where ``T1`` and ``T2`` are type variables. To use these type variables - as type parameters of ``Array``, we must *unpack* the type variable tuple using - the star operator: ``*Ts``. The signature of ``Array`` then behaves - as if we had simply written ``class Array(Generic[T1, T2]): ...``. - In contrast to ``Generic[T1, T2]``, however, ``Generic[*Shape]`` allows - us to parameterise the class with an *arbitrary* number of type parameters. - - Type variable tuples can be used anywhere a normal ``TypeVar`` can. - This includes class definitions, as shown above, as well as function - signatures and variable annotations:: - - class Array(Generic[*Ts]): - - def __init__(self, shape: Tuple[*Ts]): - self._shape: Tuple[*Ts] = shape - - def get_shape(self) -> Tuple[*Ts]: - return self._shape - - shape = (Height(480), Width(640)) - x: Array[Height, Width] = Array(shape) - y = abs(x) # Inferred type is Array[Height, Width] - z = x + x # ... is Array[Height, Width] - x.get_shape() # ... is tuple[Height, Width] - - """ - - # Trick Generic __parameters__. 
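-        # Setting __class__ to typing.TypeVar below makes this object pass typing's
-        # TypeVar checks, so Generic[...] accepts it and records it in __parameters__.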
- __class__ = typing.TypeVar - - def __iter__(self): - yield self.__unpacked__ - - def __init__(self, name, *, default=_marker): - self.__name__ = name - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - self.__unpacked__ = Unpack[self] - - def __repr__(self): - return self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - def __init_subclass__(self, *args, **kwds): - if '_root' not in kwds: - raise TypeError("Cannot subclass special typing classes") - - -if hasattr(typing, "reveal_type"): - reveal_type = typing.reveal_type -else: - def reveal_type(__obj: T) -> T: - """Reveal the inferred type of a variable. - - When a static type checker encounters a call to ``reveal_type()``, - it will emit the inferred type of the argument:: - - x: int = 1 - reveal_type(x) - - Running a static type checker (e.g., ``mypy``) on this example - will produce output similar to 'Revealed type is "builtins.int"'. - - At runtime, the function prints the runtime type of the - argument and returns it unchanged. - - """ - print(f"Runtime type is {type(__obj).__name__!r}", file=sys.stderr) - return __obj - - -if hasattr(typing, "assert_never"): - assert_never = typing.assert_never -else: - def assert_never(__arg: Never) -> Never: - """Assert to the type checker that a line of code is unreachable. - - Example:: - - def int_or_str(arg: int | str) -> None: - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - assert_never(arg) - - If a type checker finds that a call to assert_never() is - reachable, it will emit an error. - - At runtime, this throws an exception when called. - - """ - raise AssertionError("Expected code to be unreachable") - - -if sys.version_info >= (3, 12): - # dataclass_transform exists in 3.11 but lacks the frozen_default parameter - dataclass_transform = typing.dataclass_transform -else: - def dataclass_transform( - *, - eq_default: bool = True, - order_default: bool = False, - kw_only_default: bool = False, - frozen_default: bool = False, - field_specifiers: typing.Tuple[ - typing.Union[typing.Type[typing.Any], typing.Callable[..., typing.Any]], - ... - ] = (), - **kwargs: typing.Any, - ) -> typing.Callable[[T], T]: - """Decorator that marks a function, class, or metaclass as providing - dataclass-like behavior. - - Example: - - from pip._vendor.typing_extensions import dataclass_transform - - _T = TypeVar("_T") - - # Used on a decorator function - @dataclass_transform() - def create_model(cls: type[_T]) -> type[_T]: - ... - return cls - - @create_model - class CustomerModel: - id: int - name: str - - # Used on a base class - @dataclass_transform() - class ModelBase: ... - - class CustomerModel(ModelBase): - id: int - name: str - - # Used on a metaclass - @dataclass_transform() - class ModelMeta(type): ... - - class ModelBase(metaclass=ModelMeta): ... - - class CustomerModel(ModelBase): - id: int - name: str - - Each of the ``CustomerModel`` classes defined in this example will now - behave similarly to a dataclass created with the ``@dataclasses.dataclass`` - decorator. For example, the type checker will synthesize an ``__init__`` - method. 
- - The arguments to this decorator can be used to customize this behavior: - - ``eq_default`` indicates whether the ``eq`` parameter is assumed to be - True or False if it is omitted by the caller. - - ``order_default`` indicates whether the ``order`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``kw_only_default`` indicates whether the ``kw_only`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``frozen_default`` indicates whether the ``frozen`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``field_specifiers`` specifies a static list of supported classes - or functions that describe fields, similar to ``dataclasses.field()``. - - At runtime, this decorator records its arguments in the - ``__dataclass_transform__`` attribute on the decorated object. - - See PEP 681 for details. - - """ - def decorator(cls_or_fn): - cls_or_fn.__dataclass_transform__ = { - "eq_default": eq_default, - "order_default": order_default, - "kw_only_default": kw_only_default, - "frozen_default": frozen_default, - "field_specifiers": field_specifiers, - "kwargs": kwargs, - } - return cls_or_fn - return decorator - - -if hasattr(typing, "override"): - override = typing.override -else: - _F = typing.TypeVar("_F", bound=typing.Callable[..., typing.Any]) - - def override(__arg: _F) -> _F: - """Indicate that a method is intended to override a method in a base class. - - Usage: - - class Base: - def method(self) -> None: ... - pass - - class Child(Base): - @override - def method(self) -> None: - super().method() - - When this decorator is applied to a method, the type checker will - validate that it overrides a method with the same name on a base class. - This helps prevent bugs that may occur when a base class is changed - without an equivalent change to a child class. - - There is no runtime checking of these properties. The decorator - sets the ``__override__`` attribute to ``True`` on the decorated object - to allow runtime introspection. - - See PEP 698 for details. - - """ - try: - __arg.__override__ = True - except (AttributeError, TypeError): - # Skip the attribute silently if it is not writable. - # AttributeError happens if the object has __slots__ or a - # read-only property, TypeError if it's a builtin class. - pass - return __arg - - -if hasattr(typing, "deprecated"): - deprecated = typing.deprecated -else: - _T = typing.TypeVar("_T") - - def deprecated( - __msg: str, - *, - category: typing.Optional[typing.Type[Warning]] = DeprecationWarning, - stacklevel: int = 1, - ) -> typing.Callable[[_T], _T]: - """Indicate that a class, function or overload is deprecated. - - Usage: - - @deprecated("Use B instead") - class A: - pass - - @deprecated("Use g instead") - def f(): - pass - - @overload - @deprecated("int support is deprecated") - def g(x: int) -> int: ... - @overload - def g(x: str) -> int: ... - - When this decorator is applied to an object, the type checker - will generate a diagnostic on usage of the deprecated object. - - No runtime warning is issued. The decorator sets the ``__deprecated__`` - attribute on the decorated object to the deprecation message - passed to the decorator. If applied to an overload, the decorator - must be after the ``@overload`` decorator for the attribute to - exist on the overload as returned by ``get_overloads()``. - - See PEP 702 for details. 
- - """ - def decorator(__arg: _T) -> _T: - if category is None: - __arg.__deprecated__ = __msg - return __arg - elif isinstance(__arg, type): - original_new = __arg.__new__ - has_init = __arg.__init__ is not object.__init__ - - @functools.wraps(original_new) - def __new__(cls, *args, **kwargs): - warnings.warn(__msg, category=category, stacklevel=stacklevel + 1) - # Mirrors a similar check in object.__new__. - if not has_init and (args or kwargs): - raise TypeError(f"{cls.__name__}() takes no arguments") - if original_new is not object.__new__: - return original_new(cls, *args, **kwargs) - else: - return original_new(cls) - - __arg.__new__ = staticmethod(__new__) - __arg.__deprecated__ = __new__.__deprecated__ = __msg - return __arg - elif callable(__arg): - @functools.wraps(__arg) - def wrapper(*args, **kwargs): - warnings.warn(__msg, category=category, stacklevel=stacklevel + 1) - return __arg(*args, **kwargs) - - __arg.__deprecated__ = wrapper.__deprecated__ = __msg - return wrapper - else: - raise TypeError( - "@deprecated decorator with non-None category must be applied to " - f"a class or callable, not {__arg!r}" - ) - - return decorator - - -# We have to do some monkey patching to deal with the dual nature of -# Unpack/TypeVarTuple: -# - We want Unpack to be a kind of TypeVar so it gets accepted in -# Generic[Unpack[Ts]] -# - We want it to *not* be treated as a TypeVar for the purposes of -# counting generic parameters, so that when we subscript a generic, -# the runtime doesn't try to substitute the Unpack with the subscripted type. -if not hasattr(typing, "TypeVarTuple"): - typing._collect_type_vars = _collect_type_vars - typing._check_generic = _check_generic - - -# Backport typing.NamedTuple as it exists in Python 3.11. -# In 3.11, the ability to define generic `NamedTuple`s was supported. -# This was explicitly disallowed in 3.9-3.10, and only half-worked in <=3.8. 
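-# On 3.11+ the stdlib class is re-exported unchanged; older versions fall back to the
-# reimplementation below, built on collections.namedtuple plus a custom metaclass that
-# supports the class-based syntax.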
-if sys.version_info >= (3, 11): - NamedTuple = typing.NamedTuple -else: - def _caller(): - try: - return sys._getframe(2).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): # For platforms without _getframe() - return None - - def _make_nmtuple(name, types, module, defaults=()): - fields = [n for n, t in types] - annotations = {n: typing._type_check(t, f"field {n} annotation must be a type") - for n, t in types} - nm_tpl = collections.namedtuple(name, fields, - defaults=defaults, module=module) - nm_tpl.__annotations__ = nm_tpl.__new__.__annotations__ = annotations - # The `_field_types` attribute was removed in 3.9; - # in earlier versions, it is the same as the `__annotations__` attribute - if sys.version_info < (3, 9): - nm_tpl._field_types = annotations - return nm_tpl - - _prohibited_namedtuple_fields = typing._prohibited - _special_namedtuple_fields = frozenset({'__module__', '__name__', '__annotations__'}) - - class _NamedTupleMeta(type): - def __new__(cls, typename, bases, ns): - assert _NamedTuple in bases - for base in bases: - if base is not _NamedTuple and base is not typing.Generic: - raise TypeError( - 'can only inherit from a NamedTuple type and Generic') - bases = tuple(tuple if base is _NamedTuple else base for base in bases) - types = ns.get('__annotations__', {}) - default_names = [] - for field_name in types: - if field_name in ns: - default_names.append(field_name) - elif default_names: - raise TypeError(f"Non-default namedtuple field {field_name} " - f"cannot follow default field" - f"{'s' if len(default_names) > 1 else ''} " - f"{', '.join(default_names)}") - nm_tpl = _make_nmtuple( - typename, types.items(), - defaults=[ns[n] for n in default_names], - module=ns['__module__'] - ) - nm_tpl.__bases__ = bases - if typing.Generic in bases: - class_getitem = typing.Generic.__class_getitem__.__func__ - nm_tpl.__class_getitem__ = classmethod(class_getitem) - # update from user namespace without overriding special namedtuple attributes - for key in ns: - if key in _prohibited_namedtuple_fields: - raise AttributeError("Cannot overwrite NamedTuple attribute " + key) - elif key not in _special_namedtuple_fields and key not in nm_tpl._fields: - setattr(nm_tpl, key, ns[key]) - if typing.Generic in bases: - nm_tpl.__init_subclass__() - return nm_tpl - - def NamedTuple(__typename, __fields=None, **kwargs): - if __fields is None: - __fields = kwargs.items() - elif kwargs: - raise TypeError("Either list of fields or keywords" - " can be provided to NamedTuple, not both") - return _make_nmtuple(__typename, __fields, module=_caller()) - - NamedTuple.__doc__ = typing.NamedTuple.__doc__ - _NamedTuple = type.__new__(_NamedTupleMeta, 'NamedTuple', (), {}) - - # On 3.8+, alter the signature so that it matches typing.NamedTuple. - # The signature of typing.NamedTuple on >=3.8 is invalid syntax in Python 3.7, - # so just leave the signature as it is on 3.7. 
- if sys.version_info >= (3, 8): - NamedTuple.__text_signature__ = '(typename, fields=None, /, **kwargs)' - - def _namedtuple_mro_entries(bases): - assert NamedTuple in bases - return (_NamedTuple,) - - NamedTuple.__mro_entries__ = _namedtuple_mro_entries diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/tls_pool.h b/spaces/CVPR/LIVE/thrust/thrust/mr/tls_pool.h deleted file mode 100644 index c732f022f74c29eb71a9cbe1335c9f0177becdc8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/mr/tls_pool.h +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file tls_pool.h - * \brief A function wrapping a thread local instance of a \p unsynchronized_pool_resource. - */ - -#pragma once - -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include - -namespace thrust -{ -namespace mr -{ - -/*! \addtogroup memory_management Memory Management - * \addtogroup memory_resources Memory Resources - * \ingroup memory_resources - * \{ - */ - -/*! Potentially constructs, if not yet created, and then returns the address of a thread-local \p unsynchronized_pool_resource, - * - * \tparam Upstream the template argument to the pool template - * \param upstream the argument to the constructor, if invoked - */ -template -__host__ -thrust::mr::unsynchronized_pool_resource & tls_pool(Upstream * upstream = NULL) -{ - static thread_local auto adaptor = [&]{ - assert(upstream); - return thrust::mr::unsynchronized_pool_resource(upstream); - }(); - - return adaptor; -} - -/*! \} - */ - -} // end mr -} // end thrust - -#endif // THRUST_CPP_DIALECT >= 2011 - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/cross_system.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/cross_system.h deleted file mode 100644 index f89f3dba8d3c9c07e259e0aba3ed7aed6dfa1f54..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/cross_system.h +++ /dev/null @@ -1,344 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace cuda_cub { - - template - struct cross_system : execution_policy > - { - typedef thrust::execution_policy policy1; - typedef thrust::execution_policy policy2; - - policy1 &sys1; - policy2 &sys2; - - inline __host__ __device__ - cross_system(policy1 &sys1, policy2 &sys2) : sys1(sys1), sys2(sys2) {} - - inline __host__ __device__ - cross_system rotate() const - { - return cross_system(sys2, sys1); - } - }; - -#if THRUST_CPP_DIALECT >= 2011 - // Device to host. - template - THRUST_CONSTEXPR __host__ __device__ - auto direction_of_copy( - thrust::system::cuda::execution_policy const& - , thrust::cpp::execution_policy const& - ) - THRUST_DECLTYPE_RETURNS( - thrust::detail::integral_constant< - cudaMemcpyKind, cudaMemcpyDeviceToHost - >{} - ) - - // Host to device. - template - THRUST_CONSTEXPR __host__ __device__ - auto direction_of_copy( - thrust::cpp::execution_policy const& - , thrust::system::cuda::execution_policy const& - ) - THRUST_DECLTYPE_RETURNS( - thrust::detail::integral_constant< - cudaMemcpyKind, cudaMemcpyHostToDevice - >{} - ) - - // Device to device. - template - THRUST_CONSTEXPR __host__ __device__ - auto direction_of_copy( - thrust::system::cuda::execution_policy const& - , thrust::system::cuda::execution_policy const& - ) - THRUST_DECLTYPE_RETURNS( - thrust::detail::integral_constant< - cudaMemcpyKind, cudaMemcpyDeviceToDevice - >{} - ) - - // Device to device. 
- template - THRUST_CONSTEXPR __host__ __device__ - auto direction_of_copy(execution_policy const &) - THRUST_DECLTYPE_RETURNS( - thrust::detail::integral_constant< - cudaMemcpyKind, cudaMemcpyDeviceToDevice - >{} - ) - - template - THRUST_CONSTEXPR __host__ __device__ - auto direction_of_copy( - execution_policy> const &systems - ) - THRUST_DECLTYPE_RETURNS( - direction_of_copy( - derived_cast(derived_cast(systems).sys1) - , derived_cast(derived_cast(systems).sys2) - ) - ) - - template (), - std::declval()))> - THRUST_CONSTEXPR __host__ __device__ - auto is_device_to_host_copy( - ExecutionPolicy0 const& exec0 - , ExecutionPolicy1 const& exec1 - ) - noexcept -> - thrust::detail::integral_constant< - bool, cudaMemcpyDeviceToHost == Direction::value - > - { - return {}; - } - - template ()))> - THRUST_CONSTEXPR __host__ __device__ - auto is_device_to_host_copy(ExecutionPolicy const& exec) - noexcept -> - thrust::detail::integral_constant< - bool, cudaMemcpyDeviceToHost == Direction::value - > - { - return {}; - } - - template (), - std::declval()))> - THRUST_CONSTEXPR __host__ __device__ - auto is_host_to_device_copy( - ExecutionPolicy0 const& exec0 - , ExecutionPolicy1 const& exec1 - ) - noexcept -> - thrust::detail::integral_constant< - bool, cudaMemcpyHostToDevice == Direction::value - > - { - return {}; - } - - template ()))> - THRUST_CONSTEXPR __host__ __device__ - auto is_host_to_device_copy(ExecutionPolicy const& exec) - noexcept -> - thrust::detail::integral_constant< - bool, cudaMemcpyHostToDevice == Direction::value - > - { - return {}; - } - - template (), - std::declval()))> - THRUST_CONSTEXPR __host__ __device__ - auto is_device_to_device_copy( - ExecutionPolicy0 const& exec0 - , ExecutionPolicy1 const& exec1 - ) - noexcept -> - thrust::detail::integral_constant< - bool, cudaMemcpyDeviceToDevice == Direction::value - > - { - return {}; - } - - template ()))> - THRUST_CONSTEXPR __host__ __device__ - auto is_device_to_device_copy(ExecutionPolicy const& exec) - noexcept -> - thrust::detail::integral_constant< - bool, cudaMemcpyDeviceToDevice == Direction::value - > - { - return {}; - } - - ///////////////////////////////////////////////////////////////////////////// - - // Device to host. - template - __host__ __device__ - auto - select_device_system(thrust::cuda::execution_policy &sys1, - thrust::execution_policy &) - THRUST_DECLTYPE_RETURNS(sys1) - - // Device to host. - template - __host__ __device__ - auto - select_device_system(thrust::cuda::execution_policy const &sys1, - thrust::execution_policy const &) - THRUST_DECLTYPE_RETURNS(sys1) - - // Host to device. - template - __host__ __device__ - auto - select_device_system(thrust::execution_policy &, - thrust::cuda::execution_policy &sys2) - THRUST_DECLTYPE_RETURNS(sys2) - - // Host to device. - template - __host__ __device__ - auto - select_device_system(thrust::execution_policy const &, - thrust::cuda::execution_policy const &sys2) - THRUST_DECLTYPE_RETURNS(sys2) - - // Device to device. - template - __host__ __device__ - auto - select_device_system(thrust::cuda::execution_policy &sys1, - thrust::cuda::execution_policy &) - THRUST_DECLTYPE_RETURNS(sys1) - - // Device to device. - template - __host__ __device__ - auto - select_device_system(thrust::cuda::execution_policy const &sys1, - thrust::cuda::execution_policy const &) - THRUST_DECLTYPE_RETURNS(sys1) - - ///////////////////////////////////////////////////////////////////////////// - - // Device to host. 
- template - __host__ __device__ - auto - select_host_system(thrust::cuda::execution_policy &, - thrust::execution_policy &sys2) - THRUST_DECLTYPE_RETURNS(sys2) - - // Device to host. - template - __host__ __device__ - auto - select_host_system(thrust::cuda::execution_policy const &, - thrust::execution_policy const &sys2) - THRUST_DECLTYPE_RETURNS(sys2) - - // Host to device. - template - __host__ __device__ - auto - select_host_system(thrust::execution_policy &sys1, - thrust::cuda::execution_policy &) - THRUST_DECLTYPE_RETURNS(sys1) - - // Host to device. - template - __host__ __device__ - auto - select_host_system(thrust::execution_policy const &sys1, - thrust::cuda::execution_policy const &) - THRUST_DECLTYPE_RETURNS(sys1) - - // Device to device. - template - __host__ __device__ - auto - select_host_system(thrust::execution_policy &sys1, - thrust::execution_policy &) - THRUST_DECLTYPE_RETURNS(sys1) - - // Device to device. - template - __host__ __device__ - auto - select_host_system(thrust::execution_policy const &sys1, - thrust::execution_policy const &) - THRUST_DECLTYPE_RETURNS(sys1) -#endif - - // Device to host. - template - __host__ __device__ - cross_system - select_system(execution_policy const & sys1, - thrust::cpp::execution_policy const &sys2) - { - thrust::execution_policy & non_const_sys1 = const_cast &>(sys1); - thrust::cpp::execution_policy &non_const_sys2 = const_cast &>(sys2); - return cross_system(non_const_sys1, non_const_sys2); - } - - // Host to device. - template - __host__ __device__ - cross_system - select_system(thrust::cpp::execution_policy const &sys1, - execution_policy const & sys2) - { - thrust::cpp::execution_policy &non_const_sys1 = const_cast &>(sys1); - thrust::execution_policy & non_const_sys2 = const_cast &>(sys2); - return cross_system(non_const_sys1, non_const_sys2); - } - -} // namespace cuda_cub -} // end namespace thrust - diff --git a/spaces/CVPR/WALT/mmcv_custom/runner/__init__.py b/spaces/CVPR/WALT/mmcv_custom/runner/__init__.py deleted file mode 100644 index c701cb016abe470611830dc960999970738352bb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmcv_custom/runner/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. -from .checkpoint import save_checkpoint -from .epoch_based_runner import EpochBasedRunnerAmp - - -__all__ = [ - 'EpochBasedRunnerAmp', 'save_checkpoint' -] diff --git a/spaces/CVPR/WALT/mmdet/core/anchor/utils.py b/spaces/CVPR/WALT/mmdet/core/anchor/utils.py deleted file mode 100644 index ab9b53f37f7be1f52fe63c5e53df64ac1303b9e0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/anchor/utils.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch - - -def images_to_levels(target, num_levels): - """Convert targets by image to targets by feature level. - - [target_img0, target_img1] -> [target_level0, target_level1, ...] - """ - target = torch.stack(target, 0) - level_targets = [] - start = 0 - for n in num_levels: - end = start + n - # level_targets.append(target[:, start:end].squeeze(0)) - level_targets.append(target[:, start:end]) - start = end - return level_targets - - -def anchor_inside_flags(flat_anchors, - valid_flags, - img_shape, - allowed_border=0): - """Check whether the anchors are inside the border. - - Args: - flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4). - valid_flags (torch.Tensor): An existing valid flags of anchors. - img_shape (tuple(int)): Shape of current image. 
- allowed_border (int, optional): The border to allow the valid anchor. - Defaults to 0. - - Returns: - torch.Tensor: Flags indicating whether the anchors are inside a \ - valid range. - """ - img_h, img_w = img_shape[:2] - if allowed_border >= 0: - inside_flags = valid_flags & \ - (flat_anchors[:, 0] >= -allowed_border) & \ - (flat_anchors[:, 1] >= -allowed_border) & \ - (flat_anchors[:, 2] < img_w + allowed_border) & \ - (flat_anchors[:, 3] < img_h + allowed_border) - else: - inside_flags = valid_flags - return inside_flags - - -def calc_region(bbox, ratio, featmap_size=None): - """Calculate a proportional bbox region. - - The bbox center are fixed and the new h' and w' is h * ratio and w * ratio. - - Args: - bbox (Tensor): Bboxes to calculate regions, shape (n, 4). - ratio (float): Ratio of the output region. - featmap_size (tuple): Feature map size used for clipping the boundary. - - Returns: - tuple: x1, y1, x2, y2 - """ - x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long() - y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long() - x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long() - y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long() - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) diff --git a/spaces/CVPR/lama-example/saicinpainting/training/modules/multiscale.py b/spaces/CVPR/lama-example/saicinpainting/training/modules/multiscale.py deleted file mode 100644 index 65f0a54925593e9da8106bfc6d65a4098ce001d7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/training/modules/multiscale.py +++ /dev/null @@ -1,244 +0,0 @@ -from typing import List, Tuple, Union, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from saicinpainting.training.modules.base import get_conv_block_ctor, get_activation -from saicinpainting.training.modules.pix2pixhd import ResnetBlock - - -class ResNetHead(nn.Module): - def __init__(self, input_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True)): - assert (n_blocks >= 0) - super(ResNetHead, self).__init__() - - conv_layer = get_conv_block_ctor(conv_kind) - - model = [nn.ReflectionPad2d(3), - conv_layer(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - activation] - - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [conv_layer(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1), - norm_layer(ngf * mult * 2), - activation] - - mult = 2 ** n_downsampling - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class ResNetTail(nn.Module): - def __init__(self, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - add_in_proj=None): - assert (n_blocks >= 0) - super(ResNetTail, self).__init__() - - mult = 2 ** n_downsampling - - model = [] - - if add_in_proj is not None: - model.append(nn.Conv2d(add_in_proj, ngf 
* mult, kernel_size=1)) - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, - output_padding=1), - up_norm_layer(int(ngf * mult / 2)), - up_activation] - self.model = nn.Sequential(*model) - - out_layers = [] - for _ in range(out_extra_layers_n): - out_layers += [nn.Conv2d(ngf, ngf, kernel_size=1, padding=0), - up_norm_layer(ngf), - up_activation] - out_layers += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - - if add_out_act: - out_layers.append(get_activation('tanh' if add_out_act is True else add_out_act)) - - self.out_proj = nn.Sequential(*out_layers) - - def forward(self, input, return_last_act=False): - features = self.model(input) - out = self.out_proj(features) - if return_last_act: - return out, features - else: - return out - - -class MultiscaleResNet(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=2, n_blocks_head=2, n_blocks_tail=6, n_scales=3, - norm_layer=nn.BatchNorm2d, padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - out_cumulative=False, return_only_hr=False): - super().__init__() - - self.heads = nn.ModuleList([ResNetHead(input_nc, ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_head, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation) - for i in range(n_scales)]) - tail_in_feats = ngf * (2 ** n_downsampling) + ngf - self.tails = nn.ModuleList([ResNetTail(output_nc, - ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_tail, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation, up_norm_layer=up_norm_layer, - up_activation=up_activation, add_out_act=add_out_act, - out_extra_layers_n=out_extra_layers_n, - add_in_proj=None if (i == n_scales - 1) else tail_in_feats) - for i in range(n_scales)]) - - self.out_cumulative = out_cumulative - self.return_only_hr = return_only_hr - - @property - def num_scales(self): - return len(self.heads) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> Union[torch.Tensor, List[torch.Tensor]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to take at input - :return: Depending on return_only_hr: - True: Only the most HR output - False: List of outputs of different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.heads) == len(ms_inputs), (len(self.heads), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.heads), (len(self.heads), len(ms_inputs), smallest_scales_num) - - cur_heads = self.heads[-smallest_scales_num:] - ms_features = [cur_head(cur_inp) for cur_head, cur_inp in zip(cur_heads, ms_inputs)] - - all_outputs = [] - prev_tail_features = None - for i in range(len(ms_features)): - scale_i = -i - 1 - - cur_tail_input = ms_features[-i - 1] - if prev_tail_features is not None: - if prev_tail_features.shape != cur_tail_input.shape: - prev_tail_features = 
F.interpolate(prev_tail_features, size=cur_tail_input.shape[2:], - mode='bilinear', align_corners=False) - cur_tail_input = torch.cat((cur_tail_input, prev_tail_features), dim=1) - - cur_out, cur_tail_feats = self.tails[scale_i](cur_tail_input, return_last_act=True) - - prev_tail_features = cur_tail_feats - all_outputs.append(cur_out) - - if self.out_cumulative: - all_outputs_cum = [all_outputs[0]] - for i in range(1, len(ms_features)): - cur_out = all_outputs[i] - cur_out_cum = cur_out + F.interpolate(all_outputs_cum[-1], size=cur_out.shape[2:], - mode='bilinear', align_corners=False) - all_outputs_cum.append(cur_out_cum) - all_outputs = all_outputs_cum - - if self.return_only_hr: - return all_outputs[-1] - else: - return all_outputs[::-1] - - -class MultiscaleDiscriminatorSimple(nn.Module): - def __init__(self, ms_impl): - super().__init__() - self.ms_impl = nn.ModuleList(ms_impl) - - @property - def num_scales(self): - return len(self.ms_impl) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> List[Tuple[torch.Tensor, List[torch.Tensor]]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to take at input - :return: List of pairs (prediction, features) for different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.ms_impl) == len(ms_inputs), (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.ms_impl), \ - (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - - return [cur_discr(cur_input) for cur_discr, cur_input in zip(self.ms_impl[-smallest_scales_num:], ms_inputs)] - - -class SingleToMultiScaleInputMixin: - def forward(self, x: torch.Tensor) -> List: - orig_height, orig_width = x.shape[2:] - factors = [2 ** i for i in range(self.num_scales)] - ms_inputs = [F.interpolate(x, size=(orig_height // f, orig_width // f), mode='bilinear', align_corners=False) - for f in factors] - return super().forward(ms_inputs) - - -class GeneratorMultiToSingleOutputMixin: - def forward(self, x): - return super().forward(x)[0] - - -class DiscriminatorMultiToSingleOutputMixin: - def forward(self, x): - out_feat_tuples = super().forward(x) - return out_feat_tuples[0][0], [f for _, flist in out_feat_tuples for f in flist] - - -class DiscriminatorMultiToSingleOutputStackedMixin: - def __init__(self, *args, return_feats_only_levels=None, **kwargs): - super().__init__(*args, **kwargs) - self.return_feats_only_levels = return_feats_only_levels - - def forward(self, x): - out_feat_tuples = super().forward(x) - outs = [out for out, _ in out_feat_tuples] - scaled_outs = [outs[0]] + [F.interpolate(cur_out, size=outs[0].shape[-2:], - mode='bilinear', align_corners=False) - for cur_out in outs[1:]] - out = torch.cat(scaled_outs, dim=1) - if self.return_feats_only_levels is not None: - feat_lists = [out_feat_tuples[i][1] for i in self.return_feats_only_levels] - else: - feat_lists = [flist for _, flist in out_feat_tuples] - feats = [f for flist in feat_lists for f in flist] - return out, feats - - -class MultiscaleDiscrSingleInput(SingleToMultiScaleInputMixin, DiscriminatorMultiToSingleOutputStackedMixin, MultiscaleDiscriminatorSimple): - pass - - -class MultiscaleResNetSingle(GeneratorMultiToSingleOutputMixin, SingleToMultiScaleInputMixin, MultiscaleResNet): - pass diff --git 
a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/lvis.py b/spaces/CVPR/regionclip-demo/detectron2/data/datasets/lvis.py deleted file mode 100644 index 2248129e798baefda037a8dddf7abe3c8f15dd40..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/lvis.py +++ /dev/null @@ -1,357 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import os -from fvcore.common.timer import Timer - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.structures import BoxMode -from detectron2.utils.file_io import PathManager - -from .builtin_meta import _get_coco_instances_meta -from .lvis_v0_5_categories import LVIS_CATEGORIES as LVIS_V0_5_CATEGORIES -from .lvis_v1_categories import LVIS_CATEGORIES as LVIS_V1_CATEGORIES - -import torch -import numpy as np -""" -This file contains functions to parse LVIS-format annotations into dicts in the -"Detectron2 format". -""" - -logger = logging.getLogger(__name__) - -__all__ = ["load_lvis_json", "register_lvis_instances", "get_lvis_instances_meta"] - - -def register_lvis_instances(name, metadata, json_file, image_root): - """ - Register a dataset in LVIS's json annotation format for instance detection and segmentation. - - Args: - name (str): a name that identifies the dataset, e.g. "lvis_v0.5_train". - metadata (dict): extra metadata associated with this dataset. It can be an empty dict. - json_file (str): path to the json instance annotation file. - image_root (str or path-like): directory which contains all the images. - """ - DatasetCatalog.register(name, lambda: load_lvis_json(json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, evaluator_type="lvis", **metadata - ) - - -def load_lvis_json_original(json_file, image_root, dataset_name=None, filter_open_cls=True, clip_gt_crop=True, max_gt_per_img=500): - """ - Load a json file in LVIS's annotation format. - - Args: - json_file (str): full path to the LVIS json annotation file. - image_root (str): the directory where the images in this json file exists. - dataset_name (str): the name of the dataset (e.g., "lvis_v0.5_train"). - If provided, this function will put "thing_classes" into the metadata - associated with this dataset. - filter_open_cls: open-set setting, filter the open-set categories during training - clip_gt_crop: must filter images with empty annotations or too many GT bbox, - even if in testing (eg, use CLIP on GT regions) - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - - Notes: - 1. This function does not read the image files. - The results do not have the "image" field. 
- """ - from lvis import LVIS - - if 'train' in dataset_name: #'zeroshot' in dataset_name and 'train' in dataset_name: # openset setting, filter the novel classes during training - filter_open_cls = True - else: - filter_open_cls = False - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - if dataset_name is not None: - meta = get_lvis_instances_meta(dataset_name) - MetadataCatalog.get(dataset_name).set(**meta) - - # sort indices for reproducible results - img_ids = sorted(lvis_api.imgs.keys()) - # imgs is a list of dicts, each looks something like: - # {'license': 4, - # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg', - # 'file_name': 'COCO_val2014_000000001268.jpg', - # 'height': 427, - # 'width': 640, - # 'date_captured': '2013-11-17 05:57:24', - # 'id': 1268} - imgs = lvis_api.load_imgs(img_ids) - # anns is a list[list[dict]], where each dict is an annotation - # record for an object. The inner list enumerates the objects in an image - # and the outer list enumerates over images. Example of anns[0]: - # [{'segmentation': [[192.81, - # 247.09, - # ... - # 219.03, - # 249.06]], - # 'area': 1035.749, - # 'image_id': 1268, - # 'bbox': [192.81, 224.8, 74.73, 33.43], - # 'category_id': 16, - # 'id': 42986}, - # ...] - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - # Sanity check that each annotation has a unique id - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique".format( - json_file - ) - - imgs_anns = list(zip(imgs, anns)) - - logger.info("Loaded {} images in the LVIS format from {}".format(len(imgs_anns), json_file)) - - def get_file_name(img_root, img_dict): - # Determine the path including the split folder ("train2017", "val2017", "test2017") from - # the coco_url field. Example: - # 'coco_url': 'http://images.cocodataset.org/train2017/000000155379.jpg' - split_folder, file_name = img_dict["coco_url"].split("/")[-2:] - return os.path.join(img_root + split_folder, file_name) - - dataset_dicts = [] - cls_type_dict = {cls_meta['id']: cls_meta['frequency'] for cls_meta in lvis_api.dataset['categories']} # map cls id to cls type - area_dict = {'r': [], 'c': [], 'f': []} # calculate box area for each type of class - # import os - # from PIL import Image - # custom_img_path = 'datasets/epic_sample_frames' - # custom_img_list = [os.path.join(custom_img_path, item) for item in os.listdir(custom_img_path)] - # cnt = 0 - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - record["file_name"] = get_file_name(image_root, img_dict) - # record["file_name"] = custom_img_list[cnt]; cnt += 1; - # if cnt == 46: - # break # get_file_name(image_root, img_dict) - # img_file = Image.open(record["file_name"]) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - # record["height"] = img_file.size[1] # img_dict["height"] - # record["width"] = img_file.size[0] # img_dict["width"] - record["not_exhaustive_category_ids"] = img_dict.get("not_exhaustive_category_ids", []) - record["neg_category_ids"] = img_dict.get("neg_category_ids", []) - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - # Check that the image_id in this annotation is the same as - # the image_id we're looking at. 
- # This fails only when the data parsing logic or the annotation file is buggy. - assert anno["image_id"] == image_id - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - # LVIS data loader can be used to load COCO dataset categories. In this case `meta` - # variable will have a field with COCO-specific category mapping. - if dataset_name is not None and "thing_dataset_id_to_contiguous_id" in meta: - obj["category_id"] = meta["thing_dataset_id_to_contiguous_id"][anno["category_id"]] - else: - obj["category_id"] = anno["category_id"] - 1 # Convert 1-indexed to 0-indexed - obj['frequency'] = cls_type_dict[anno["category_id"]] # used for open-set filtering - if filter_open_cls: # filter categories for open-set training - if obj['frequency'] == 'r': - continue - area_dict[obj['frequency']].append(anno["bbox"][2] * anno["bbox"][3]) - - segm = anno["segmentation"] # list[list[float]] - # filter out invalid polygons (< 3 points) - valid_segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - assert len(segm) == len( - valid_segm - ), "Annotation contains an invalid polygon with < 3 points" - assert len(segm) > 0 - obj["segmentation"] = segm - objs.append(obj) - if (filter_open_cls or clip_gt_crop) and len(objs) == 0: # no annotation for this image - continue - record["annotations"] = objs - dataset_dicts.append(record) - - # For the training in open-set setting, map original category id to new category id number (base categories) - if filter_open_cls: - # get new category id in order - old_to_new = {} - for i in range(len(cls_type_dict)): - if cls_type_dict[i+1] != 'r': # cls_type_dict is 1-indexed - old_to_new[i] = len(old_to_new) - # map annotation to new category id - for record in dataset_dicts: - record.pop('not_exhaustive_category_ids') # won't be used - record.pop('neg_category_ids') # won't be used - for obj in record['annotations']: - obj['category_id'] = old_to_new[obj['category_id']] # 0-indexed id - assert obj['frequency'] != 'r' - logger.info("\n\nModel will be trained in the open-set setting! 
{} / {} categories are kept.\n".format(len(old_to_new),len(cls_type_dict))) - # calculate box area for each type of class - area_lst = np.array([0, 400, 1600, 2500, 5000, 10000, 22500, 224 * 224, 90000, 160000, 1e8]) - # rare_cls = np.histogram(np.array(area_dict['r']), bins=area_lst)[0] - # common_cls = np.histogram(np.array(area_dict['c']), bins=area_lst)[0] - # freq_cls = np.histogram(np.array(area_dict['f']), bins=area_lst)[0] - # print("rare classes: {}; \ncommon classes: {}; \nfrequent classes: {}".format(rare_cls/rare_cls.sum()*100, common_cls/common_cls.sum()*100, freq_cls/freq_cls.sum()*100)) - # # apply CLIP on GT regions: some images has large number of GT bbox (eg, 759), remove them, otherwise, OOM - if clip_gt_crop: - # len_num = sorted([len(item["annotations"]) for item in dataset_dicts], reverse=True) - dataset_dicts = sorted(dataset_dicts, key=lambda x: len(x["annotations"]), reverse=True) - for record in dataset_dicts: - record["annotations"] = record["annotations"][:max_gt_per_img] # only <10 / 20k images in test have >300 GT boxes - #dataset_dicts = sorted(dataset_dicts, key=lambda x: len(x["annotations"]))[:12] #[12000:14000] # - #dataset_dicts = sorted(dataset_dicts, key=lambda x: len(x["annotations"]))[-1200:-1000] - #eval_cls_acc(dataset_dicts, area_lst) - return dataset_dicts - -def load_lvis_json(json_file, image_root, dataset_name=None, filter_open_cls=True, clip_gt_crop=True, max_gt_per_img=500, custom_img_path='datasets/custom_images'): - """ - This is a tentitive function for loading custom images. - Given a folder of images (eg, 'datasets/custom_images'), load their meta data into a dictionary - """ - import os - from PIL import Image - custom_img_list = [os.path.join(custom_img_path, item) for item in os.listdir(custom_img_path)] - - dataset_dicts = [] - for f_i, file_name in enumerate(custom_img_list): - record = {} - record["file_name"] = file_name - img_file = Image.open(record["file_name"]) - record["height"] = img_file.size[1] - record["width"] = img_file.size[0] - record["image_id"] = f_i - - dataset_dicts.append(record) - - return dataset_dicts - -def eval_cls_acc(dataset_dicts, area_lst): - #pred_file = '/home/v-yiwuzhong/projects/detectron2-open-set/output/rcnn_gt_crop/vit/instances_predictions.pth' - #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_rcnn_resnet50_crop_regions_perclassnms/inference/instances_predictions.pth' - #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_rcnn_vitb32_crop_regions_perclassnms/inference/instances_predictions.pth' - #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_roifeatmap/inference/instances_predictions.pth' - #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_supmrcnnbaselinefpn/inference/instances_predictions.pth' - #pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_supmrcnnbaselinec4/inference/instances_predictions.pth' - pred_file = '/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/results/test_CLIP_fast_rcnn_resnet50_e1-3-3gtbox/inference/instances_predictions.pth' - predictions = torch.load(pred_file) - correct = 0 - wrong = 0 - area_threshold = area_lst[1:-1] # np.array([400, 1600, 2500, 5000, 10000, 22500, 224 * 224, 90000, 160000]) - acc_list = [[0, 0] for i in range(area_threshold.shape[0] + 1)] - small_cnt = 0 - for preds, 
gts in zip(predictions, dataset_dicts): - assert preds['image_id'] == gts['image_id'] # same image - #assert len(preds['instances']) == len(gts['annotations']) - box_seen = {} # keep a set for the predicted boxes that have been checked - for pred, gt in zip(preds['instances'], gts['annotations']): - if pred['bbox'][0] in box_seen: # duplicate box due to perclass NMS - continue - else: - box_seen[pred['bbox'][0]] = 1 - if np.sum(np.array(pred['bbox']) - np.array(gt['bbox'])) < 1.0: # same box - pass - else: # has been NMS and shuffled - for gt in gts['annotations']: - if np.sum(np.array(pred['bbox']) - np.array(gt['bbox'])) < 1.0: # same box - break - assert np.sum(np.array(pred['bbox']) - np.array(gt['bbox'])) < 1.0 # same box - this_area = gt['bbox'][2] * gt['bbox'][3] - block = (area_threshold < this_area).nonzero()[0].shape[0] - if pred['category_id'] == gt['category_id']: # matched - correct += 1 - acc_list[block][0] += 1 - else: - wrong += 1 - acc_list[block][1] += 1 - - print("\n\nGot correct {} and wrong {}. Accuracy is {} / {} = {}\n\n".format(correct,wrong,correct,correct+wrong,correct/(correct+wrong))) - block_acc = [100 * acc_list[i][0] / (acc_list[i][0] + acc_list[i][1]) for i in range(len(acc_list))] - block_acc = [round(i, 1) for i in block_acc] - print("Block accuracy: {}".format(block_acc)) - block_num = [acc_list[i][0] + acc_list[i][1] for i in range(len(acc_list))] - block_num = list(block_num / np.sum(block_num) * 100) - block_num = [round(i, 1) for i in block_num] - print("Block #instances: {}".format(block_num)) - return - -def get_lvis_instances_meta(dataset_name): - """ - Load LVIS metadata. - - Args: - dataset_name (str): LVIS dataset name without the split name (e.g., "lvis_v0.5"). - - Returns: - dict: LVIS metadata with keys: thing_classes - """ - if "cocofied" in dataset_name: - return _get_coco_instances_meta() - if "v0.5" in dataset_name: - return _get_lvis_instances_meta_v0_5() - elif "v1" in dataset_name: - return _get_lvis_instances_meta_v1() - raise ValueError("No built-in metadata for dataset {}".format(dataset_name)) - - -def _get_lvis_instances_meta_v0_5(): - assert len(LVIS_V0_5_CATEGORIES) == 1230 - cat_ids = [k["id"] for k in LVIS_V0_5_CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(LVIS_V0_5_CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["synonyms"][0] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - - -def _get_lvis_instances_meta_v1(): - assert len(LVIS_V1_CATEGORIES) == 1203 - cat_ids = [k["id"] for k in LVIS_V1_CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(LVIS_V1_CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["synonyms"][0] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - - -if __name__ == "__main__": - """ - Test the LVIS json dataset loader. 
- - Usage: - python -m detectron2.data.datasets.lvis \ - path/to/json path/to/image_root dataset_name vis_limit - """ - import sys - import numpy as np - from detectron2.utils.logger import setup_logger - from PIL import Image - import detectron2.data.datasets # noqa # add pre-defined metadata - from detectron2.utils.visualizer import Visualizer - - logger = setup_logger(name=__name__) - meta = MetadataCatalog.get(sys.argv[3]) - - dicts = load_lvis_json(sys.argv[1], sys.argv[2], sys.argv[3]) - logger.info("Done loading {} samples.".format(len(dicts))) - - dirname = "lvis-data-vis" - os.makedirs(dirname, exist_ok=True) - for d in dicts[: int(sys.argv[4])]: - img = np.array(Image.open(d["file_name"])) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) diff --git a/spaces/Cecil8352/vits-models/attentions.py b/spaces/Cecil8352/vits-models/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Cecil8352/vits-models/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, 
p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." 
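-      # Content-to-position term: project the queries onto the learned relative key
-      # embeddings, convert the relative-position logits to absolute positions, and
-      # add them to the content-based scores below.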
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/CofAI/chat/client/css/style.css b/spaces/CofAI/chat/client/css/style.css deleted file mode 100644 index 918cf83eb9a36bf07c861e4476c60af65f5bf91d..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/client/css/style.css +++ /dev/null @@ -1,18 +0,0 @@ -@import "./global.css"; -@import "./hljs.css"; -@import "./main.css"; -@import "./sidebar.css"; -@import "./conversation.css"; -@import "./message.css"; -@import "./stop-generating.css"; -@import "./typing.css"; -@import "./checkbox.css"; -@import "./label.css"; -@import "./button.css"; -@import "./buttons.css"; -@import "./dropdown.css"; -@import "./field.css"; -@import "./select.css"; -@import "./options.css"; -@import "./settings.css"; -@import "./message-input.css"; diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" 
"b/spaces/Cong723/gpt-academic-public/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" deleted file mode 100644 index e57f80f1d45bd3ec23837253848f7b32a5ccd751..0000000000000000000000000000000000000000 --- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" +++ /dev/null @@ -1,138 +0,0 @@ -import threading -from request_llm.bridge_all import predict_no_ui_long_connection -from toolbox import update_ui -from toolbox import CatchException, write_results_to_file, report_execption -from .crazy_utils import breakdown_txt_to_satisfy_token_limit - -def extract_code_block_carefully(txt): - splitted = txt.split('```') - n_code_block_seg = len(splitted) - 1 - if n_code_block_seg <= 1: return txt - # 剩下的情况都开头除去 ``` 结尾除去一次 ``` - txt_out = '```'.join(splitted[1:-1]) - return txt_out - - - -def break_txt_into_half_at_some_linebreak(txt): - lines = txt.split('\n') - n_lines = len(lines) - pre = lines[:(n_lines//2)] - post = lines[(n_lines//2):] - return "\n".join(pre), "\n".join(post) - - -@CatchException -def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port): - # 第1步:清空历史,以免输入溢出 - history = [] - - # 第2步:尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 第3步:集合文件 - import time, glob, os, shutil, re - os.makedirs('gpt_log/generated_english_version', exist_ok=True) - os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True) - file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \ - [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)] - # file_manifest = ['./toolbox.py'] - i_say_show_user_buffer = [] - - # 第4步:随便显示点什么防止卡顿的感觉 - for index, fp in enumerate(file_manifest): - # if 'test_project' in fp: continue - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}' - i_say_show_user_buffer.append(i_say_show_user) - chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - # 第5步:Token限制下的截断与处理 - MAX_TOKEN = 3000 - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=())) - - - # 第6步:任务函数 - mutable_return = [None for _ in file_manifest] - observe_window = [[""] for _ in file_manifest] - def thread_worker(fp,index): - if index > 10: - time.sleep(60) - print('Openai 限制免费用户每分钟20次请求,降低请求频率中。') - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```' - try: - gpt_say = "" - # 分解代码文件 - file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN) - for file_content_partial in file_content_breakdown: - i_say = i_say_template(fp, file_content_partial) - # # ** gpt request ** - gpt_say_partial = 
predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
-                gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
-                gpt_say += gpt_say_partial
-            mutable_return[index] = gpt_say
-        except ConnectionAbortedError as token_exceed_err:
-            print('至少一个线程任务Token溢出而失败', token_exceed_err)
-        except Exception as e:
-            print('至少一个线程任务意外失败', e)
-
-    # 第7步:所有线程同时开始执行任务函数
-    handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
-    for h in handles:
-        h.daemon = True
-        h.start()
-    chatbot.append(('开始了吗?', f'多线程操作已经开始'))
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-    # 第8步:循环轮询各个线程是否执行完毕
-    cnt = 0
-    while True:
-        cnt += 1
-        time.sleep(0.2)
-        th_alive = [h.is_alive() for h in handles]
-        if not any(th_alive): break
-        # 更好的UI视觉效果
-        observe_win = []
-        for thread_index, alive in enumerate(th_alive):
-            observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace('
','.....').replace('$','.')+"... ]") - stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)] - stat_str = ''.join(stat) - chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1))) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 第9步:把结果写入文件 - for index, h in enumerate(handles): - h.join() # 这里其实不需要join了,肯定已经都结束了 - fp = file_manifest[index] - gpt_say = mutable_return[index] - i_say_show_user = i_say_show_user_buffer[index] - - where_to_relocate = f'gpt_log/generated_english_version/{fp}' - if gpt_say is not None: - with open(where_to_relocate, 'w+', encoding='utf-8') as f: - f.write(gpt_say) - else: # 失败 - shutil.copyfile(file_manifest[index], where_to_relocate) - chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}')) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - time.sleep(1) - - # 第10步:备份一个文件 - res = write_results_to_file(history) - chatbot.append(("生成一份任务执行报告", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 diff --git a/spaces/Cvandi/remake/scripts/pytorch2onnx.py b/spaces/Cvandi/remake/scripts/pytorch2onnx.py deleted file mode 100644 index 09d99b2e0171265e70e7507ed8e882b616b449a1..0000000000000000000000000000000000000000 --- a/spaces/Cvandi/remake/scripts/pytorch2onnx.py +++ /dev/null @@ -1,36 +0,0 @@ -import argparse -import torch -import torch.onnx -from basicsr.archs.rrdbnet_arch import RRDBNet - - -def main(args): - # An instance of the model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - if args.params: - keyname = 'params' - else: - keyname = 'params_ema' - model.load_state_dict(torch.load(args.input)[keyname]) - # set the train mode to false since we will only run the forward pass. - model.train(False) - model.cpu().eval() - - # An example input - x = torch.rand(1, 3, 64, 64) - # Export the model - with torch.no_grad(): - torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True) - print(torch_out.shape) - - -if __name__ == '__main__': - """Convert pytorch model to onnx models""" - parser = argparse.ArgumentParser() - parser.add_argument( - '--input', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth', help='Input model path') - parser.add_argument('--output', type=str, default='realesrgan-x4.onnx', help='Output onnx path') - parser.add_argument('--params', action='store_false', help='Use params instead of params_ema') - args = parser.parse_args() - - main(args) diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/inference.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/inference.py deleted file mode 100644 index 9a2e3871e42fac9fcef3db00da626ec0386d68b2..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/inference.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import torch - -from maskrcnn_benchmark.modeling.box_coder import BoxCoder -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms -from maskrcnn_benchmark.structures.boxlist_ops import remove_small_boxes - -from ..utils import cat -from .utils import permute_and_flatten - -class RPNPostProcessor(torch.nn.Module): - """ - Performs post-processing on the outputs of the RPN boxes, before feeding the - proposals to the heads - """ - - def __init__( - self, - pre_nms_top_n, - post_nms_top_n, - nms_thresh, - min_size, - box_coder=None, - fpn_post_nms_top_n=None, - ): - """ - Arguments: - pre_nms_top_n (int) - post_nms_top_n (int) - nms_thresh (float) - min_size (int) - box_coder (BoxCoder) - fpn_post_nms_top_n (int) - """ - super(RPNPostProcessor, self).__init__() - self.pre_nms_top_n = pre_nms_top_n # 12000 - self.post_nms_top_n = post_nms_top_n # 2000 - self.nms_thresh = nms_thresh # 0.7 - self.min_size = min_size # 0 - - if box_coder is None: - box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0)) - self.box_coder = box_coder - - if fpn_post_nms_top_n is None: - fpn_post_nms_top_n = post_nms_top_n - self.fpn_post_nms_top_n = fpn_post_nms_top_n # 2000 - - def add_gt_proposals(self, proposals, targets): - """ - Arguments: - proposals: list[BoxList] - targets: list[BoxList] - """ - # Get the device we're operating on - device = proposals[0].bbox.device - - gt_boxes = [target.copy_with_fields([]) for target in targets] - - # later cat of bbox requires all fields to be present for all bbox - # so we need to add a dummy for objectness that's missing - for gt_box in gt_boxes: - gt_box.add_field("objectness", torch.ones(len(gt_box), device=device)) - - proposals = [ - cat_boxlist((proposal, gt_box)) - for proposal, gt_box in zip(proposals, gt_boxes) - ] - - return proposals - - def forward_for_single_feature_map(self, anchors, objectness, box_regression): - """ - Arguments: - anchors: list[BoxList] # [image,number,[n,4]] - objectness: tensor of size N, A, H, W - box_regression: tensor of size N, A * 4, H, W - """ - device = objectness.device - N, A, H, W = objectness.shape - # put in the same format as anchors - objectness = permute_and_flatten(objectness, N, A, 1, H, W).view(N, -1) # N H*W*A*1 - objectness = objectness.sigmoid() - box_regression = permute_and_flatten(box_regression, N, A, 18, H, W) # N H*W*A 4 - num_anchors = A * H * W # 391040 97760 - - pre_nms_top_n = min(self.pre_nms_top_n, num_anchors) #12000 - objectness, topk_idx = objectness.topk(pre_nms_top_n, dim=1, sorted=True) - # objectness = objectness.cpu() - batch_idx = torch.arange(N, device=device)[:, None] - box_regression = box_regression[batch_idx, topk_idx] - image_shapes = [box.size for box in anchors] - concat_anchors = torch.cat([a.bbox for a in anchors], dim=0) - concat_anchors = concat_anchors.reshape(N, -1, 4)[batch_idx, topk_idx] - proposals = self.box_coder.decode_iou( - box_regression.view(-1, 18), concat_anchors.view(-1, 4) - ) - - proposals = proposals.view(N, -1, 4) - - result = [] - for proposal, score, im_shape in zip(proposals, objectness, image_shapes): - boxlist = BoxList(proposal, im_shape, mode="xyxy") - boxlist.add_field("objectness", score) - boxlist = boxlist.clip_to_image(remove_empty=False) - boxlist = remove_small_boxes(boxlist, self.min_size) - boxlist = boxlist_nms( - boxlist, - self.nms_thresh, - max_proposals=self.post_nms_top_n, - score_field="objectness", - ) 
- result.append(boxlist) - return result - - def forward(self, anchors, objectness, box_regression, targets=None): - """ - Arguments: - anchors: list[list[BoxList]] - objectness: list[tensor] - box_regression: list[tensor] - - Returns: - boxlists (list[BoxList]): the post-processed anchors, after - applying box decoding and NMS - """ - sampled_boxes = [] - num_levels = len(objectness) # classification - anchors = list(zip(*anchors)) # [image,number,[n,4]] - # i =-1 - for a, o, b in zip(anchors, objectness, box_regression): - sampled_boxes.append(self.forward_for_single_feature_map(a, o, b)) - - - boxlists = list(zip(*sampled_boxes)) - boxlists = [cat_boxlist(boxlist) for boxlist in boxlists] - - if num_levels > 1: - boxlists = self.select_over_all_levels(boxlists) - - # append ground-truth bboxes to proposals - if self.training and targets is not None: - boxlists = self.add_gt_proposals(boxlists, targets) - - return boxlists - - def select_over_all_levels(self, boxlists): - num_images = len(boxlists) - # different behavior during training and during testing: - # during training, post_nms_top_n is over *all* the proposals combined, while - # during testing, it is over the proposals for each image - # TODO resolve this difference and make it consistent. It should be per image, - # and not per batch - if self.training: - objectness = torch.cat( - [boxlist.get_field("objectness") for boxlist in boxlists], dim=0 - ) - box_sizes = [len(boxlist) for boxlist in boxlists] - post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness)) - _, inds_sorted = torch.topk(objectness, post_nms_top_n, dim=0, sorted=True) - inds_mask = torch.zeros_like(objectness, dtype=torch.uint8) - inds_mask[inds_sorted] = 1 - inds_mask = inds_mask.split(box_sizes) - for i in range(num_images): - boxlists[i] = boxlists[i][inds_mask[i]] - else: - for i in range(num_images): - objectness = boxlists[i].get_field("objectness") - post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness)) - _, inds_sorted = torch.topk( - objectness, post_nms_top_n, dim=0, sorted=True - ) - boxlists[i] = boxlists[i][inds_sorted] - return boxlists - - -def make_rpn_postprocessor(config, rpn_box_coder, is_train): - fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN # 2000 - if not is_train: - fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST - - pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TRAIN # 12000 - post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TRAIN # 2000 - if not is_train: - pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TEST - post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TEST - nms_thresh = config.MODEL.RPN.NMS_THRESH # 0.7 - min_size = config.MODEL.RPN.MIN_SIZE # 0 - box_selector = RPNPostProcessor( - pre_nms_top_n=pre_nms_top_n, #12000 - post_nms_top_n=post_nms_top_n, #2000 - nms_thresh=nms_thresh, # 0.7 - min_size=min_size, # 0 - box_coder=rpn_box_coder, - fpn_post_nms_top_n=fpn_post_nms_top_n, #2000 - ) - return box_selector diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py deleted file mode 100644 index 1abc02590c240377177d4ac12fe4848720e24959..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_P_(table_T_S_I_V_): - pass diff --git 
a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9da94804.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9da94804.css deleted file mode 100644 index 79d901421a55ea578fdaf2c50c84e8fafcea8c41..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9da94804.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-1gww5xe{display:flex;position:absolute;justify-content:center;align-items:center;border-radius:var(--radius-sm);background-color:#000c;padding:var(--size-1) .4rem;color:#fff;font-size:var(--text-sm)}span.svelte-1gww5xe{display:inline-block;margin-right:var(--size-1);border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}.wrap.svelte-1mjxput{margin-top:var(--size-3)}.legend.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.legend-item.svelte-1mjxput{display:flex;align-items:center;gap:var(--spacing-sm);margin-right:var(--size-2);margin-left:var(--size-2)}.legend-box.svelte-1mjxput{display:inline-block;border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}svg.svelte-1mjxput{width:var(--size-full)}.label-text.svelte-1mjxput{fill:var(--body-text-color);font-size:var(--text-sm);font-family:var(--font-mono)}.main-label.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.chart.svelte-etmurc{display:flex;display:relative;justify-content:center;align-items:center;background:var(--background-fill-primary);width:var(--size-full);height:var(--size-64)} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection.py deleted file mode 100644 index 9014ab957a2b03a9ca258ec693f15189c6d8cd77..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection.py +++ /dev/null @@ -1,215 +0,0 @@ -import itertools -import logging -import ssl -from types import TracebackType -from typing import Iterable, Iterator, Optional, Type - -from .._backends.auto import AutoBackend -from .._backends.base import SOCKET_OPTION, AsyncNetworkBackend, AsyncNetworkStream -from .._exceptions import ConnectError, ConnectionNotAvailable, ConnectTimeout -from .._models import Origin, Request, Response -from .._ssl import default_ssl_context -from .._synchronization import AsyncLock -from .._trace import Trace -from .http11 import AsyncHTTP11Connection -from .interfaces import AsyncConnectionInterface - -RETRIES_BACKOFF_FACTOR = 0.5 # 0s, 0.5s, 1s, 2s, 4s, etc. 
- - -logger = logging.getLogger("httpcore.connection") - - -def exponential_backoff(factor: float) -> Iterator[float]: - yield 0 - for n in itertools.count(2): - yield factor * (2 ** (n - 2)) - - -class AsyncHTTPConnection(AsyncConnectionInterface): - def __init__( - self, - origin: Origin, - ssl_context: Optional[ssl.SSLContext] = None, - keepalive_expiry: Optional[float] = None, - http1: bool = True, - http2: bool = False, - retries: int = 0, - local_address: Optional[str] = None, - uds: Optional[str] = None, - network_backend: Optional[AsyncNetworkBackend] = None, - socket_options: Optional[Iterable[SOCKET_OPTION]] = None, - ) -> None: - self._origin = origin - self._ssl_context = ssl_context - self._keepalive_expiry = keepalive_expiry - self._http1 = http1 - self._http2 = http2 - self._retries = retries - self._local_address = local_address - self._uds = uds - - self._network_backend: AsyncNetworkBackend = ( - AutoBackend() if network_backend is None else network_backend - ) - self._connection: Optional[AsyncConnectionInterface] = None - self._connect_failed: bool = False - self._request_lock = AsyncLock() - self._socket_options = socket_options - - async def handle_async_request(self, request: Request) -> Response: - if not self.can_handle_request(request.url.origin): - raise RuntimeError( - f"Attempted to send request to {request.url.origin} on connection to {self._origin}" - ) - - async with self._request_lock: - if self._connection is None: - try: - stream = await self._connect(request) - - ssl_object = stream.get_extra_info("ssl_object") - http2_negotiated = ( - ssl_object is not None - and ssl_object.selected_alpn_protocol() == "h2" - ) - if http2_negotiated or (self._http2 and not self._http1): - from .http2 import AsyncHTTP2Connection - - self._connection = AsyncHTTP2Connection( - origin=self._origin, - stream=stream, - keepalive_expiry=self._keepalive_expiry, - ) - else: - self._connection = AsyncHTTP11Connection( - origin=self._origin, - stream=stream, - keepalive_expiry=self._keepalive_expiry, - ) - except Exception as exc: - self._connect_failed = True - raise exc - elif not self._connection.is_available(): - raise ConnectionNotAvailable() - - return await self._connection.handle_async_request(request) - - async def _connect(self, request: Request) -> AsyncNetworkStream: - timeouts = request.extensions.get("timeout", {}) - sni_hostname = request.extensions.get("sni_hostname", None) - timeout = timeouts.get("connect", None) - - retries_left = self._retries - delays = exponential_backoff(factor=RETRIES_BACKOFF_FACTOR) - - while True: - try: - if self._uds is None: - kwargs = { - "host": self._origin.host.decode("ascii"), - "port": self._origin.port, - "local_address": self._local_address, - "timeout": timeout, - "socket_options": self._socket_options, - } - async with Trace("connect_tcp", logger, request, kwargs) as trace: - stream = await self._network_backend.connect_tcp(**kwargs) - trace.return_value = stream - else: - kwargs = { - "path": self._uds, - "timeout": timeout, - "socket_options": self._socket_options, - } - async with Trace( - "connect_unix_socket", logger, request, kwargs - ) as trace: - stream = await self._network_backend.connect_unix_socket( - **kwargs - ) - trace.return_value = stream - - if self._origin.scheme == b"https": - ssl_context = ( - default_ssl_context() - if self._ssl_context is None - else self._ssl_context - ) - alpn_protocols = ["http/1.1", "h2"] if self._http2 else ["http/1.1"] - ssl_context.set_alpn_protocols(alpn_protocols) - - kwargs = { 
- "ssl_context": ssl_context, - "server_hostname": sni_hostname - or self._origin.host.decode("ascii"), - "timeout": timeout, - } - async with Trace("start_tls", logger, request, kwargs) as trace: - stream = await stream.start_tls(**kwargs) - trace.return_value = stream - return stream - except (ConnectError, ConnectTimeout): - if retries_left <= 0: - raise - retries_left -= 1 - delay = next(delays) - async with Trace("retry", logger, request, kwargs) as trace: - await self._network_backend.sleep(delay) - - def can_handle_request(self, origin: Origin) -> bool: - return origin == self._origin - - async def aclose(self) -> None: - if self._connection is not None: - async with Trace("close", logger, None, {}): - await self._connection.aclose() - - def is_available(self) -> bool: - if self._connection is None: - # If HTTP/2 support is enabled, and the resulting connection could - # end up as HTTP/2 then we should indicate the connection as being - # available to service multiple requests. - return ( - self._http2 - and (self._origin.scheme == b"https" or not self._http1) - and not self._connect_failed - ) - return self._connection.is_available() - - def has_expired(self) -> bool: - if self._connection is None: - return self._connect_failed - return self._connection.has_expired() - - def is_idle(self) -> bool: - if self._connection is None: - return self._connect_failed - return self._connection.is_idle() - - def is_closed(self) -> bool: - if self._connection is None: - return self._connect_failed - return self._connection.is_closed() - - def info(self) -> str: - if self._connection is None: - return "CONNECTION FAILED" if self._connect_failed else "CONNECTING" - return self._connection.info() - - def __repr__(self) -> str: - return f"<{self.__class__.__name__} [{self.info()}]>" - - # These context managers are not used in the standard flow, but are - # useful for testing or working with connection instances directly. 
- - async def __aenter__(self) -> "AsyncHTTPConnection": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - traceback: Optional[TracebackType] = None, - ) -> None: - await self.aclose() diff --git a/spaces/Dai1123/CalqChat/README.md b/spaces/Dai1123/CalqChat/README.md deleted file mode 100644 index 9454fc58f8bf8701aa5c061ab77d4576a3e60ee0..0000000000000000000000000000000000000000 --- a/spaces/Dai1123/CalqChat/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: CalqResume -emoji: 🦀 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: apache-2.0 - ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/pano_s2d3d_dataset.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/pano_s2d3d_dataset.py deleted file mode 100644 index b6939fea1a08e5f1c1eb985b85fc739be0c53b04..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/pano_s2d3d_dataset.py +++ /dev/null @@ -1,107 +0,0 @@ -""" -@date: 2021/6/16 -@description: -""" -import math -import os -import numpy as np - -from dataset.communal.read import read_image, read_label -from dataset.communal.base_dataset import BaseDataset -from utils.logger import get_logger - - -class PanoS2D3DDataset(BaseDataset): - def __init__(self, root_dir, mode, shape=None, max_wall_num=0, aug=None, camera_height=1.6, logger=None, - split_list=None, patch_num=256, keys=None, for_test_index=None, subset=None): - super().__init__(mode, shape, max_wall_num, aug, camera_height, patch_num, keys) - - if logger is None: - logger = get_logger() - self.root_dir = root_dir - - if mode is None: - return - label_dir = os.path.join(root_dir, 'valid' if mode == 'val' else mode, 'label_cor') - img_dir = os.path.join(root_dir, 'valid' if mode == 'val' else mode, 'img') - - if split_list is None: - split_list = [name.split('.')[0] for name in os.listdir(label_dir) if - not name.startswith('.') and name.endswith('txt')] - - split_list.sort() - - assert subset == 'pano' or subset == 's2d3d' or subset is None, 'error subset' - if subset == 'pano': - split_list = [name for name in split_list if 'pano_' in name] - logger.info(f"Use PanoContext Dataset") - elif subset == 's2d3d': - split_list = [name for name in split_list if 'camera_' in name] - logger.info(f"Use Stanford2D3D Dataset") - - if for_test_index is not None: - split_list = split_list[:for_test_index] - - self.data = [] - invalid_num = 0 - for name in split_list: - img_path = os.path.join(img_dir, f"{name}.png") - label_path = os.path.join(label_dir, f"{name}.txt") - - if not os.path.exists(img_path): - logger.warning(f"{img_path} not exists") - invalid_num += 1 - continue - if not os.path.exists(label_path): - logger.warning(f"{label_path} not exists") - invalid_num += 1 - continue - - with open(label_path, 'r') as f: - lines = [line for line in f.readlines() if - len([c for c in line.split(' ') if c[0].isnumeric()]) > 1] - if len(lines) % 2 != 0: - invalid_num += 1 - continue - self.data.append([img_path, label_path]) - - logger.info( - f"Build dataset mode: {self.mode} valid: {len(self.data)} invalid: {invalid_num}") - - def __getitem__(self, idx): - rgb_path, label_path = self.data[idx] - label = read_label(label_path, data_type='Pano_S2D3D') - image = read_image(rgb_path, self.shape) - 
output = self.process_data(label, image, self.patch_num) - return output - - -if __name__ == '__main__': - - modes = ['test', 'val', 'train'] - for i in range(1): - for mode in modes: - print(mode) - mp3d_dataset = PanoS2D3DDataset(root_dir='../src/dataset/pano_s2d3d', mode=mode, aug={ - # 'STRETCH': True, - # 'ROTATE': True, - # 'FLIP': True, - # 'GAMMA': True - }) - continue - save_dir = f'../src/dataset/pano_s2d3d/visualization/{mode}' - if not os.path.isdir(save_dir): - os.makedirs(save_dir) - - bar = tqdm(mp3d_dataset, ncols=100) - for data in bar: - bar.set_description(f"Processing {data['id']}") - boundary_list = depth2boundaries(data['ratio'], data['depth'], step=None) - pano_img = draw_boundaries(data['image'].transpose(1, 2, 0), boundary_list=boundary_list, show=False) - Image.fromarray((pano_img * 255).astype(np.uint8)).save( - os.path.join(save_dir, f"{data['id']}_boundary.png")) - - floorplan = draw_floorplan(uv2xyz(boundary_list[0])[..., ::2], show=False, - marker_color=None, center_color=0.8, show_radius=None) - Image.fromarray((floorplan.squeeze() * 255).astype(np.uint8)).save( - os.path.join(save_dir, f"{data['id']}_floorplan.png")) diff --git a/spaces/Datasculptor/DescriptionGPT/detic/modeling/backbone/swintransformer.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/backbone/swintransformer.py deleted file mode 100644 index 21cabb37dd87a443e27eeb805f9739bef86540bf..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/backbone/swintransformer.py +++ /dev/null @@ -1,750 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Modified by Xingyi Zhou from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py - - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from detectron2.layers import ShapeSpec -from detectron2.modeling.backbone.backbone import Backbone -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.modeling.backbone.fpn import FPN - -from centernet.modeling.backbone.fpn_p5 import LastLevelP6P7_P5 -from centernet.modeling.backbone.bifpn import BiFPN -# from .checkpoint import load_checkpoint - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. 
Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """ Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(Backbone): - """ Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. - attn_drop_rate (float): Attention dropout rate. 
Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - self._freeze_stages() - self._out_features = ['swin{}'.format(i) for i in self.out_indices] - self._out_feature_channels = { - 'swin{}'.format(i): self.embed_dim * 2 ** i for i in self.out_indices - } - self._out_feature_strides = { - 'swin{}'.format(i): 2 ** (i + 2) for i in self.out_indices - } - self._size_devisibility = 32 - - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, 
self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - if isinstance(pretrained, str): - self.apply(_init_weights) - # load_checkpoint(self, pretrained, strict=False) - elif pretrained is None: - self.apply(_init_weights) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic') - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - # outs = [] - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - # outs.append(out) - outs['swin{}'.format(i)] = out - - return outs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - -size2config = { - 'T': { - 'window_size': 7, - 'embed_dim': 96, - 'depth': [2, 2, 6, 2], - 'num_heads': [3, 6, 12, 24], - 'drop_path_rate': 0.2, - 'pretrained': 'models/swin_tiny_patch4_window7_224.pth' - }, - 'S': { - 'window_size': 7, - 'embed_dim': 96, - 'depth': [2, 2, 18, 2], - 'num_heads': [3, 6, 12, 24], - 'drop_path_rate': 0.2, - 'pretrained': 'models/swin_small_patch4_window7_224.pth' - }, - 'B': { - 'window_size': 7, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window7_224.pth' - }, - 'B-22k': { - 'window_size': 7, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window7_224_22k.pth' - }, - 'B-22k-384': { - 'window_size': 12, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window12_384_22k.pth' - }, - 'L-22k': { - 'window_size': 7, - 'embed_dim': 192, - 'depth': [2, 2, 18, 2], - 'num_heads': [6, 12, 24, 48], - 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear - 'pretrained': 'models/swin_large_patch4_window7_224_22k.pth' - }, - 'L-22k-384': { - 'window_size': 12, - 'embed_dim': 192, - 'depth': [2, 2, 18, 2], - 'num_heads': [6, 12, 24, 48], - 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear - 'pretrained': 'models/swin_large_patch4_window12_384_22k.pth' - } -} - -@BACKBONE_REGISTRY.register() -def build_swintransformer_backbone(cfg, input_shape): - """ - """ - config = size2config[cfg.MODEL.SWIN.SIZE] - out_indices = cfg.MODEL.SWIN.OUT_FEATURES - model = SwinTransformer( - embed_dim=config['embed_dim'], - 
window_size=config['window_size'], - depths=config['depth'], - num_heads=config['num_heads'], - drop_path_rate=config['drop_path_rate'], - out_indices=out_indices, - frozen_stages=-1, - use_checkpoint=cfg.MODEL.SWIN.USE_CHECKPOINT - ) - # print('Initializing', config['pretrained']) - model.init_weights(config['pretrained']) - return model - - -@BACKBONE_REGISTRY.register() -def build_swintransformer_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - """ - bottom_up = build_swintransformer_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7_P5(out_channels, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_swintransformer_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - """ - bottom_up = build_swintransformer_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - backbone = BiFPN( - cfg=cfg, - bottom_up=bottom_up, - in_features=in_features, - out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS, - norm=cfg.MODEL.BIFPN.NORM, - num_levels=cfg.MODEL.BIFPN.NUM_LEVELS, - num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN, - separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV, - ) - return backbone \ No newline at end of file diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/tool/ganseg.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/tool/ganseg.py deleted file mode 100644 index e6225736d336cf75aedb8a7d7aec1229b497f6a9..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/tool/ganseg.py +++ /dev/null @@ -1,89 +0,0 @@ -''' -A simple tool to generate sample of output of a GAN, -and apply semantic segmentation on the output. 
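-For each sampled z it writes the generated image (JPEG), the raw segmentation (.mat), a colour-coded segmentation preview (PNG) and a labels.txt listing the segment label and category names.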
-''' - -import torch, numpy, os, argparse, sys, shutil -from PIL import Image -from torch.utils.data import TensorDataset -from netdissect.zdataset import standard_z_sample, z_dataset_for_model -from netdissect.progress import default_progress, verbose_progress -from netdissect.autoeval import autoimport_eval -from netdissect.workerpool import WorkerBase, WorkerPool -from netdissect.nethook import edit_layers, retain_layers -from netdissect.segviz import segment_visualization -from netdissect.segmenter import UnifiedParsingSegmenter -from scipy.io import savemat - -def main(): - parser = argparse.ArgumentParser(description='GAN output segmentation util') - parser.add_argument('--model', type=str, default= - 'netdissect.proggan.from_pth_file("' + - 'models/karras/churchoutdoor_lsun.pth")', - help='constructor for the model to test') - parser.add_argument('--outdir', type=str, default='images', - help='directory for image output') - parser.add_argument('--size', type=int, default=100, - help='number of images to output') - parser.add_argument('--seed', type=int, default=1, - help='seed') - parser.add_argument('--quiet', action='store_true', default=False, - help='silences console output') - #if len(sys.argv) == 1: - # parser.print_usage(sys.stderr) - # sys.exit(1) - args = parser.parse_args() - verbose_progress(not args.quiet) - - # Instantiate the model - model = autoimport_eval(args.model) - - # Make the standard z - z_dataset = z_dataset_for_model(model, size=args.size) - - # Make the segmenter - segmenter = UnifiedParsingSegmenter() - - # Write out text labels - labels, cats = segmenter.get_label_and_category_names() - with open(os.path.join(args.outdir, 'labels.txt'), 'w') as f: - for i, (label, cat) in enumerate(labels): - f.write('%s %s\n' % (label, cat)) - - # Move models to cuda - model.cuda() - - batch_size = 10 - progress = default_progress() - dirname = args.outdir - - with torch.no_grad(): - # Pass 2: now generate images - z_loader = torch.utils.data.DataLoader(z_dataset, - batch_size=batch_size, num_workers=2, - pin_memory=True) - for batch_num, [z] in enumerate(progress(z_loader, - desc='Saving images')): - z = z.cuda() - start_index = batch_num * batch_size - tensor_im = model(z) - byte_im = ((tensor_im + 1) / 2 * 255).clamp(0, 255).byte().permute( - 0, 2, 3, 1).cpu() - seg = segmenter.segment_batch(tensor_im) - for i in range(len(tensor_im)): - index = i + start_index - filename = os.path.join(dirname, '%d_img.jpg' % index) - Image.fromarray(byte_im[i].numpy()).save( - filename, optimize=True, quality=100) - filename = os.path.join(dirname, '%d_seg.mat' % index) - savemat(filename, dict(seg=seg[i].cpu().numpy())) - filename = os.path.join(dirname, '%d_seg.png' % index) - Image.fromarray(segment_visualization(seg[i].cpu().numpy(), - tensor_im.shape[2:])).save(filename) - srcdir = os.path.realpath( - os.path.join(os.getcwd(), os.path.dirname(__file__))) - shutil.copy(os.path.join(srcdir, 'lightbox.html'), - os.path.join(dirname, '+lightbox.html')) - -if __name__ == '__main__': - main() diff --git a/spaces/Docfile/open_llm_leaderboard/src/assets/hardcoded_evals.py b/spaces/Docfile/open_llm_leaderboard/src/assets/hardcoded_evals.py deleted file mode 100644 index b361d0c7cc8d250ee097fed25e53612c881a2b59..0000000000000000000000000000000000000000 --- a/spaces/Docfile/open_llm_leaderboard/src/assets/hardcoded_evals.py +++ /dev/null @@ -1,40 +0,0 @@ -from src.display_models.utils import AutoEvalColumn, model_hyperlink - -gpt4_values = { - AutoEvalColumn.model.name: 
model_hyperlink("https://arxiv.org/abs/2303.08774", "gpt4"), - AutoEvalColumn.revision.name: "tech report", - AutoEvalColumn.precision.name: None, - AutoEvalColumn.average.name: 84.3, - AutoEvalColumn.arc.name: 96.3, - AutoEvalColumn.hellaswag.name: 95.3, - AutoEvalColumn.mmlu.name: 86.4, - AutoEvalColumn.truthfulqa.name: 59.0, - AutoEvalColumn.dummy.name: "GPT-4", - AutoEvalColumn.model_type.name: "", -} - -gpt35_values = { - AutoEvalColumn.model.name: model_hyperlink("https://arxiv.org/abs/2303.08774", "gpt3.5"), - AutoEvalColumn.revision.name: "tech report", - AutoEvalColumn.precision.name: None, - AutoEvalColumn.average.name: 71.9, - AutoEvalColumn.arc.name: 85.2, - AutoEvalColumn.hellaswag.name: 85.5, - AutoEvalColumn.mmlu.name: 70.0, - AutoEvalColumn.truthfulqa.name: 47.0, - AutoEvalColumn.dummy.name: "GPT-3.5", - AutoEvalColumn.model_type.name: "", -} - -baseline = { - AutoEvalColumn.model.name: "
Baseline
", - AutoEvalColumn.revision.name: "N/A", - AutoEvalColumn.precision.name: None, - AutoEvalColumn.average.name: 25.0, - AutoEvalColumn.arc.name: 25.0, - AutoEvalColumn.hellaswag.name: 25.0, - AutoEvalColumn.mmlu.name: 25.0, - AutoEvalColumn.truthfulqa.name: 25.0, - AutoEvalColumn.dummy.name: "baseline", - AutoEvalColumn.model_type.name: "", -} diff --git a/spaces/DragGan/DragGan-Inversion/PTI/utils/alignment.py b/spaces/DragGan/DragGan-Inversion/PTI/utils/alignment.py deleted file mode 100644 index d1e13a0d70eb0827abca405401f83b9939122f2d..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/utils/alignment.py +++ /dev/null @@ -1,113 +0,0 @@ -import numpy as np -import PIL -import PIL.Image -import scipy -import scipy.ndimage -import dlib - -def get_landmark(img, predictor): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - detector = dlib.get_frontal_face_detector() - - img = np.array(img) - dets = detector(img, 1) - - for k, d in enumerate(dets): - shape = predictor(img, d) - - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - return lm - - -def align_face(img, predictor, output_size): - """ - :param img: PIL Image - :return: PIL Image - """ - - lm = get_landmark(img, predictor) - - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - # img = img - - transform_size = output_size - enable_padding = True - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, PIL.Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. 
- pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - quad += pad[:2] - - # Transform. - img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - - # Return aligned image. - return img diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/face_alignment.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/face_alignment.py deleted file mode 100644 index 5854599e963a79e852d57f396ea08c952f25440e..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/face_alignment.py +++ /dev/null @@ -1,274 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -import numpy as np -import PIL -import PIL.Image -import scipy -import scipy.ndimage -import dlib -import copy -from PIL import Image - - -def get_landmark(img, detector, predictor): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - # detector = dlib.get_frontal_face_detector() - # dets, _, _ = detector.run(img, 1, -1) - dets = detector(img, 1) - for k, d in enumerate(dets): - shape = predictor(img, d.rect) - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - - # face rect - face_rect = [dets[0].rect.left(), dets[0].rect.top(), - dets[0].rect.right(), dets[0].rect.bottom()] - return lm, face_rect - - -def align_face_for_insetgan(img, detector, predictor, output_size=256): - """ - :param img: numpy array rgb - :return: PIL Image - """ - img_cp = copy.deepcopy(img) - lm, face_rect = get_landmark(img, detector, predictor) - - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. 
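- # (the crop x-axis follows the eye line, scaled to the larger of 2.0x the eye-to-eye distance and 1.8x the eye-to-mouth distance; the centre sits just below the eye midpoint)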
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - # opencv to PIL - img = PIL.Image.fromarray(img_cp) - # img = PIL.Image.open(filepath) - - transform_size = output_size - enable_padding = False - - # Shrink. - # shrink = int(np.floor(qsize / output_size * 0.5)) - # if shrink > 1: - # rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - # img = img.resize(rsize, PIL.Image.ANTIALIAS) - # quad /= shrink - # qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - - # crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - # min(crop[3] + border, img.size[1])) - # img.save("debug/raw.jpg") - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - # img.save("debug/crop.jpg") - # Pad. - # pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - # int(np.ceil(max(quad[:, 1])))) - # pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - # max(pad[3] - img.size[1] + border, 0)) - # if enable_padding and max(pad) > border - 4: - # pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - # img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - # h, w, _ = img.shape - # y, x, _ = np.ogrid[:h, :w, :1] - # mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - # 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - # blur = qsize * 0.02 - # img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - # img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - # img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - # quad += pad[:2] - - # Transform. - # crop shape to transform shape - # nw = - # print(img.size, quad+0.5, np.bound((quad+0.5).flatten())) - # assert False - # img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR) - - # img.save("debug/transform.jpg") - # if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - # img.save("debug/resize.jpg") - # print((quad+crop[0:2]).flatten()) - # assert False - # Return aligned image. - - return img, crop, face_rect - - -def align_face_for_projector(img, detector, predictor, output_size): - """ - :param filepath: str - :return: PIL Image - """ - - img_cp = copy.deepcopy(img) - lm, face_rect = get_landmark(img, detector, predictor) - - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. 
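- # (eye centres are the means of the six landmarks of each eye, mouth corners come from the outer mouth contour, and the eye-to-mouth vector fixes the crop orientation below)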
- eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - img = PIL.Image.fromarray(img_cp) - - transform_size = output_size - enable_padding = True - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), - int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, PIL.Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), - ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (scipy.ndimage.gaussian_filter(img, - [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = PIL.Image.fromarray( - np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - quad += pad[:2] - - # Transform. - img = img.transform((transform_size, transform_size), - PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - - # Return aligned image. 
- return img - - -def reverse_quad_transform(image, quad_to_map_to, alpha): - # forward mapping, for simplicity - - result = Image.new("RGBA", image.size) - result_pixels = result.load() - - width, height = result.size - - for y in range(height): - for x in range(width): - result_pixels[x, y] = (0, 0, 0, 0) - - p1 = (quad_to_map_to[0], quad_to_map_to[1]) - p2 = (quad_to_map_to[2], quad_to_map_to[3]) - p3 = (quad_to_map_to[4], quad_to_map_to[5]) - p4 = (quad_to_map_to[6], quad_to_map_to[7]) - - p1_p2_vec = (p2[0] - p1[0], p2[1] - p1[1]) - p4_p3_vec = (p3[0] - p4[0], p3[1] - p4[1]) - - for y in range(height): - for x in range(width): - pixel = image.getpixel((x, y)) - - y_percentage = y / float(height) - x_percentage = x / float(width) - - # interpolate vertically - pa = (p1[0] + p1_p2_vec[0] * y_percentage, - p1[1] + p1_p2_vec[1] * y_percentage) - pb = (p4[0] + p4_p3_vec[0] * y_percentage, - p4[1] + p4_p3_vec[1] * y_percentage) - - pa_to_pb_vec = (pb[0] - pa[0], pb[1] - pa[1]) - - # interpolate horizontally - p = (pa[0] + pa_to_pb_vec[0] * x_percentage, - pa[1] + pa_to_pb_vec[1] * x_percentage) - - try: - result_pixels[p[0], p[1]] = ( - pixel[0], pixel[1], pixel[2], min(int(alpha * 255), pixel[3])) - except Exception: - pass - - return result diff --git a/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg3/train.py b/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg3/train.py deleted file mode 100644 index 5c8d5e2be100495b906f36b93197935c42ec1528..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg3/train.py +++ /dev/null @@ -1,295 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Train a GAN using the techniques described in the paper -"Alias-Free Generative Adversarial Networks".""" - -import os -import click -import re -import json -import tempfile -import torch - -import dnnlib -from training import training_loop -from metrics import metric_main -from torch_utils import training_stats -from torch_utils import custom_ops -import ast -#---------------------------------------------------------------------------- - -def subprocess_fn(rank, c, temp_dir): - dnnlib.util.Logger(file_name=os.path.join(c.run_dir, 'log.txt'), file_mode='a', should_flush=True) - - # Init torch.distributed. - if c.num_gpus > 1: - init_file = os.path.abspath(os.path.join(temp_dir, '.torch_distributed_init')) - if os.name == 'nt': - init_method = 'file:///' + init_file.replace('\\', '/') - torch.distributed.init_process_group(backend='gloo', init_method=init_method, rank=rank, world_size=c.num_gpus) - else: - init_method = f'file://{init_file}' - torch.distributed.init_process_group(backend='nccl', init_method=init_method, rank=rank, world_size=c.num_gpus) - - # Init torch_utils. - sync_device = torch.device('cuda', rank) if c.num_gpus > 1 else None - training_stats.init_multiprocessing(rank=rank, sync_device=sync_device) - if rank != 0: - custom_ops.verbosity = 'none' - - # Execute training loop. 
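- # (every rank runs the same loop; all options are passed through the shared config dict c)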
- training_loop.training_loop(rank=rank, **c) - -#---------------------------------------------------------------------------- - -def launch_training(c, desc, outdir, dry_run): - dnnlib.util.Logger(should_flush=True) - - # Pick output directory. - prev_run_dirs = [] - if os.path.isdir(outdir): - prev_run_dirs = [x for x in os.listdir(outdir) if os.path.isdir(os.path.join(outdir, x))] - prev_run_ids = [re.match(r'^\d+', x) for x in prev_run_dirs] - prev_run_ids = [int(x.group()) for x in prev_run_ids if x is not None] - cur_run_id = max(prev_run_ids, default=-1) + 1 - c.run_dir = os.path.join(outdir, f'{cur_run_id:05d}-{desc}') - assert not os.path.exists(c.run_dir) - - # Print options. - print() - print('Training options:') - print(json.dumps(c, indent=2)) - print() - print(f'Output directory: {c.run_dir}') - print(f'Number of GPUs: {c.num_gpus}') - print(f'Batch size: {c.batch_size} images') - print(f'Training duration: {c.total_kimg} kimg') - print(f'Dataset path: {c.training_set_kwargs.path}') - print(f'Dataset size: {c.training_set_kwargs.max_size} images') - print(f'Dataset resolution: {c.training_set_kwargs.resolution}') - print(f'Dataset labels: {c.training_set_kwargs.use_labels}') - print(f'Dataset x-flips: {c.training_set_kwargs.xflip}') - print() - - # Dry run? - if dry_run: - print('Dry run; exiting.') - return - - # Create output directory. - print('Creating output directory...') - os.makedirs(c.run_dir) - with open(os.path.join(c.run_dir, 'training_options.json'), 'wt') as f: - json.dump(c, f, indent=2) - - # Launch processes. - print('Launching processes...') - torch.multiprocessing.set_start_method('spawn') - with tempfile.TemporaryDirectory() as temp_dir: - if c.num_gpus == 1: - subprocess_fn(rank=0, c=c, temp_dir=temp_dir) - else: - torch.multiprocessing.spawn(fn=subprocess_fn, args=(c, temp_dir), nprocs=c.num_gpus) - -#---------------------------------------------------------------------------- - -def init_dataset_kwargs(data, square=False): - # dataset - - try: - dataset_kwargs = dnnlib.EasyDict(class_name='training.dataset.ImageFolderDataset', path=data, use_labels=True, max_size=None, xflip=False, square=square) - dataset_obj = dnnlib.util.construct_class_by_name(**dataset_kwargs) # Subclass of training.dataset.Dataset. - dataset_kwargs.resolution = dataset_obj.resolution # Be explicit about resolution. - dataset_kwargs.use_labels = dataset_obj.has_labels # Be explicit about labels. - dataset_kwargs.max_size = len(dataset_obj) # Be explicit about dataset size. - return dataset_kwargs, dataset_obj.name - except IOError as err: - raise click.ClickException(f'--data: {err}') - - print("out of dataset") -#---------------------------------------------------------------------------- - -def parse_comma_separated_list(s): - if isinstance(s, list): - return s - if s is None or s.lower() == 'none' or s == '': - return [] - return s.split(',') - -#---------------------------------------------------------------------------- - -@click.command() - -# Required. 
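- # (--outdir, --cfg, --data, --gpus, --batch and --gamma must be supplied on every run)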
-@click.option('--outdir', help='Where to save the results', metavar='DIR', required=True) -@click.option('--cfg', help='Base configuration', type=click.Choice(['stylegan3-t', 'stylegan3-r', 'stylegan2']), required=True) -@click.option('--data', help='Training data', metavar='PATH', required=True) -@click.option('--gpus', help='Number of GPUs to use', metavar='INT', type=click.IntRange(min=1), required=True) -@click.option('--batch', help='Total batch size', metavar='INT', type=click.IntRange(min=1), required=True) -@click.option('--gamma', help='R1 regularization weight', metavar='FLOAT', type=click.FloatRange(min=0), required=True) -@click.option('--square', help='True for square, False for rectangle', type=bool, metavar='BOOL', default=False) - -# Optional features. -@click.option('--cond', help='Train conditional model', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--mirror', help='Enable dataset x-flips', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--aug', help='Augmentation mode', type=click.Choice(['noaug', 'ada', 'fixed']), default='ada', show_default=True) -@click.option('--resume', help='Resume from given network pickle', metavar='[PATH|URL]', type=str) -@click.option('--freezed', help='Freeze first layers of D', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) - -# Misc hyperparameters. -@click.option('--p', help='Probability for --aug=fixed', metavar='FLOAT', type=click.FloatRange(min=0, max=1), default=0.2, show_default=True) -@click.option('--target', help='Target value for --aug=ada', metavar='FLOAT', type=click.FloatRange(min=0, max=1), default=0.6, show_default=True) -@click.option('--batch-gpu', help='Limit batch size per GPU', metavar='INT', type=click.IntRange(min=1)) -@click.option('--cbase', help='Capacity multiplier', metavar='INT', type=click.IntRange(min=1), default=32768, show_default=True) -@click.option('--cmax', help='Max. feature maps', metavar='INT', type=click.IntRange(min=1), default=512, show_default=True) -@click.option('--glr', help='G learning rate [default: varies]', metavar='FLOAT', type=click.FloatRange(min=0)) -@click.option('--dlr', help='D learning rate', metavar='FLOAT', type=click.FloatRange(min=0), default=0.002, show_default=True) -@click.option('--map-depth', help='Mapping network depth [default: varies]', metavar='INT', type=click.IntRange(min=1)) -@click.option('--mbstd-group', help='Minibatch std group size', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True) - -# Misc settings. 
-@click.option('--desc', help='String to include in result dir name', metavar='STR', type=str) -@click.option('--metrics', help='Quality metrics', metavar='[NAME|A,B,C|none]', type=parse_comma_separated_list, default='fid50k_full', show_default=True) -@click.option('--kimg', help='Total training duration', metavar='KIMG', type=click.IntRange(min=1), default=25000, show_default=True) -@click.option('--tick', help='How often to print progress', metavar='KIMG', type=click.IntRange(min=1), default=4, show_default=True) -@click.option('--snap', help='How often to save snapshots', metavar='TICKS', type=click.IntRange(min=1), default=50, show_default=True) -@click.option('--seed', help='Random seed', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) -@click.option('--fp32', help='Disable mixed-precision', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--nobench', help='Disable cuDNN benchmarking', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--workers', help='DataLoader worker processes', metavar='INT', type=click.IntRange(min=1), default=3, show_default=True) -@click.option('-n','--dry-run', help='Print training options and exit', is_flag=True) - -def main(**kwargs): - """Train a GAN using the techniques described in the paper - "Alias-Free Generative Adversarial Networks". - - Examples: - - \b - # Train StyleGAN3-T for AFHQv2 using 8 GPUs. - python train.py --outdir=~/training-runs --cfg=stylegan3-t --data=~/datasets/afhqv2-512x512.zip \\ - --gpus=8 --batch=32 --gamma=8.2 --mirror=1 - - \b - # Fine-tune StyleGAN3-R for MetFaces-U using 1 GPU, starting from the pre-trained FFHQ-U pickle. - python train.py --outdir=~/training-runs --cfg=stylegan3-r --data=~/datasets/metfacesu-1024x1024.zip \\ - --gpus=8 --batch=32 --gamma=6.6 --mirror=1 --kimg=5000 --snap=5 \\ - --resume=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhqu-1024x1024.pkl - - \b - # Train StyleGAN2 for FFHQ at 1024x1024 resolution using 8 GPUs. - python train.py --outdir=~/training-runs --cfg=stylegan2 --data=~/datasets/ffhq-1024x1024.zip \\ - --gpus=8 --batch=32 --gamma=10 --mirror=1 --aug=noaug - """ - - # Initialize config. - opts = dnnlib.EasyDict(kwargs) # Command line arguments. - c = dnnlib.EasyDict() # Main config dict. - print('---- square: ',opts.square) - c.G_kwargs = dnnlib.EasyDict(class_name=None, z_dim=512, w_dim=512, mapping_kwargs=dnnlib.EasyDict(),square=opts.square) - c.D_kwargs = dnnlib.EasyDict(class_name='training.networks_stylegan2.Discriminator', block_kwargs=dnnlib.EasyDict(), mapping_kwargs=dnnlib.EasyDict(), epilogue_kwargs=dnnlib.EasyDict(),square=opts.square) - c.G_opt_kwargs = dnnlib.EasyDict(class_name='torch.optim.Adam', betas=[0,0.99], eps=1e-8) - c.D_opt_kwargs = dnnlib.EasyDict(class_name='torch.optim.Adam', betas=[0,0.99], eps=1e-8) - c.loss_kwargs = dnnlib.EasyDict(class_name='training.loss.StyleGAN2Loss') - c.data_loader_kwargs = dnnlib.EasyDict(pin_memory=True, prefetch_factor=2) - - # Training set. - c.training_set_kwargs, dataset_name = init_dataset_kwargs(data=opts.data, square=opts.square) - if opts.cond and not c.training_set_kwargs.use_labels: - raise click.ClickException('--cond=True requires labels specified in dataset.json') - c.training_set_kwargs.use_labels = opts.cond - c.training_set_kwargs.xflip = opts.mirror - - # Hyperparameters & settings. 
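- # (CLI options are copied into the config; --batch-gpu falls back to batch // gpus, and mapping-network depth defaults to 8 for stylegan2 or 2 for stylegan3)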
- c.num_gpus = opts.gpus - c.batch_size = opts.batch - c.batch_gpu = opts.batch_gpu or opts.batch // opts.gpus - c.G_kwargs.channel_base = c.D_kwargs.channel_base = opts.cbase - c.G_kwargs.channel_max = c.D_kwargs.channel_max = opts.cmax - c.G_kwargs.mapping_kwargs.num_layers = (8 if opts.cfg == 'stylegan2' else 2) if opts.map_depth is None else opts.map_depth - c.D_kwargs.block_kwargs.freeze_layers = opts.freezed - c.D_kwargs.epilogue_kwargs.mbstd_group_size = opts.mbstd_group - c.loss_kwargs.r1_gamma = opts.gamma - c.G_opt_kwargs.lr = (0.002 if opts.cfg == 'stylegan2' else 0.0025) if opts.glr is None else opts.glr - c.D_opt_kwargs.lr = opts.dlr - c.metrics = opts.metrics - c.total_kimg = opts.kimg - c.kimg_per_tick = opts.tick - c.image_snapshot_ticks = c.network_snapshot_ticks = opts.snap - c.random_seed = c.training_set_kwargs.random_seed = opts.seed - c.data_loader_kwargs.num_workers = opts.workers - - # Sanity checks. - if c.batch_size % c.num_gpus != 0: - raise click.ClickException('--batch must be a multiple of --gpus') - if c.batch_size % (c.num_gpus * c.batch_gpu) != 0: - raise click.ClickException('--batch must be a multiple of --gpus times --batch-gpu') - if c.batch_gpu < c.D_kwargs.epilogue_kwargs.mbstd_group_size: - raise click.ClickException('--batch-gpu cannot be smaller than --mbstd') - if any(not metric_main.is_valid_metric(metric) for metric in c.metrics): - raise click.ClickException('\n'.join(['--metrics can only contain the following values:'] + metric_main.list_valid_metrics())) - - # Base configuration. - c.ema_kimg = c.batch_size * 10 / 32 - if opts.cfg == 'stylegan2': - c.G_kwargs.class_name = 'training.networks_stylegan2.Generator' - c.loss_kwargs.style_mixing_prob = 0.9 # Enable style mixing regularization. - c.loss_kwargs.pl_weight = 2 # Enable path length regularization. - c.G_reg_interval = 4 # Enable lazy regularization for G. - c.G_kwargs.fused_modconv_default = 'inference_only' # Speed up training by using regular convolutions instead of grouped convolutions. - c.loss_kwargs.pl_no_weight_grad = True # Speed up path length regularization by skipping gradient computation wrt. conv2d weights. - else: - c.G_kwargs.class_name = 'training.networks_stylegan3.Generator' - c.G_kwargs.magnitude_ema_beta = 0.5 ** (c.batch_size / (20 * 1e3)) - if opts.cfg == 'stylegan3-r': - c.G_kwargs.conv_kernel = 1 # Use 1x1 convolutions. - c.G_kwargs.channel_base *= 2 # Double the number of feature maps. - c.G_kwargs.channel_max *= 2 - c.G_kwargs.use_radial_filters = True # Use radially symmetric downsampling filters. - c.loss_kwargs.blur_init_sigma = 10 # Blur the images seen by the discriminator. - c.loss_kwargs.blur_fade_kimg = c.batch_size * 200 / 32 # Fade out the blur during the first N kimg. - - # Augmentation. - if opts.aug != 'noaug': - c.augment_kwargs = dnnlib.EasyDict(class_name='training.augment.AugmentPipe', xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1) - if opts.aug == 'ada': - c.ada_target = opts.target - if opts.aug == 'fixed': - c.augment_p = opts.p - - # Resume. - if opts.resume is not None: - c.resume_pkl = opts.resume - c.ada_kimg = 100 # Make ADA react faster at the beginning. - c.ema_rampup = None # Disable EMA rampup. - c.loss_kwargs.blur_init_sigma = 0 # Disable blur rampup. - - # Performance-related toggles. 
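- # (--fp32 disables mixed precision by setting num_fp16_res to 0 and removing conv clamping; --nobench turns off cuDNN benchmarking)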
- if opts.fp32: - c.G_kwargs.num_fp16_res = c.D_kwargs.num_fp16_res = 0 - c.G_kwargs.conv_clamp = c.D_kwargs.conv_clamp = None - if opts.nobench: - c.cudnn_benchmark = False - - # Description string. - desc = f'{opts.cfg:s}-{dataset_name:s}-gpus{c.num_gpus:d}-batch{c.batch_size:d}-gamma{c.loss_kwargs.r1_gamma:g}' - if opts.desc is not None: - desc += f'-{opts.desc}' - - # Launch. - launch_training(c=c, desc=desc, outdir=opts.outdir, dry_run=opts.dry_run) - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - main() # pylint: disable=no-value-for-parameter - -#---------------------------------------------------------------------------- diff --git a/spaces/ECCV2022/bytetrack/yolox/tracker/matching.py b/spaces/ECCV2022/bytetrack/yolox/tracker/matching.py deleted file mode 100644 index d36a6cf5bf758a49bd414f63f402fef3fdd2e18c..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/tracker/matching.py +++ /dev/null @@ -1,181 +0,0 @@ -import cv2 -import numpy as np -import scipy -import lap -from scipy.spatial.distance import cdist - -from cython_bbox import bbox_overlaps as bbox_ious -from yolox.tracker import kalman_filter -import time - -def merge_matches(m1, m2, shape): - O,P,Q = shape - m1 = np.asarray(m1) - m2 = np.asarray(m2) - - M1 = scipy.sparse.coo_matrix((np.ones(len(m1)), (m1[:, 0], m1[:, 1])), shape=(O, P)) - M2 = scipy.sparse.coo_matrix((np.ones(len(m2)), (m2[:, 0], m2[:, 1])), shape=(P, Q)) - - mask = M1*M2 - match = mask.nonzero() - match = list(zip(match[0], match[1])) - unmatched_O = tuple(set(range(O)) - set([i for i, j in match])) - unmatched_Q = tuple(set(range(Q)) - set([j for i, j in match])) - - return match, unmatched_O, unmatched_Q - - -def _indices_to_matches(cost_matrix, indices, thresh): - matched_cost = cost_matrix[tuple(zip(*indices))] - matched_mask = (matched_cost <= thresh) - - matches = indices[matched_mask] - unmatched_a = tuple(set(range(cost_matrix.shape[0])) - set(matches[:, 0])) - unmatched_b = tuple(set(range(cost_matrix.shape[1])) - set(matches[:, 1])) - - return matches, unmatched_a, unmatched_b - - -def linear_assignment(cost_matrix, thresh): - if cost_matrix.size == 0: - return np.empty((0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(range(cost_matrix.shape[1])) - matches, unmatched_a, unmatched_b = [], [], [] - cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh) - for ix, mx in enumerate(x): - if mx >= 0: - matches.append([ix, mx]) - unmatched_a = np.where(x < 0)[0] - unmatched_b = np.where(y < 0)[0] - matches = np.asarray(matches) - return matches, unmatched_a, unmatched_b - - -def ious(atlbrs, btlbrs): - """ - Compute cost based on IoU - :type atlbrs: list[tlbr] | np.ndarray - :type atlbrs: list[tlbr] | np.ndarray - - :rtype ious np.ndarray - """ - ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float) - if ious.size == 0: - return ious - - ious = bbox_ious( - np.ascontiguousarray(atlbrs, dtype=np.float), - np.ascontiguousarray(btlbrs, dtype=np.float) - ) - - return ious - - -def iou_distance(atracks, btracks): - """ - Compute cost based on IoU - :type atracks: list[STrack] - :type btracks: list[STrack] - - :rtype cost_matrix np.ndarray - """ - - if (len(atracks)>0 and isinstance(atracks[0], np.ndarray)) or (len(btracks) > 0 and isinstance(btracks[0], np.ndarray)): - atlbrs = atracks - btlbrs = btracks - else: - atlbrs = [track.tlbr for track in atracks] - btlbrs = [track.tlbr for track in btracks] - _ious = ious(atlbrs, btlbrs) 
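- # IoU is a similarity in [0, 1]; subtracting it from 1 turns it into a matching cost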
- cost_matrix = 1 - _ious - - return cost_matrix - -def v_iou_distance(atracks, btracks): - """ - Compute cost based on IoU - :type atracks: list[STrack] - :type btracks: list[STrack] - - :rtype cost_matrix np.ndarray - """ - - if (len(atracks)>0 and isinstance(atracks[0], np.ndarray)) or (len(btracks) > 0 and isinstance(btracks[0], np.ndarray)): - atlbrs = atracks - btlbrs = btracks - else: - atlbrs = [track.tlwh_to_tlbr(track.pred_bbox) for track in atracks] - btlbrs = [track.tlwh_to_tlbr(track.pred_bbox) for track in btracks] - _ious = ious(atlbrs, btlbrs) - cost_matrix = 1 - _ious - - return cost_matrix - -def embedding_distance(tracks, detections, metric='cosine'): - """ - :param tracks: list[STrack] - :param detections: list[BaseTrack] - :param metric: - :return: cost_matrix np.ndarray - """ - - cost_matrix = np.zeros((len(tracks), len(detections)), dtype=np.float) - if cost_matrix.size == 0: - return cost_matrix - det_features = np.asarray([track.curr_feat for track in detections], dtype=np.float) - #for i, track in enumerate(tracks): - #cost_matrix[i, :] = np.maximum(0.0, cdist(track.smooth_feat.reshape(1,-1), det_features, metric)) - track_features = np.asarray([track.smooth_feat for track in tracks], dtype=np.float) - cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) # Nomalized features - return cost_matrix - - -def gate_cost_matrix(kf, cost_matrix, tracks, detections, only_position=False): - if cost_matrix.size == 0: - return cost_matrix - gating_dim = 2 if only_position else 4 - gating_threshold = kalman_filter.chi2inv95[gating_dim] - measurements = np.asarray([det.to_xyah() for det in detections]) - for row, track in enumerate(tracks): - gating_distance = kf.gating_distance( - track.mean, track.covariance, measurements, only_position) - cost_matrix[row, gating_distance > gating_threshold] = np.inf - return cost_matrix - - -def fuse_motion(kf, cost_matrix, tracks, detections, only_position=False, lambda_=0.98): - if cost_matrix.size == 0: - return cost_matrix - gating_dim = 2 if only_position else 4 - gating_threshold = kalman_filter.chi2inv95[gating_dim] - measurements = np.asarray([det.to_xyah() for det in detections]) - for row, track in enumerate(tracks): - gating_distance = kf.gating_distance( - track.mean, track.covariance, measurements, only_position, metric='maha') - cost_matrix[row, gating_distance > gating_threshold] = np.inf - cost_matrix[row] = lambda_ * cost_matrix[row] + (1 - lambda_) * gating_distance - return cost_matrix - - -def fuse_iou(cost_matrix, tracks, detections): - if cost_matrix.size == 0: - return cost_matrix - reid_sim = 1 - cost_matrix - iou_dist = iou_distance(tracks, detections) - iou_sim = 1 - iou_dist - fuse_sim = reid_sim * (1 + iou_sim) / 2 - det_scores = np.array([det.score for det in detections]) - det_scores = np.expand_dims(det_scores, axis=0).repeat(cost_matrix.shape[0], axis=0) - #fuse_sim = fuse_sim * (1 + det_scores) / 2 - fuse_cost = 1 - fuse_sim - return fuse_cost - - -def fuse_score(cost_matrix, detections): - if cost_matrix.size == 0: - return cost_matrix - iou_sim = 1 - cost_matrix - det_scores = np.array([det.score for det in detections]) - det_scores = np.expand_dims(det_scores, axis=0).repeat(cost_matrix.shape[0], axis=0) - fuse_sim = iou_sim * det_scores - fuse_cost = 1 - fuse_sim - return fuse_cost \ No newline at end of file diff --git a/spaces/EPFL-VILAB/MultiMAE/multimae/__init__.py b/spaces/EPFL-VILAB/MultiMAE/multimae/__init__.py deleted file mode 100644 index 
ad50db52e0789a001785c0c6ed32ddfd716a426d..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/multimae/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .criterion import MaskedCrossEntropyLoss, MaskedL1Loss, MaskedMSELoss -from .input_adapters import PatchedInputAdapter, SemSegInputAdapter -from .multimae import MultiMAE, MultiViT -from .output_adapters import (ConvNeXtAdapter, DPTOutputAdapter, - LinearOutputAdapter, - SegmenterMaskTransformerAdapter, - SpatialOutputAdapter) diff --git a/spaces/Ekimetrics/Biomap/biomap/streamlit_app.py b/spaces/Ekimetrics/Biomap/biomap/streamlit_app.py deleted file mode 100644 index 540cd68ef9226ea37e692ed3824abc37fb87c731..0000000000000000000000000000000000000000 --- a/spaces/Ekimetrics/Biomap/biomap/streamlit_app.py +++ /dev/null @@ -1,140 +0,0 @@ -import streamlit as st -from streamlit_folium import st_folium -import folium -import logging -import sys -import hydra -from plot_functions import * -import hydra - -import torch -from model import LitUnsupervisedSegmenter -from helper import inference_on_location_and_month, inference_on_location - -DEFAULT_LATITUDE = 48.81 -DEFAULT_LONGITUDE = 2.98 -DEFAULT_ZOOM = 5 - -MIN_YEAR = 2018 -MAX_YEAR = 2024 - -FOLIUM_WIDTH = 925 -FOLIUM_HEIGHT = 300 - - -st.set_page_config(layout="wide") -@st.cache_resource -def init_cfg(cfg_name): - hydra.initialize(config_path="configs", job_name="corine") - return hydra.compose(config_name=cfg_name) - -@st.cache_resource -def init_app(cfg_name) -> LitUnsupervisedSegmenter: - file_handler = logging.FileHandler(filename='biomap.log') - stdout_handler = logging.StreamHandler(stream=sys.stdout) - handlers = [file_handler, stdout_handler] - - logging.basicConfig(handlers=handlers, encoding='utf-8', level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s") - # # Initialize hydra with configs - # GlobalHydra.instance().clear() - - cfg = init_cfg(cfg_name) - logging.info(f"config : {cfg}") - nbclasses = cfg.dir_dataset_n_classes - model = LitUnsupervisedSegmenter(nbclasses, cfg) - model = model.cpu() - logging.info(f"Model Initialiazed") - - model_path = "biomap/checkpoint/model/model.pt" - saved_state_dict = torch.load(model_path, map_location=torch.device("cpu")) - logging.info(f"Model weights Loaded") - model.load_state_dict(saved_state_dict) - return model - -def app(model): - if "infered" not in st.session_state: - st.session_state["infered"] = False - if "submit" not in st.session_state: - st.session_state["submit"] = False - if "submit2" not in st.session_state: - st.session_state["submit2"] = False - - st.markdown("
🐢 Biomap by Ekimetrics 🐢
", unsafe_allow_html=True) - st.markdown("
Estimate Biodiversity in the world with the help of land cover.
", unsafe_allow_html=True) - st.markdown("
The segmentation model is an association of UNet and DinoV1, trained on the CORINE dataset. Land use is divided into 6 different classes, and each class is assigned a GBS score from 0 to 1.
", unsafe_allow_html=True) - st.markdown("
Buildings : 0.1 | Infrastructure : 0.1 | Cultivation : 0.4 | Wetland : 0.9 | Water : 0.9 | Natural green : 1
", unsafe_allow_html=True) - st.markdown("
The score is then averaged over the full image.
", unsafe_allow_html=True) - - if st.session_state["submit"]: - fig = inference_on_location(model, st.session_state["lat"], st.session_state["long"], st.session_state["start_date"], st.session_state["end_date"], st.session_state["segment_interval"]) - st.session_state["infered"] = True - st.session_state["previous_fig"] = fig - - if st.session_state["submit2"]: - fig = inference_on_location_and_month(model, st.session_state["lat_2"], st.session_state["long_2"], st.session_state["date_2"]) - st.session_state["infered"] = True - st.session_state["previous_fig"] = fig - - if st.session_state["infered"]: - st.plotly_chart(st.session_state["previous_fig"], use_container_width=True) - - col_1, col_2 = st.columns([0.5, 0.5]) - with col_1: - m = folium.Map(location=[DEFAULT_LATITUDE, DEFAULT_LONGITUDE], zoom_start=DEFAULT_ZOOM) - m.add_child(folium.LatLngPopup()) - f_map = st_folium(m, width=FOLIUM_WIDTH, height=FOLIUM_HEIGHT) - - selected_latitude = DEFAULT_LATITUDE - selected_longitude = DEFAULT_LONGITUDE - - if f_map.get("last_clicked"): - selected_latitude = f_map["last_clicked"]["lat"] - selected_longitude = f_map["last_clicked"]["lng"] - - with col_2: - tabs1, tabs2 = st.tabs(["TimeLapse", "Single Image"]) - with tabs1: - submit = st.button("Predict TimeLapse", use_container_width=True, type="primary") - st.session_state["submit"] = submit - - col_tab1_1, col_tab1_2 = st.columns(2) - with col_tab1_1: - lat = st.text_input("latitude", value=selected_latitude) - st.session_state["lat"] = lat - with col_tab1_2: - long = st.text_input("longitude", value=selected_longitude) - st.session_state["long"] = long - - col_tab1_11, col_tab1_22 = st.columns(2) - years = list(range(MIN_YEAR, MAX_YEAR, 1)) - with col_tab1_11: - start_date = st.selectbox("Start date", years) - st.session_state["start_date"] = start_date - - end_years = [year for year in years if year > start_date] - with col_tab1_22: - end_date = st.selectbox("End date", end_years) - st.session_state["end_date"] = end_date - - segment_interval = st.radio("Interval of time between two segmentation", options=['month','2months', 'year'],horizontal=True) - st.session_state["segment_interval"] = segment_interval - - with tabs2: - submit2 = st.button("Predict Single Image", use_container_width=True, type="primary") - st.session_state["submit2"] = submit2 - - col_tab2_1, col_tab2_2 = st.columns(2) - with col_tab2_1: - lat_2 = st.text_input("lat.", value=selected_latitude) - st.session_state["lat_2"] = lat_2 - with col_tab2_2: - long_2 = st.text_input("long.", value=selected_longitude) - st.session_state["long_2"] = long_2 - - date_2 = st.text_input("date", "2021-01-01", placeholder="2021-01-01") - st.session_state["date_2"] = date_2 - - -if __name__ == "__main__": - model = init_app("my_train_config.yml") - app(model) \ No newline at end of file diff --git a/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/run.py b/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/run.py deleted file mode 100644 index 89e151cf75e05d4cbc77dff6e6c5a5daf6f79fe0..0000000000000000000000000000000000000000 --- a/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/run.py +++ /dev/null @@ -1,51 +0,0 @@ -import pandas as pd -from datasets import Dataset -from tqdm import tqdm - -from fis.feature_extraction.pipeline.pipeline import factory -from fis.utils.constants import ORGANISATION -from fis.utils.s3 import list_images_from_bucket, read_image_from_s3 - - -def make_dataset(pipeline_name: str) -> Dataset: - print("Listing images from 
S3...") - images = list_images_from_bucket() - images = images[:100000] - print(f"{len(images)} images to process.") - - pipeline = factory.get(pipeline_name) - data = [] - - print("Encoding images...") - for image_name in tqdm(images): - image = read_image_from_s3(image_name) - embeddings = pipeline.encode(image) - - for embedding in embeddings: - image_data = { - "path": image_name, - "embedding": embedding, - } - - data.append(image_data) - - df = pd.DataFrame(data) - dataset = Dataset.from_pandas(df) - - return dataset - - -def upload_dataset(dataset: Dataset, pipeline_name: str) -> None: - print("Uploading dataset...") - repo_id = "{}/{}".format(ORGANISATION, pipeline_name) - dataset.push_to_hub(repo_id=repo_id) - - -def main(): - pipeline_name = "dummy_swin_pipe" - dataset = make_dataset(pipeline_name=pipeline_name) - upload_dataset(dataset=dataset, pipeline_name=pipeline_name) - - -if __name__ == "__main__": - main() diff --git a/spaces/Gen-Sim/Gen-Sim/misc/compare_stats.py b/spaces/Gen-Sim/Gen-Sim/misc/compare_stats.py deleted file mode 100644 index c2a1e3272ee1fbe98fbe1280aa85822fce29255b..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/misc/compare_stats.py +++ /dev/null @@ -1,71 +0,0 @@ -import matplotlib as mpl - -mpl.use("Agg") -import argparse -import os -import pandas as pd -import seaborn as sns -import matplotlib.pyplot as plt -import matplotlib -import IPython - -font = { - "size": 22, -} -matplotlib.rc("font", **font) -sns.set_context("paper", font_scale=2.0) - - -def mkdir_if_missing(dst_dir): - if not os.path.exists(dst_dir): - os.makedirs(dst_dir) - - -def save_figure(name, title=""): - print(f"output/output_figures/{name}.png") - if len(title) > 0: - plt.title(title) - plt.tight_layout() - mkdir_if_missing(f"output/output_figures/{name[:30]}") - plt.savefig(f"output/output_figures/{name[:30]}/output.png") - plt.clf() - - -def main(multirun_out, title): - dfs = [] - suffix = "" - run_num = 0 - - for rundir in (sorted(multirun_out.split(","))): - runpath = os.path.join('output/output_stats', rundir) - statspath = os.path.join(runpath, "eval_results.csv") - if os.path.exists(statspath): - run_num += 1 - df = pd.read_csv(statspath) - dfs.append(df) - - # merge dfs, which have shared column names - df = pd.concat(dfs) - title += f" run: {run_num} " - - # rewards - fig, ax = plt.subplots(figsize=(16, 8)) - sns_plot = sns.barplot( - data=df, x="task", y="success", hue='method', errorbar=("sd", 1), palette="deep" - ) - - # label texts - for container in ax.containers: - ax.bar_label(container, label_type="center", fontsize="x-large", fmt="%.2f") - ax.set_xticklabels(['\n'.join(str(xlabel.get_text()).split("-")) for xlabel in ax.get_xticklabels()]) - - # save plot - save_figure(f"{multirun_out}_{title}{suffix}", title) - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--multirun_out", type=str) - parser.add_argument("--title", type=str, default="") - - args = parser.parse_args() - main(args.multirun_out, args.title) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index e94553294294fa49952f2dfe0e3c64a5e00bc878..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = 
'./libra_faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_1x_coco.py deleted file mode 100644 index ed3a96c7dec922fcc73a3ab1446ffdf4a756c152..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_1x_coco.py +++ /dev/null @@ -1,52 +0,0 @@ -_base_ = [ - '../_base_/models/retinanet_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - pretrained='torchvision://resnet101', - backbone=dict(depth=101), - bbox_head=dict( - _delete_=True, - type='SABLRetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict( - type='BucketingBBoxCoder', num_buckets=14, scale_factor=3.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.5), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 7f0f83fe39da31fe9a5b497e0481e1c79a33e764..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './gcnet_r50-d8_512x1024_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Grezz/generate_human_motion/pyrender/pyrender/__init__.py b/spaces/Grezz/generate_human_motion/pyrender/pyrender/__init__.py deleted file mode 100644 index ee3709846823b7c4b71b22da0e24d63d805528a8..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/pyrender/pyrender/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -from .camera import (Camera, PerspectiveCamera, OrthographicCamera, - IntrinsicsCamera) -from .light import Light, PointLight, DirectionalLight, SpotLight -from .sampler import Sampler -from .texture import Texture -from .material import Material, MetallicRoughnessMaterial -from .primitive import Primitive -from .mesh import Mesh -from .node import Node -from .scene import Scene -from .renderer import Renderer -from .viewer import Viewer -from 
.offscreen import OffscreenRenderer -from .version import __version__ -from .constants import RenderFlags, TextAlign, GLTF - -__all__ = [ - 'Camera', 'PerspectiveCamera', 'OrthographicCamera', 'IntrinsicsCamera', - 'Light', 'PointLight', 'DirectionalLight', 'SpotLight', - 'Sampler', 'Texture', 'Material', 'MetallicRoughnessMaterial', - 'Primitive', 'Mesh', 'Node', 'Scene', 'Renderer', 'Viewer', - 'OffscreenRenderer', '__version__', 'RenderFlags', 'TextAlign', - 'GLTF' -] diff --git a/spaces/GroNLP/neural-acoustic-distance/neural_acoustic_distance.py b/spaces/GroNLP/neural-acoustic-distance/neural_acoustic_distance.py deleted file mode 100644 index 30d7460b173fad293806dac58b06275f09e63c86..0000000000000000000000000000000000000000 --- a/spaces/GroNLP/neural-acoustic-distance/neural_acoustic_distance.py +++ /dev/null @@ -1,238 +0,0 @@ -import os.path -from typing import Optional - -import matplotlib.pyplot as plt -import numpy as np -import soundfile as sf -import streamlit as st -import torch -import transformers -from dtw import dtw -from scipy import signal -from transformers import AutoConfig -from transformers.models.wav2vec2 import Wav2Vec2Model -from datetime import datetime -from random import randrange - -import os -import psutil - -def play_audio(filename): - audio_file = open(filename, "rb") - audio_bytes = audio_file.read() - st.audio(audio_bytes, format="audio/wav") - - -def aligner(x, y): - return dtw(x, y, keep_internals=True) - - -def compute_costs(gcm): - res = [[] for _ in range(gcm.N)] - - for i in range(gcm.index1.shape[0]): - d = gcm.localCostMatrix[gcm.index1[i], gcm.index2[i]] - res[gcm.index1[i]].append(d) - - n = [len(x) for x in res] - res = [np.mean(x) for x in res] - return res, n - - -#@st.cache(show_spinner=False, hash_funcs={torch.nn.parameter.Parameter: lambda _: None}, max_entries=1) -def load_wav2vec2_featurizer(model_id: str, layer: Optional[int] = None): - transformers.logging.set_verbosity(transformers.logging.ERROR) - - model_kwargs = {} - if layer is not None: - model_kwargs["num_hidden_layers"] = int(layer) if layer > 0 else 0 - - with st.spinner("Loading model..."): - model = Wav2Vec2Model.from_pretrained(model_id, **model_kwargs) - model.eval() - if torch.cuda.is_available(): - model.cuda() - # st.success("Done!") - - return model - - -#@st.cache(persist=True, show_spinner=False, max_entries=3) -def run(model_id, layer, filename_x, filename_y): - model = load_wav2vec2_featurizer(model_id, layer) - - @torch.no_grad() - def _featurize(path): - input_values, rate = sf.read(path, dtype=np.float32) - if len(input_values.shape) == 2: - input_values = input_values.mean(1) - if rate != 16_000: - new_length = int(input_values.shape[0] / rate * 16_000) - input_values = signal.resample(input_values, new_length) - - input_values = torch.from_numpy(input_values).unsqueeze(0) - if torch.cuda.is_available(): - input_values = input_values.cuda() - - if layer is None: - hidden_states = model(input_values, output_hidden_states=True).hidden_states - hidden_states = [s.squeeze(0).cpu().numpy() for s in hidden_states] - return hidden_states - - if layer >= 0: - hidden_state = model(input_values).last_hidden_state.squeeze(0).cpu().numpy() - else: - hidden_state = model.feature_extractor(input_values) - hidden_state = hidden_state.transpose(1, 2) - if layer == -1: - hidden_state = model.feature_projection(hidden_state) - hidden_state = hidden_state.squeeze(0).cpu().numpy() - - return hidden_state - - with st.spinner("Measuring distance..."): - feats_x = 
_featurize(filename_x) - feats_y = _featurize(filename_y) - print('3. Features computed', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test - gcm = aligner(feats_x, feats_y) - print('4. Alignments computed', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test - - d = gcm.normalizedDistance - print("Distance:", d) - - c, n = compute_costs(gcm) - print('5. Costs computed', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test - - del model - return d, c, n - - -def main(): - st.title("Word-level Neural Acoustic Distance Visualizer") - - st.write( - "This tool visualizes pronunciation differences between two recordings of the same word. The two recordings have to be wave files containing a single spoken word. \n\n\ -Choose any wav2vec 2.0 compatible model identifier on the [Hugging Face Model Hub](https://huggingface.co/models?filter=wav2vec2) and select the output layer you want to use.\n\n\ -To upload your own recordings select 'custom upload' in the audio file selection step. The first recording is put on the x-axis of the plot and the second one will be the reference recording for computing distance.\n\ -You should already see an example plot of two sample recordings.\n\n\ -This visualization tool is part of [neural representations for modeling variation in speech](https://doi.org/10.1016/j.wocn.2022.101137). \n\ -Please see our paper for further details.") - - st.subheader("Model selection:") - - model_id = st.selectbox("Select the wav2vec 2.0 model you want to use:", - ("facebook/wav2vec2-large-960h", "facebook/wav2vec2-large", "facebook/wav2vec2-large-xlsr-53", - "facebook/wav2vec2-xls-r-300m", "other"), - index=0) - - if model_id == "other": - model_id = st.text_input("Enter the wav2vec 2.0 model you want to use:", - value="facebook/wav2vec2-large-960h", - key="model") - - print(f"\n### Start new run\n") # test - - try: - cfg = AutoConfig.from_pretrained(model_id) - layer = st.number_input("Select the layer you want to use:", min_value=1, max_value=cfg.num_hidden_layers, value=10) - except OSError: - st.error( - "Please select a wav2vec 2.0 compatible model identifier on the [Hugging Face Model Hub](https://huggingface.co/models?filter=wav2vec2)." - ) - layer = None - - print('1. Model selected', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test - - st.subheader("Audio file selection:") - - filename_x = st.selectbox("Filename (x-axis):", - ("falling_huud_mobiel_201145.wav", "falling_hood_mobiel_203936.wav", "custom upload")) - - if filename_x == "falling_huud_mobiel_201145.wav": - filename_x = "./examples/falling_huud_mobiel_201145.wav" - play_audio(filename_x) - if filename_x == "falling_hood_mobiel_203936.wav": - filename_x = "./examples/falling_hood_mobiel_203936.wav" - play_audio(filename_x) - - filename_y = st.selectbox("Filename (y-axis):", - ("falling_hood_mobiel_203936.wav", "falling_huud_mobiel_201145.wav", "custom upload")) - - if filename_y == "falling_huud_mobiel_201145.wav": - filename_y = "./examples/falling_huud_mobiel_201145.wav" - play_audio(filename_y) - if filename_y == "falling_hood_mobiel_203936.wav": - filename_y = "./examples/falling_hood_mobiel_203936.wav" - play_audio(filename_y) - - if filename_x == "custom upload": - filename_x = st.file_uploader("Choose a file (x-axis)", key="f_x") - if filename_y == "custom upload": - filename_y = st.file_uploader("Choose a file (y-axis)", key="f_y") - - print('2. 
Files selected', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test - - if filename_x is not None and filename_y is not None and layer is not None: - print(f"\nX: {filename_x}\nY: {filename_y}") - - d, c, n = run(model_id, layer, filename_x, filename_y) - # d_b, c_b, n_b = run(featurizer_b) - - fig, axes = plt.subplots(figsize=(4, 2.5)) - - print('6. Plot init', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test - - window_size = 9 - rate = 20 - x = np.arange(0, len(c) * rate, rate) - offset = (window_size - 1) // 2 - x_ = x[offset:-offset] - - # Target layer - axes.plot(x, c, alpha=0.5, color="gray", linestyle="--") - axes.scatter(x, c, np.array(n) * 10, color="gray") - c_ = np.convolve(c, np.ones(window_size) / window_size, mode="valid") - axes.plot(x_, c_) - - # Last layer - # axes.plot(x, c_b, alpha=0.5, color="gray", linestyle="--") - # axes.scatter(x, c_b, np.array(n_b) * 10, color="gray") - # c_b_ = np.convolve(c_b, np.ones(window_size) / window_size, mode="valid") - # axes.plot(x_, c_b_, linestyle="--") - - axes.set_xlabel("time (ms)") - axes.set_ylabel("distance per frame") - axes.hlines(y=d, xmin=0, xmax=np.max(x), linestyles="dashdot") - - plt.tight_layout(pad=0) - plt_id = randrange(0, 10) - plt.savefig("./output/plot" + str(plt_id) + ".pdf") - st.pyplot(fig) - -main() - -print('7. Plot filled', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test - -if os.path.isfile("./output/plot.pdf"): - st.caption(" Visualization of neural acoustic distances\ - per frame (based on wav2vec 2.0) with the pronunciation of\ - the first filename on the x-axis and distances to the pronunciation\ - of second filename on the y-axis. The horizontal line represents\ - the global distance value (i.e. the average of all individual frames).\ - The blue continuous line represents the moving average distance based on 9 frames,\ - corresponding to 180ms. As a result of the moving average, the blue line does not cover the entire duration of\ - the sample. Larger bullet sizes indicate that multiple\ - frames in the pronunciation on the y-axis are aligned to a single frame in the pronunciation on the x-axis.") - -with open("./output/plot.pdf", "rb") as file: - btn = st.download_button(label="Download plot", data=file, file_name="plot.pdf", mime="image/pdf") - -print('8. End', datetime.now().strftime('%d-%m-%Y %H:%M:%S')) # test -print(f"9. RAM used: {psutil.Process().memory_info().rss / (1024 * 1024):.2f} MB") # test - -for name in dir(): - if not name.startswith('_'): - del globals()[name] - -import gc -gc.collect() \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/dedup_all.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/dedup_all.py deleted file mode 100644 index ef39c05ee606aaeda1d9e94970932d2241a8b281..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/dedup_all.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - - -import os -import glob -import argparse -from utils.dedup import deup - -import sys -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--from-folder", type=str, required=True, - help="the data folder to be dedup") - parser.add_argument("--to-folder", type=str, required=True, - help="the data folder to save deduped data") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - - if args.directions is None: - raw_files = glob.glob(f'{args.from_folder}/train*') - - directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files] - else: - directions = args.directions.split(',') - directions = sorted(set(directions)) - - for direction in directions: - src, tgt = direction.split('-') - src_file = f'{args.from_folder}/train.{src}-{tgt}.{src}' - tgt_file = f'{args.from_folder}/train.{src}-{tgt}.{tgt}' - src_file_out = f'{args.to_folder}/train.{src}-{tgt}.{src}' - tgt_file_out = f'{args.to_folder}/train.{src}-{tgt}.{tgt}' - assert src_file != src_file_out - assert tgt_file != tgt_file_out - print(f'deduping {src_file}, {tgt_file}') - deup(src_file, tgt_file, src_file_out, tgt_file_out) - - -if __name__ == "__main__": - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py deleted file mode 100644 index 0269a1e2853854745e23b07931294f37b67d0295..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import LegacyFairseqLRScheduler, register_lr_scheduler -import logging -import ast - -logger = logging.getLogger(__name__) -logger.setLevel(logging.WARNING) - - -@register_lr_scheduler("manual") -class ManualSchedule(LegacyFairseqLRScheduler): - """Decay the LR on a manual schedule.""" - - def __init__(self, args, optimizer): - super().__init__(args, optimizer) - - self.epoch2lr = self.parse_manuallr_args(args.epoch2lr) - self.update2lr = self.parse_manuallr_args(args.update2lr) - logger.info("@@@ ManualSchedule epoch2lr={}".format(self.epoch2lr)) - logger.info("@@@ ManualSchedule update2lr={}".format(self.update2lr)) - - if 1 in self.epoch2lr: - self.lr = self.epoch2lr[1] - elif 1 in self.update2lr: - self.lr = self.update2lr[1] - else: - self.lr = args.lr[0] - self.optimizer.set_lr(self.lr) # Set the beginning of the epoch. 
- - def parse_manuallr_args(self, lr_args_str): - lr_dict = ast.literal_eval(lr_args_str.replace(' ', '')) - if not isinstance(lr_dict, dict): - raise ValueError("epoch2lr/update2lr must be abel to evaluated to a dict") - - lr_args = {} - logger.info("@@@ after parsing input dictionary lr_dict = {}".format(lr_dict)) - for key, val in lr_dict.items(): - if "," in key: - for k in key.split(","): - lr_args[int(k)] = float(val) - elif "-" in key: - s = int(key.split("-")[0]) - e = int(key.split("-")[1]) - for k in range(s, e + 1, 1): - lr_args[k] = float(val) - else: - lr_args[int(key)] = float(val) - - return lr_args - - @staticmethod - def add_args(parser): - """Add arguments to the parser for this LR scheduler.""" - # fmt: off - parser.add_argument( - "--epoch2lr", - type=str, - metavar="DICT", - default="{}", - help="a dictionary used to set lr for each epoch manually", - ) - parser.add_argument( - "--update2lr", - type=str, - metavar="DICT", - default="{}", - help="a dictionary used to set lr for each update manually", - ) - # fmt: on - - def state_dict(self): - return {"lr": self.lr} - - def load_state_dict(self, state_dict): - if "lr" in state_dict: - self.lr = state_dict["lr"] - - def get_next_lr(self, epoch): - manual_keys = [k for k in self.epoch2lr if k <= epoch] - if manual_keys: - manual_lr = self.epoch2lr[max(manual_keys)] - else: - logger.warning("@@@ epoch={} does not exist in manual lr input. epoch2lr={}...".format( - epoch, list(self.epoch2lr.items())[:min(10, len(self.epoch2lr.keys())-1)] - )) - manual_lr = self.optimizer.get_lr() - return manual_lr - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - self.lr = self.get_next_lr(epoch) - self.optimizer.set_lr(self.lr) - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - manual_keys = [k for k in self.update2lr if k <= num_updates] - if manual_keys: - manual_lr = self.update2lr[max(manual_keys)] - else: - logger.warning("epoch={} does not exist in manual lr input update2lr={}...".format( - num_updates, list(self.update2lr.items())[:min(10, len(self.update2lr.keys())-1)])) - manual_lr = self.optimizer.get_lr() - - self.optimizer.set_lr(manual_lr) - return self.optimizer.get_lr() diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/utils.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/utils.py deleted file mode 100644 index a591aa319ccb264110111cda55c4a232b41aae74..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/utils.py +++ /dev/null @@ -1,282 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - iteration = 1 - if "iteration" in checkpoint_dict.keys(): - iteration = checkpoint_dict["iteration"] - if "learning_rate" in checkpoint_dict.keys(): - learning_rate = checkpoint_dict["learning_rate"] - if optimizer is not None and "optimizer" in checkpoint_dict.keys(): - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = 
model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info( - "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration) - ) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - "model": state_dict, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats="HWC") - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots() - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment, aspect="auto", origin="lower", interpolation="none") - fig.colorbar(im, ax=ax) - xlabel = "Decoder timestep" - if info is not None: - xlabel += "\n\n" + info - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding="utf-8") as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument("-c", "--config", type=str, help="JSON file for 
configuration") - parser.add_argument("-m", "--model", type=str, help="Model name") - # parser.add_argument('-g', '--gan', type=str, - # help='Model name') - parser.add_argument("-l", "--logs", type=str, help="logs name") - # parser.add_argument('-s', '--mels', type=str, - # help='logs name') - - args = parser.parse_args() - # model_dir = os.path.join("./logs", args.model) - model_dir = args.model - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - - # if not config_path : config_path = config_save_path - - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.log_dir = args.logs - # hparams.mels_dir = args.mels - # hparams.gan_dir = args.gan - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/cli/cliparser.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/cli/cliparser.py deleted file mode 100644 index eb8c8d712668e0814c0f25c162d7a73b329a4da4..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/cli/cliparser.py +++ /dev/null @@ -1,266 +0,0 @@ -import argparse -import sys - -from indicnlp import loader -from indicnlp.tokenize import indic_tokenize -from indicnlp.tokenize import indic_detokenize -from indicnlp.normalize import indic_normalize -from indicnlp.morph import unsupervised_morph -from indicnlp.tokenize import sentence_tokenize -from indicnlp.syllable import syllabifier -from indicnlp.transliterate import unicode_transliterate -from indicnlp.transliterate import script_unifier - -DEFAULT_ENCODING='utf-8' - -def run_detokenize(args): - for line in args.infile: - args.outfile.write(indic_detokenize.trivial_detokenize(line,args.lang)) - -def run_tokenize(args): - for line in args.infile: - args.outfile.write(' '.join( - indic_tokenize.trivial_tokenize(line,args.lang))) - -def run_sentence_split(args): - text=' '.join([ l.replace('\n','').replace('\r','') for l in args.infile]) - outlines=sentence_tokenize.sentence_split(text,args.lang) - for line in outlines: - args.outfile.write(line+'\n') - -def run_normalize(args): - - # TODO: add more options to cli - remove_nuktas=False - normalize_nasals='do_nothing' - - # create normalizer - factory=indic_normalize.IndicNormalizerFactory() - normalizer=factory.get_normalizer(args.lang, - remove_nuktas=remove_nuktas, - nasals_mode=normalize_nasals) - - # DO normalization - for line in args.infile: - normalized_line=normalizer.normalize(line) - args.outfile.write(normalized_line) - -def run_morph(args): - - add_marker=False - analyzer=unsupervised_morph.UnsupervisedMorphAnalyzer(args.lang,add_marker) - for line in args.infile: - morph_tokens=analyzer.morph_analyze_document(line.strip().split(' ')) - args.outfile.write(' '.join(morph_tokens) + '\n') - -def run_syllabify(args): - for line in args.infile: - new_line = ' '.join( - [ ' '.join(syllabifier.orthographic_syllabify(w,args.lang)) - for w in line.strip().split(' ') ] - ) - args.outfile.write(new_line+'\n') - -def run_wc(args): - # if args.l==False 
and args.w==False and args.c==False: - # args.l, args.w, args.c= True, True, True - - nl=0 - nw=0 - nc=0 - - for line in args.infile: - nl+=1 - nw+=len(line.strip(' ').split(' ')) - nc+=len(line) - - print('{} {} {}'.format(nl,nw,nc)) - -def run_indic2roman(args): - for line in args.infile: - transliterated_line=unicode_transliterate.ItransTransliterator.to_itrans( - line,args.lang) - args.outfile.write(transliterated_line) - -def run_roman2indic(args): - for line in args.infile: - transliterated_line=unicode_transliterate.ItransTransliterator.from_itrans( - line,args.lang) - args.outfile.write(transliterated_line) - -def run_script_unify(args): - - unifier=None - - if args.mode=='aggressive': - unifier=script_unifier.AggressiveScriptUnifier(nasals_mode='to_anusvaara_relaxed', common_lang=args.common_lang) - - elif args.mode=='basic': - unifier=script_unifier.BasicScriptUnifier(nasals_mode='do_nothing', - common_lang=args.common_lang) - - elif args.mode=='naive': - unifier=script_unifier.NaiveScriptUnifier(common_lang=args.common_lang) - - assert(unifier is not None) - - for line in args.infile: - transliterated_line=unifier.transform(line,args.lang) - args.outfile.write(transliterated_line) - -def run_script_convert(args): - for line in args.infile: - transliterated_line=unicode_transliterate.UnicodeIndicTransliterator.transliterate( - line,args.srclang,args.tgtlang) - args.outfile.write(transliterated_line) - -def add_common_monolingual_args(task_parser): - task_parser.add_argument('infile', - type=argparse.FileType('r',encoding=DEFAULT_ENCODING), - nargs='?', - default=sys.stdin, - help='Input File path', - ) - task_parser.add_argument('outfile', - type=argparse.FileType('w',encoding=DEFAULT_ENCODING), - nargs='?', - default=sys.stdout, - help='Output File path', - ) - task_parser.add_argument('-l', '--lang', - help='Language', - ) - -def add_common_bilingual_args(task_parser): - task_parser.add_argument('infile', - type=argparse.FileType('r',encoding=DEFAULT_ENCODING), - nargs='?', - default=sys.stdin, - help='Input File path', - ) - task_parser.add_argument('outfile', - type=argparse.FileType('w',encoding=DEFAULT_ENCODING), - nargs='?', - default=sys.stdout, - help='Output File path', - ) - task_parser.add_argument('-s', '--srclang', - help='Source Language', - ) - - task_parser.add_argument('-t', '--tgtlang', - help='Target Language', - ) - -def add_tokenize_parser(subparsers): - task_parser=subparsers.add_parser('tokenize', - help='tokenizer help') - add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_tokenize) - -def add_detokenize_parser(subparsers): - task_parser=subparsers.add_parser('detokenize', - help='de-tokenizer help') - add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_detokenize) - -def add_sentence_split_parser(subparsers): - task_parser=subparsers.add_parser('sentence_split', help='sentence split help') - add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_sentence_split) - -def add_normalize_parser(subparsers): - task_parser=subparsers.add_parser('normalize', help='normalizer help') - add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_normalize) - -def add_morph_parser(subparsers): - task_parser=subparsers.add_parser('morph', help='morph help') - add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_morph) - -def add_syllabify_parser(subparsers): - task_parser=subparsers.add_parser('syllabify', help='syllabify help') - 
add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_syllabify) - -def add_wc_parser(subparsers): - task_parser=subparsers.add_parser('wc', help='wc help') - - task_parser.add_argument('infile', - type=argparse.FileType('r',encoding=DEFAULT_ENCODING), - nargs='?', - default=sys.stdin, - help='Input File path', - ) - # task_parser.add_argument('-l', action='store_true') - # task_parser.add_argument('-w', action='store_true') - # task_parser.add_argument('-c', action='store_true') - # task_parser.set_defaults(l=False) - # task_parser.set_defaults(w=False) - # task_parser.set_defaults(c=False) - - task_parser.set_defaults(func=run_wc) - -def add_indic2roman_parser(subparsers): - task_parser=subparsers.add_parser('indic2roman', help='indic2roman help') - add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_indic2roman) - -def add_roman2indic_parser(subparsers): - task_parser=subparsers.add_parser('roman2indic', help='roman2indic help') - add_common_monolingual_args(task_parser) - task_parser.set_defaults(func=run_indic2roman) - -def add_script_unify_parser(subparsers): - task_parser=subparsers.add_parser('script_unify', help='script_unify help') - add_common_monolingual_args(task_parser) - task_parser.add_argument('-m','--mode', - default='basic', - choices=['naive', 'basic', 'aggressive'] , - help='Script unification mode', - ) - task_parser.add_argument('-c','--common_lang', - default='hi', - help='Common language in which all languages are represented', - ) - - task_parser.set_defaults(func=run_script_unify) - -def add_script_convert_parser(subparsers): - task_parser=subparsers.add_parser('script_convert', help='script convert help') - add_common_bilingual_args(task_parser) - task_parser.set_defaults(func=run_script_convert) - -def get_parser(): - parser = argparse.ArgumentParser(prog='indicnlp') - subparsers = parser.add_subparsers(help='Invoke each operation with one of the subcommands', dest='subcommand') - - add_tokenize_parser(subparsers) - add_detokenize_parser(subparsers) - add_sentence_split_parser(subparsers) - add_normalize_parser(subparsers) - - add_morph_parser(subparsers) - add_syllabify_parser(subparsers) - - add_wc_parser(subparsers) - - add_indic2roman_parser(subparsers) - add_roman2indic_parser(subparsers) - add_script_unify_parser(subparsers) - - add_script_convert_parser(subparsers) - - return parser - -def main(): - parser=get_parser() - args=parser.parse_args() - # print(args) - args.func(args) - -if __name__ == '__main__': - loader.load() - main() - diff --git a/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/README.md b/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/README.md deleted file mode 100644 index 0201ebf6de813acfb8bfd4997583bc5f5c0d036e..0000000000000000000000000000000000000000 --- a/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Bark Voice Cloning -emoji: 🐶 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -python_version: 3.10.11 -app_file: app.py -models: -- facebook/hubert-base-ls960 -- GitMylo/bark-voice-cloning -pinned: false -license: mit -duplicated_from: GitMylo/bark-voice-cloning ---- diff --git a/spaces/HopeMan/DoomGuy/README.md b/spaces/HopeMan/DoomGuy/README.md deleted file mode 100644 index 3d8a7ef70cd42eae164ac861ce2b888c0997c387..0000000000000000000000000000000000000000 --- a/spaces/HopeMan/DoomGuy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: DoomGuy -emoji: 🔥 -colorFrom: gray -colorTo: red 
-sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/iterative_refinement_generator.py b/spaces/ICML2022/OFA/fairseq/fairseq/iterative_refinement_generator.py deleted file mode 100644 index 4fb0946f499329ceb130761b59675d761df1c158..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/iterative_refinement_generator.py +++ /dev/null @@ -1,359 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple - -import numpy as np -import torch -from fairseq import utils - - -DecoderOut = namedtuple( - "IterativeRefinementDecoderOut", - ["output_tokens", "output_scores", "attn", "step", "max_step", "history"], -) - - -class IterativeRefinementGenerator(object): - def __init__( - self, - tgt_dict, - models=None, - eos_penalty=0.0, - max_iter=10, - max_ratio=2, - beam_size=1, - decoding_format=None, - retain_dropout=False, - adaptive=True, - retain_history=False, - reranking=False, - ): - """ - Generates translations based on iterative refinement. - - Args: - tgt_dict: target dictionary - eos_penalty: if > 0.0, it penalized early-stopping in decoding - max_iter: maximum number of refinement iterations - max_ratio: generate sequences of maximum length ax, where x is the source length - decoding_format: decoding mode in {'unigram', 'ensemble', 'vote', 'dp', 'bs'} - retain_dropout: retaining dropout in the inference - adaptive: decoding with early stop - """ - self.bos = tgt_dict.bos() - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() - self.vocab_size = len(tgt_dict) - self.eos_penalty = eos_penalty - self.max_iter = max_iter - self.max_ratio = max_ratio - self.beam_size = beam_size - self.reranking = reranking - self.decoding_format = decoding_format - self.retain_dropout = retain_dropout - self.retain_history = retain_history - self.adaptive = adaptive - self.models = models - - def generate_batched_itr( - self, - data_itr, - maxlen_a=None, - maxlen_b=None, - cuda=False, - timer=None, - prefix_size=0, - ): - """Iterate over a batched dataset and yield individual translations. - - Args: - maxlen_a/b: generate sequences of maximum length ax + b, - where x is the source sentence length. - cuda: use GPU for generation - timer: StopwatchMeter for timing generations. - """ - - for sample in data_itr: - if "net_input" not in sample: - continue - if timer is not None: - timer.start() - with torch.no_grad(): - hypos = self.generate( - self.models, - sample, - prefix_tokens=sample["target"][:, :prefix_size] - if prefix_size > 0 - else None, - ) - if timer is not None: - timer.stop(sample["ntokens"]) - for i, id in enumerate(sample["id"]): - # remove padding - src = utils.strip_pad(sample["net_input"]["src_tokens"][i, :], self.pad) - ref = utils.strip_pad(sample["target"][i, :], self.pad) - yield id, src, ref, hypos[i] - - @torch.no_grad() - def generate(self, models, sample, prefix_tokens=None, constraints=None): - if constraints is not None: - raise NotImplementedError( - "Constrained decoding with the IterativeRefinementGenerator is not supported" - ) - - # TODO: iterative refinement generator does not support ensemble for now. 
- if not self.retain_dropout: - for model in models: - model.eval() - - model, reranker = models[0], None - if self.reranking: - assert len(models) > 1, "Assuming the last checkpoint is the reranker" - assert ( - self.beam_size > 1 - ), "Reranking requires multiple translation for each example" - - reranker = models[-1] - models = models[:-1] - - if len(models) > 1 and hasattr(model, "enable_ensemble"): - assert model.allow_ensemble, "{} does not support ensembling".format( - model.__class__.__name__ - ) - model.enable_ensemble(models) - - # TODO: better encoder inputs? - src_tokens = sample["net_input"]["src_tokens"] - src_lengths = sample["net_input"]["src_lengths"] - bsz, src_len = src_tokens.size() - - # initialize - encoder_out = model.forward_encoder([src_tokens, src_lengths]) - prev_decoder_out = model.initialize_output_tokens(encoder_out, src_tokens) - - if self.beam_size > 1: - assert ( - model.allow_length_beam - ), "{} does not support decoding with length beam.".format( - model.__class__.__name__ - ) - - # regenerate data based on length-beam - length_beam_order = ( - utils.new_arange(src_tokens, self.beam_size, bsz).t().reshape(-1) - ) - encoder_out = model.encoder.reorder_encoder_out( - encoder_out, length_beam_order - ) - prev_decoder_out = model.regenerate_length_beam( - prev_decoder_out, self.beam_size - ) - bsz = bsz * self.beam_size - - sent_idxs = torch.arange(bsz) - prev_output_tokens = prev_decoder_out.output_tokens.clone() - - if self.retain_history: - prev_decoder_out = prev_decoder_out._replace(history=[prev_output_tokens]) - - finalized = [[] for _ in range(bsz)] - - def is_a_loop(x, y, s, a): - b, l_x, l_y = x.size(0), x.size(1), y.size(1) - if l_x > l_y: - y = torch.cat([y, x.new_zeros(b, l_x - l_y).fill_(self.pad)], 1) - s = torch.cat([s, s.new_zeros(b, l_x - l_y)], 1) - if a is not None: - a = torch.cat([a, a.new_zeros(b, l_x - l_y, a.size(2))], 1) - elif l_x < l_y: - x = torch.cat([x, y.new_zeros(b, l_y - l_x).fill_(self.pad)], 1) - return (x == y).all(1), y, s, a - - def finalized_hypos(step, prev_out_token, prev_out_score, prev_out_attn): - cutoff = prev_out_token.ne(self.pad) - tokens = prev_out_token[cutoff] - if prev_out_score is None: - scores, score = None, None - else: - scores = prev_out_score[cutoff] - score = scores.mean() - - if prev_out_attn is None: - hypo_attn, alignment = None, None - else: - hypo_attn = prev_out_attn[cutoff] - alignment = hypo_attn.max(dim=1)[1] - return { - "steps": step, - "tokens": tokens, - "positional_scores": scores, - "score": score, - "hypo_attn": hypo_attn, - "alignment": alignment, - } - - for step in range(self.max_iter + 1): - - decoder_options = { - "eos_penalty": self.eos_penalty, - "max_ratio": self.max_ratio, - "decoding_format": self.decoding_format, - } - prev_decoder_out = prev_decoder_out._replace( - step=step, - max_step=self.max_iter + 1, - ) - - decoder_out = model.forward_decoder( - prev_decoder_out, encoder_out, **decoder_options - ) - - if self.adaptive: - # terminate if there is a loop - terminated, out_tokens, out_scores, out_attn = is_a_loop( - prev_output_tokens, - decoder_out.output_tokens, - decoder_out.output_scores, - decoder_out.attn, - ) - decoder_out = decoder_out._replace( - output_tokens=out_tokens, - output_scores=out_scores, - attn=out_attn, - ) - - else: - terminated = decoder_out.output_tokens.new_zeros( - decoder_out.output_tokens.size(0) - ).bool() - - if step == self.max_iter: # reach last iteration, terminate - terminated.fill_(1) - - # collect finalized sentences - 
finalized_idxs = sent_idxs[terminated] - finalized_tokens = decoder_out.output_tokens[terminated] - finalized_scores = decoder_out.output_scores[terminated] - finalized_attn = ( - None - if (decoder_out.attn is None or decoder_out.attn.size(0) == 0) - else decoder_out.attn[terminated] - ) - - if self.retain_history: - finalized_history_tokens = [h[terminated] for h in decoder_out.history] - - for i in range(finalized_idxs.size(0)): - finalized[finalized_idxs[i]] = [ - finalized_hypos( - step, - finalized_tokens[i], - finalized_scores[i], - None if finalized_attn is None else finalized_attn[i], - ) - ] - - if self.retain_history: - finalized[finalized_idxs[i]][0]["history"] = [] - for j in range(len(finalized_history_tokens)): - finalized[finalized_idxs[i]][0]["history"].append( - finalized_hypos( - step, finalized_history_tokens[j][i], None, None - ) - ) - - # check if all terminated - if terminated.sum() == terminated.size(0): - break - - # for next step - not_terminated = ~terminated - prev_decoder_out = decoder_out._replace( - output_tokens=decoder_out.output_tokens[not_terminated], - output_scores=decoder_out.output_scores[not_terminated], - attn=decoder_out.attn[not_terminated] - if (decoder_out.attn is not None and decoder_out.attn.size(0) > 0) - else None, - history=[h[not_terminated] for h in decoder_out.history] - if decoder_out.history is not None - else None, - ) - encoder_out = model.encoder.reorder_encoder_out( - encoder_out, not_terminated.nonzero(as_tuple=False).squeeze() - ) - sent_idxs = sent_idxs[not_terminated] - prev_output_tokens = prev_decoder_out.output_tokens.clone() - - if self.beam_size > 1: - if reranker is not None: - finalized = self.rerank( - reranker, finalized, [src_tokens, src_lengths], self.beam_size - ) - - # aggregate information from length beam - finalized = [ - finalized[ - np.argmax( - [ - finalized[self.beam_size * i + j][0]["score"] - for j in range(self.beam_size) - ] - ) - + self.beam_size * i - ] - for i in range(len(finalized) // self.beam_size) - ] - - return finalized - - def rerank(self, reranker, finalized, encoder_input, beam_size): - def rebuild_batch(finalized): - finalized_tokens = [f[0]["tokens"] for f in finalized] - finalized_maxlen = max(f.size(0) for f in finalized_tokens) - final_output_tokens = ( - finalized_tokens[0] - .new_zeros(len(finalized_tokens), finalized_maxlen) - .fill_(self.pad) - ) - for i, f in enumerate(finalized_tokens): - final_output_tokens[i, : f.size(0)] = f - return final_output_tokens - - final_output_tokens = rebuild_batch(finalized) - final_output_tokens[ - :, 0 - ] = self.eos # autoregressive model assumes starting with EOS - - reranker_encoder_out = reranker.encoder(*encoder_input) - length_beam_order = ( - utils.new_arange( - final_output_tokens, beam_size, reranker_encoder_out.encoder_out.size(1) - ) - .t() - .reshape(-1) - ) - reranker_encoder_out = reranker.encoder.reorder_encoder_out( - reranker_encoder_out, length_beam_order - ) - reranking_scores = reranker.get_normalized_probs( - reranker.decoder(final_output_tokens[:, :-1], reranker_encoder_out), - True, - None, - ) - reranking_scores = reranking_scores.gather(2, final_output_tokens[:, 1:, None]) - reranking_masks = final_output_tokens[:, 1:].ne(self.pad) - reranking_scores = ( - reranking_scores[:, :, 0].masked_fill_(~reranking_masks, 0).sum(1) - ) - reranking_scores = reranking_scores / reranking_masks.sum(1).type_as( - reranking_scores - ) - - for i in range(len(finalized)): - finalized[i][0]["score"] = reranking_scores[i] - - return finalized 
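
Editor's note: the deleted fairseq iterative_refinement_generator.py above ends by collapsing the length beam, i.e. keeping only the highest-scoring of the `beam_size` candidate translations produced for each source sentence. The following is a minimal, self-contained Python sketch of that aggregation step only; the helper name, the toy hypothesis dicts, and the scores are illustrative and not part of the original file.

# Sketch of the length-beam aggregation in IterativeRefinementGenerator.generate().
# Hypotheses are stored flat, beam-major per sentence, exactly as in the deleted code.
import numpy as np

def aggregate_length_beam(finalized, beam_size):
    """Keep, for each sentence, the best-scoring of its beam_size candidates."""
    num_sentences = len(finalized) // beam_size
    best = []
    for i in range(num_sentences):
        candidates = finalized[i * beam_size:(i + 1) * beam_size]
        j = int(np.argmax([c[0]["score"] for c in candidates]))  # highest average log-prob
        best.append(candidates[j])
    return best

# Toy usage: 2 sentences, a length beam of 3 candidates each.
finalized = [
    [{"tokens": [4, 5], "score": 0.31}],
    [{"tokens": [4, 5, 6], "score": 0.52}],
    [{"tokens": [4], "score": 0.20}],
    [{"tokens": [7, 8, 9], "score": 0.47}],
    [{"tokens": [7, 8], "score": 0.44}],
    [{"tokens": [7], "score": 0.12}],
]
print(aggregate_length_beam(finalized, beam_size=3))
# keeps the 0.52 candidate for sentence 0 and the 0.47 candidate for sentence 1
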
diff --git a/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/upfirdn2d.py b/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/upfirdn2d.py deleted file mode 100644 index 1d154bc17430366f375f6e7263854f7063285250..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/upfirdn2d.py +++ /dev/null @@ -1,401 +0,0 @@ -# python3.7 - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom operators for efficient resampling of 2D images. - -`upfirdn` means executing upsampling, FIR filtering, downsampling in sequence. - -Please refer to https://github.com/NVlabs/stylegan2-ada-pytorch -""" - -# pylint: disable=line-too-long -# pylint: disable=missing-class-docstring -# pylint: disable=global-variable-not-assigned -# pylint: disable=bare-except - -import os -import warnings -import traceback -import numpy as np -import torch - -from . import custom_ops -from . import misc -from . import conv2d_gradfix - -#---------------------------------------------------------------------------- - -_inited = False -_plugin = None - -def _init(): - global _inited, _plugin - if not _inited: - sources = ['upfirdn2d.cpp', 'upfirdn2d.cu'] - sources = [os.path.join(os.path.dirname(__file__), s) for s in sources] - try: - _plugin = custom_ops.get_plugin('upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math']) - except: - warnings.warn('Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. Details:\n\n' + traceback.format_exc()) - return _plugin is not None - -def _parse_scaling(scaling): - if isinstance(scaling, int): - scaling = [scaling, scaling] - assert isinstance(scaling, (list, tuple)) - assert all(isinstance(x, int) for x in scaling) - sx, sy = scaling - assert sx >= 1 and sy >= 1 - return sx, sy - -def _parse_padding(padding): - if isinstance(padding, int): - padding = [padding, padding] - assert isinstance(padding, (list, tuple)) - assert all(isinstance(x, int) for x in padding) - if len(padding) == 2: - padx, pady = padding - padding = [padx, padx, pady, pady] - padx0, padx1, pady0, pady1 = padding - return padx0, padx1, pady0, pady1 - -def _get_filter_size(f): - if f is None: - return 1, 1 - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - fw = f.shape[-1] - fh = f.shape[0] - with misc.suppress_tracer_warnings(): - fw = int(fw) - fh = int(fh) - misc.assert_shape(f, [fh, fw][:f.ndim]) - assert fw >= 1 and fh >= 1 - return fw, fh - -#---------------------------------------------------------------------------- - -def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None): - r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`. - - Args: - f: Torch tensor, numpy array, or python list of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), - `[]` (impulse), or - `None` (identity). - device: Result device (default: cpu). - normalize: Normalize the filter so that it retains the magnitude - for constant input signal (DC)? (default: True). - flip_filter: Flip the filter? (default: False). 
- gain: Overall scaling factor for signal magnitude (default: 1). - separable: Return a separable filter? (default: select automatically). - - Returns: - Float32 tensor of the shape - `[filter_height, filter_width]` (non-separable) or - `[filter_taps]` (separable). - """ - # Validate. - if f is None: - f = 1 - f = torch.as_tensor(f, dtype=torch.float32) - assert f.ndim in [0, 1, 2] - assert f.numel() > 0 - if f.ndim == 0: - f = f[np.newaxis] - - # Separable? - if separable is None: - separable = (f.ndim == 1 and f.numel() >= 8) - if f.ndim == 1 and not separable: - f = f.ger(f) - assert f.ndim == (1 if separable else 2) - - # Apply normalize, flip, gain, and device. - if normalize: - f /= f.sum() - if flip_filter: - f = f.flip(list(range(f.ndim))) - f = f * (gain ** (f.ndim / 2)) - f = f.to(device=device) - return f - -#---------------------------------------------------------------------------- - -def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Pad, upsample, filter, and downsample a batch of 2D images. - - Performs the following sequence of operations for each channel: - - 1. Upsample the image by inserting N-1 zeros after each pixel (`up`). - - 2. Pad the image with the specified number of zeros on each side (`padding`). - Negative padding corresponds to cropping the image. - - 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it - so that the footprint of all output pixels lies within the input image. - - 4. Downsample the image by keeping every Nth pixel (`down`). - - This sequence of operations bears close resemblance to scipy.signal.upfirdn(). - The fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports gradients of arbitrary order. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f) - return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1): - """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops. - """ - # Validate arguments. 
- assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - assert f.dtype == torch.float32 and not f.requires_grad - batch_size, num_channels, in_height, in_width = x.shape - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Upsample by inserting zeros. - x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1]) - x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1]) - x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx]) - - # Pad or crop. - x = torch.nn.functional.pad(x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)]) - x = x[:, :, max(-pady0, 0) : x.shape[2] - max(-pady1, 0), max(-padx0, 0) : x.shape[3] - max(-padx1, 0)] - - # Setup filter. - f = f * (gain ** (f.ndim / 2)) - f = f.to(x.dtype) - if not flip_filter: - f = f.flip(list(range(f.ndim))) - - # Convolve with the filter. - f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim) - if f.ndim == 4: - x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels) - else: - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(2), groups=num_channels) - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(3), groups=num_channels) - - # Downsample by throwing away pixels. - x = x[:, :, ::downy, ::downx] - return x - -#---------------------------------------------------------------------------- - -_upfirdn2d_cuda_cache = dict() - -def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1): - """Fast CUDA implementation of `upfirdn2d()` using custom ops. - """ - # Parse arguments. - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Lookup from cache. - key = (upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - if key in _upfirdn2d_cuda_cache: - return _upfirdn2d_cuda_cache[key] - - # Forward op. - class Upfirdn2dCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, f): # pylint: disable=arguments-differ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - y = x - if f.ndim == 2: - y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - else: - y = _plugin.upfirdn2d(y, f.unsqueeze(0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, np.sqrt(gain)) - y = _plugin.upfirdn2d(y, f.unsqueeze(1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, np.sqrt(gain)) - ctx.save_for_backward(f) - ctx.x_shape = x.shape - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - f, = ctx.saved_tensors - _, _, ih, iw = ctx.x_shape - _, _, oh, ow = dy.shape - fw, fh = _get_filter_size(f) - p = [ - fw - padx0 - 1, - iw * upx - ow * downx + padx0 - upx + 1, - fh - pady0 - 1, - ih * upy - oh * downy + pady0 - upy + 1, - ] - dx = None - df = None - - if ctx.needs_input_grad[0]: - dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(not flip_filter), gain=gain).apply(dy, f) - - assert not ctx.needs_input_grad[1] - return dx, df - - # Add to cache. 
- _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda - return Upfirdn2dCuda - -#---------------------------------------------------------------------------- - -def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Filter a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape matches the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + fw // 2, - padx1 + (fw - 1) // 2, - pady0 + fh // 2, - pady1 + (fh - 1) // 2, - ] - return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- - -def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Upsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a multiple of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - upx, upy = _parse_scaling(up) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw + upx - 1) // 2, - padx1 + (fw - upx) // 2, - pady0 + (fh + upy - 1) // 2, - pady1 + (fh - upy) // 2, - ] - return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl) - -#---------------------------------------------------------------------------- - -def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Downsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a fraction of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. 
Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the input. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw - downx + 1) // 2, - padx1 + (fw - downx) // 2, - pady0 + (fh - downy + 1) // 2, - pady1 + (fh - downy) // 2, - ] - return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- - -# pylint: enable=line-too-long -# pylint: enable=missing-class-docstring -# pylint: enable=global-variable-not-assigned -# pylint: enable=bare-except diff --git a/spaces/Iceclear/StableSR/StableSR/taming/data/helper_types.py b/spaces/Iceclear/StableSR/StableSR/taming/data/helper_types.py deleted file mode 100644 index fb51e301da08602cfead5961c4f7e1d89f6aba79..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/taming/data/helper_types.py +++ /dev/null @@ -1,49 +0,0 @@ -from typing import Dict, Tuple, Optional, NamedTuple, Union -from PIL.Image import Image as pil_image -from torch import Tensor - -try: - from typing import Literal -except ImportError: - from typing_extensions import Literal - -Image = Union[Tensor, pil_image] -BoundingBox = Tuple[float, float, float, float] # x0, y0, w, h -CropMethodType = Literal['none', 'random', 'center', 'random-2d'] -SplitType = Literal['train', 'validation', 'test'] - - -class ImageDescription(NamedTuple): - id: int - file_name: str - original_size: Tuple[int, int] # w, h - url: Optional[str] = None - license: Optional[int] = None - coco_url: Optional[str] = None - date_captured: Optional[str] = None - flickr_url: Optional[str] = None - flickr_id: Optional[str] = None - coco_id: Optional[str] = None - - -class Category(NamedTuple): - id: str - super_category: Optional[str] - name: str - - -class Annotation(NamedTuple): - area: float - image_id: str - bbox: BoundingBox - category_no: int - category_id: str - id: Optional[int] = None - source: Optional[str] = None - confidence: Optional[float] = None - is_group_of: Optional[bool] = None - is_truncated: Optional[bool] = None - is_occluded: Optional[bool] = None - is_depiction: Optional[bool] = None - is_inside: Optional[bool] = None - segmentation: Optional[Dict] = None diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py deleted file mode 100644 index 72981aebe18478b320a7d397924b925c6dd6ef5e..0000000000000000000000000000000000000000 --- 
a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py +++ /dev/null @@ -1,535 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import torch - -import PIL -from diffusers.utils import is_accelerate_available -from transformers import CLIPTextModel, CLIPTokenizer - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import DDIMScheduler, DDPMScheduler, LMSDiscreteScheduler, PNDMScheduler -from ...utils import logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess(image): - # resize to multiple of 64 - width, height = image.size - width = width - width % 64 - height = height - height % 64 - image = image.resize((width, height)) - - image = np.array(image.convert("RGB")) - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - return image - - -class StableDiffusionUpscalePipeline(DiffusionPipeline): - r""" - Pipeline for text-guided image super-resolution using Stable Diffusion 2. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - low_res_scheduler ([`SchedulerMixin`]): - A scheduler used to add initial noise to the low res conditioning image. It must be an instance of - [`DDPMScheduler`]. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
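        Examples:
            A minimal usage sketch (illustrative only; the checkpoint id below is an
            assumption and is not defined in this file):

                import torch
                from PIL import Image
                from diffusers import StableDiffusionUpscalePipeline

                pipe = StableDiffusionUpscalePipeline.from_pretrained(
                    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
                ).to("cuda")
                low_res = Image.open("low_res.png").convert("RGB")
                # __call__ takes a prompt plus a PIL image and returns an ImagePipelineOutput
                upscaled = pipe(prompt="a white cat", image=low_res).images[0]
                upscaled.save("upscaled_cat.png")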
- """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - low_res_scheduler: DDPMScheduler, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - max_noise_level: int = 350, - ): - super().__init__() - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - low_res_scheduler=low_res_scheduler, - scheduler=scheduler, - ) - self.register_to_config(max_noise_level=max_noise_level) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - if isinstance(self.unet.config.attention_head_dim, int): - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - else: - # if `attention_head_dim` is a list, take the smallest head size - slice_size = min(self.unet.config.attention_head_dim) - - self.unet.set_attention_slice(slice_size) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents with 0.18215->0.08333 - def decode_latents(self, latents): - latents = 1 / 0.08333 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def check_inputs(self, prompt, image, noise_level, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if ( - not isinstance(image, torch.Tensor) - and not isinstance(image, PIL.Image.Image) - and not isinstance(image, list) - ): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or `list` but is {type(image)}" - ) - - # verify batch size of prompt and image are same if image is a list or tensor - if isinstance(image, list) or isinstance(image, torch.Tensor): - if isinstance(prompt, str): - batch_size = 1 - else: - batch_size = len(prompt) - if isinstance(image, list): - image_batch_size = len(image) - else: - image_batch_size = image.shape[0] - if batch_size != image_batch_size: - raise ValueError( - f"`prompt` has batch size {batch_size} and `image` has batch size {image_batch_size}." 
- " Please make sure that passed `prompt` matches the batch size of `image`." - ) - - # check noise level - if noise_level > self.config.max_noise_level: - raise ValueError(f"`noise_level` has to be <= {self.config.max_noise_level} but is {noise_level}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height, width) - if latents is None: - if device.type == "mps": - # randn does not work reproducibly on mps - latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device) - else: - latents = torch.randn(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image, List[PIL.Image.Image]], - num_inference_steps: int = 75, - guidance_scale: float = 9.0, - noise_level: int = 20, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`PIL.Image.Image` or List[`PIL.Image.Image`] or `torch.FloatTensor`): - `Image`, or tensor representing an image batch which will be upscaled. * - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. 
- generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - - # 1. Check inputs - self.check_inputs(prompt, image, noise_level, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Preprocess image - image = [image] if isinstance(image, PIL.Image.Image) else image - if isinstance(image, list): - image = [preprocess(img) for img in image] - image = torch.cat(image, dim=0) - image = image.to(dtype=text_embeddings.dtype, device=device) - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Add noise to image - noise_level = torch.tensor([noise_level], dtype=torch.long, device=device) - if device.type == "mps": - # randn does not work reproducibly on mps - noise = torch.randn(image.shape, generator=generator, device="cpu", dtype=text_embeddings.dtype).to(device) - else: - noise = torch.randn(image.shape, generator=generator, device=device, dtype=text_embeddings.dtype) - image = self.low_res_scheduler.add_noise(image, noise, noise_level) - image = torch.cat([image] * 2) if do_classifier_free_guidance else image - noise_level = torch.cat([noise_level] * 2) if do_classifier_free_guidance else noise_level - - # 6. 
Prepare latent variables - height, width = image.shape[2:] - num_channels_latents = self.vae.config.latent_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - text_embeddings.dtype, - device, - generator, - latents, - ) - - # 7. Check that sizes of image and latents match - num_channels_image = image.shape[1] - if num_channels_latents + num_channels_image != self.unet.config.in_channels: - raise ValueError( - f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects" - f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +" - f" `num_channels_image`: {num_channels_image} " - f" = {num_channels_latents+num_channels_image}. Please verify the config of" - " `pipeline.unet` or your `image` input." - ) - - # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 9. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - # concat latents, mask, masked_image_latents in the channel dimension - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - latent_model_input = torch.cat([latent_model_input, image], dim=1) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, t, encoder_hidden_states=text_embeddings, class_labels=noise_level - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 10. Post-processing - # make sure the VAE is in float32 mode, as it overflows in float16 - self.vae.to(dtype=torch.float32) - image = self.decode_latents(latents.float()) - - # 11. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Jarvis2301/Aku/monotonic_align/__init__.py b/spaces/Jarvis2301/Aku/monotonic_align/__init__.py deleted file mode 100644 index e97eecc595dd3bd97d0104ec62799e2e5efea57c..0000000000000000000000000000000000000000 --- a/spaces/Jarvis2301/Aku/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/KarloDarlo/3D_Photo_Inpainting/utils.py b/spaces/KarloDarlo/3D_Photo_Inpainting/utils.py deleted file mode 100644 index 808e48b1979d16f32c050f43f1f6c0ca36d8d18b..0000000000000000000000000000000000000000 --- a/spaces/KarloDarlo/3D_Photo_Inpainting/utils.py +++ /dev/null @@ -1,1416 +0,0 @@ -import os -import glob -import cv2 -import scipy.misc as misc -from skimage.transform import resize -import numpy as np -from functools import reduce -from operator import mul -import torch -from torch import nn -import matplotlib.pyplot as plt -import re -try: - import cynetworkx as netx -except ImportError: - import networkx as netx -from scipy.ndimage import gaussian_filter -from skimage.feature import canny -import collections -import shutil -import imageio -import copy -from matplotlib import pyplot as plt -from mpl_toolkits.mplot3d import Axes3D -import time -from scipy.interpolate import interp1d -from collections import namedtuple - -def path_planning(num_frames, x, y, z, path_type=''): - if path_type == 'straight-line': - corner_points = np.array([[0, 0, 0], [(0 + x) * 0.5, (0 + y) * 0.5, (0 + z) * 0.5], [x, y, z]]) - corner_t = np.linspace(0, 1, len(corner_points)) - t = np.linspace(0, 1, num_frames) - cs = interp1d(corner_t, corner_points, axis=0, kind='quadratic') - spline = cs(t) - xs, ys, zs = [xx.squeeze() for xx in np.split(spline, 3, 1)] - elif path_type == 'double-straight-line': - corner_points = np.array([[-x, -y, -z], [0, 0, 0], [x, y, z]]) - corner_t = np.linspace(0, 1, len(corner_points)) - t = np.linspace(0, 1, num_frames) - cs = interp1d(corner_t, corner_points, axis=0, kind='quadratic') - spline = cs(t) - xs, ys, zs = [xx.squeeze() for xx in np.split(spline, 3, 1)] - elif path_type == 'circle': - xs, ys, zs = [], [], [] - for frame_id, bs_shift_val in enumerate(np.arange(-2.0, 2.0, (4./num_frames))): - xs += [np.cos(bs_shift_val * np.pi) * 1 * x] - ys += [np.sin(bs_shift_val * np.pi) * 1 * y] - zs += [np.cos(bs_shift_val * np.pi/2.) * 1 * z] - xs, ys, zs = np.array(xs), np.array(ys), np.array(zs) - - return xs, ys, zs - -def open_small_mask(mask, context, open_iteration, kernel): - np_mask = mask.cpu().data.numpy().squeeze().astype(np.uint8) - raw_mask = np_mask.copy() - np_context = context.cpu().data.numpy().squeeze().astype(np.uint8) - np_input = np_mask + np_context - for _ in range(open_iteration): - np_input = cv2.erode(cv2.dilate(np_input, np.ones((kernel, kernel)), iterations=1), np.ones((kernel,kernel)), iterations=1) - np_mask[(np_input - np_context) > 0] = 1 - out_mask = torch.FloatTensor(np_mask).to(mask)[None, None, ...] 
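    # The dilate-then-erode pass above is a morphological closing of mask+context:
    # small pinholes get filled, and every filled pixel that is not already context
    # is folded into the hole mask. The result is returned as a [1, 1, H, W] tensor
    # on the same device and dtype as the input `mask`.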
- - return out_mask - -def filter_irrelevant_edge_new(self_edge, comp_edge, other_edges, other_edges_with_id, current_edge_id, context, depth, mesh, context_cc, spdb=False): - other_edges = other_edges.squeeze().astype(np.uint8) - other_edges_with_id = other_edges_with_id.squeeze() - self_edge = self_edge.squeeze() - dilate_bevel_self_edge = cv2.dilate((self_edge + comp_edge).astype(np.uint8), np.array([[1,1,1],[1,1,1],[1,1,1]]), iterations=1) - dilate_cross_self_edge = cv2.dilate((self_edge + comp_edge).astype(np.uint8), np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - edge_ids = np.unique(other_edges_with_id * context + (-1) * (1 - context)).astype(np.int) - end_depth_maps = np.zeros_like(self_edge) - self_edge_ids = np.sort(np.unique(other_edges_with_id[self_edge > 0]).astype(np.int)) - self_edge_ids = self_edge_ids[1:] if self_edge_ids.shape[0] > 0 and self_edge_ids[0] == -1 else self_edge_ids - self_comp_ids = np.sort(np.unique(other_edges_with_id[comp_edge > 0]).astype(np.int)) - self_comp_ids = self_comp_ids[1:] if self_comp_ids.shape[0] > 0 and self_comp_ids[0] == -1 else self_comp_ids - edge_ids = edge_ids[1:] if edge_ids[0] == -1 else edge_ids - other_edges_info = [] - extend_other_edges = np.zeros_like(other_edges) - if spdb is True: - f, ((ax1, ax2, ax3)) = plt.subplots(1, 3, sharex=True, sharey=True); ax1.imshow(self_edge); ax2.imshow(context); ax3.imshow(other_edges_with_id * context + (-1) * (1 - context)); plt.show() - import pdb; pdb.set_trace() - filter_self_edge = np.zeros_like(self_edge) - for self_edge_id in self_edge_ids: - filter_self_edge[other_edges_with_id == self_edge_id] = 1 - dilate_self_comp_edge = cv2.dilate(comp_edge, kernel=np.ones((3, 3)), iterations=2) - valid_self_comp_edge = np.zeros_like(comp_edge) - for self_comp_id in self_comp_ids: - valid_self_comp_edge[self_comp_id == other_edges_with_id] = 1 - self_comp_edge = dilate_self_comp_edge * valid_self_comp_edge - filter_self_edge = (filter_self_edge + self_comp_edge).clip(0, 1) - for edge_id in edge_ids: - other_edge_locs = (other_edges_with_id == edge_id).astype(np.uint8) - condition = (other_edge_locs * other_edges * context.astype(np.uint8)) - end_cross_point = dilate_cross_self_edge * condition * (1 - filter_self_edge) - end_bevel_point = dilate_bevel_self_edge * condition * (1 - filter_self_edge) - if end_bevel_point.max() != 0: - end_depth_maps[end_bevel_point != 0] = depth[end_bevel_point != 0] - if end_cross_point.max() == 0: - nxs, nys = np.where(end_bevel_point != 0) - for nx, ny in zip(nxs, nys): - bevel_node = [xx for xx in context_cc if xx[0] == nx and xx[1] == ny][0] - for ne in mesh.neighbors(bevel_node): - if other_edges_with_id[ne[0], ne[1]] > -1 and dilate_cross_self_edge[ne[0], ne[1]] > 0: - extend_other_edges[ne[0], ne[1]] = 1 - break - else: - other_edges[other_edges_with_id == edge_id] = 0 - other_edges = (other_edges + extend_other_edges).clip(0, 1) * context - - return other_edges, end_depth_maps, other_edges_info - -def clean_far_edge_new(input_edge, end_depth_maps, mask, context, global_mesh, info_on_pix, self_edge, inpaint_id, config): - mesh = netx.Graph() - hxs, hys = np.where(input_edge * mask > 0) - valid_near_edge = (input_edge != 0).astype(np.uint8) * context - valid_map = mask + context - invalid_edge_ids = [] - for hx, hy in zip(hxs, hys): - node = (hx ,hy) - mesh.add_node((hx, hy)) - eight_nes = [ne for ne in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), \ - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)]\ - if 0 
<= ne[0] < input_edge.shape[0] and 0 <= ne[1] < input_edge.shape[1] and 0 < input_edge[ne[0], ne[1]]] # or end_depth_maps[ne[0], ne[1]] != 0] - for ne in eight_nes: - mesh.add_edge(node, ne, length=np.hypot(ne[0] - hx, ne[1] - hy)) - if end_depth_maps[ne[0], ne[1]] != 0: - mesh.nodes[ne[0], ne[1]]['cnt'] = True - if end_depth_maps[ne[0], ne[1]] == 0: - import pdb; pdb.set_trace() - mesh.nodes[ne[0], ne[1]]['depth'] = end_depth_maps[ne[0], ne[1]] - elif mask[ne[0], ne[1]] != 1: - four_nes = [nne for nne in [(ne[0] + 1, ne[1]), (ne[0] - 1, ne[1]), (ne[0], ne[1] + 1), (ne[0], ne[1] - 1)]\ - if nne[0] < end_depth_maps.shape[0] and nne[0] >= 0 and nne[1] < end_depth_maps.shape[1] and nne[1] >= 0] - for nne in four_nes: - if end_depth_maps[nne[0], nne[1]] != 0: - mesh.add_edge(nne, ne, length=np.hypot(nne[0] - ne[0], nne[1] - ne[1])) - mesh.nodes[nne[0], nne[1]]['cnt'] = True - mesh.nodes[nne[0], nne[1]]['depth'] = end_depth_maps[nne[0], nne[1]] - ccs = [*netx.connected_components(mesh)] - end_pts = [] - for cc in ccs: - end_pts.append(set()) - for node in cc: - if mesh.nodes[node].get('cnt') is not None: - end_pts[-1].add((node[0], node[1], mesh.nodes[node]['depth'])) - predef_npaths = [None for _ in range(len(ccs))] - fpath_map = np.zeros_like(input_edge) - 1 - npath_map = np.zeros_like(input_edge) - 1 - npaths, fpaths = dict(), dict() - break_flag = False - end_idx = 0 - while end_idx < len(end_pts): - end_pt, cc = [*zip(end_pts, ccs)][end_idx] - end_idx += 1 - sorted_end_pt = [] - fpath = [] - iter_fpath = [] - if len(end_pt) > 2 or len(end_pt) == 0: - if len(end_pt) > 2: - continue - continue - if len(end_pt) == 2: - ravel_end = [*end_pt] - tmp_sub_mesh = mesh.subgraph(list(cc)).copy() - tmp_npath = [*netx.shortest_path(tmp_sub_mesh, (ravel_end[0][0], ravel_end[0][1]), (ravel_end[1][0], ravel_end[1][1]), weight='length')] - fpath_map1, npath_map1, disp_diff1 = plan_path(mesh, info_on_pix, cc, ravel_end[0:1], global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=tmp_npath) - fpath_map2, npath_map2, disp_diff2 = plan_path(mesh, info_on_pix, cc, ravel_end[1:2], global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=tmp_npath) - tmp_disp_diff = [disp_diff1, disp_diff2] - self_end = [] - edge_len = [] - ds_edge = cv2.dilate(self_edge.astype(np.uint8), np.ones((3, 3)), iterations=1) - if ds_edge[ravel_end[0][0], ravel_end[0][1]] > 0: - self_end.append(1) - else: - self_end.append(0) - if ds_edge[ravel_end[1][0], ravel_end[1][1]] > 0: - self_end.append(1) - else: - self_end.append(0) - edge_len = [np.count_nonzero(npath_map1), np.count_nonzero(npath_map2)] - sorted_end_pts = [xx[0] for xx in sorted(zip(ravel_end, self_end, edge_len, [disp_diff1, disp_diff2]), key=lambda x: (x[1], x[2]), reverse=True)] - re_npath_map1, re_fpath_map1 = (npath_map1 != -1).astype(np.uint8), (fpath_map1 != -1).astype(np.uint8) - re_npath_map2, re_fpath_map2 = (npath_map2 != -1).astype(np.uint8), (fpath_map2 != -1).astype(np.uint8) - if np.count_nonzero(re_npath_map1 * re_npath_map2 * mask) / \ - (np.count_nonzero((re_npath_map1 + re_npath_map2) * mask) + 1e-6) > 0.5\ - and np.count_nonzero(re_fpath_map1 * re_fpath_map2 * mask) / \ - (np.count_nonzero((re_fpath_map1 + re_fpath_map2) * mask) + 1e-6) > 0.5\ - and tmp_disp_diff[0] != -1 and tmp_disp_diff[1] != -1: - my_fpath_map, my_npath_map, npath, fpath = \ - plan_path_e2e(mesh, cc, sorted_end_pts, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None) - 
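                    # Fold the end-to-end near/far paths into the shared maps; pixels
                    # still at -1 were not touched by this component. The stored paths
                    # are keyed by the edge id of the first sorted end point.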
npath_map[my_npath_map != -1] = my_npath_map[my_npath_map != -1] - fpath_map[my_fpath_map != -1] = my_fpath_map[my_fpath_map != -1] - if len(fpath) > 0: - edge_id = global_mesh.nodes[[*sorted_end_pts][0]]['edge_id'] - fpaths[edge_id] = fpath - npaths[edge_id] = npath - invalid_edge_ids.append(edge_id) - else: - if tmp_disp_diff[0] != -1: - ratio_a = tmp_disp_diff[0] / (np.sum(tmp_disp_diff) + 1e-8) - else: - ratio_a = 0 - if tmp_disp_diff[1] != -1: - ratio_b = tmp_disp_diff[1] / (np.sum(tmp_disp_diff) + 1e-8) - else: - ratio_b = 0 - npath_len = len(tmp_npath) - if npath_len > config['depth_edge_dilate_2'] * 2: - npath_len = npath_len - (config['depth_edge_dilate_2'] * 1) - tmp_npath_a = tmp_npath[:int(np.floor(npath_len * ratio_a))] - tmp_npath_b = tmp_npath[::-1][:int(np.floor(npath_len * ratio_b))] - tmp_merge = [] - if len(tmp_npath_a) > 0 and sorted_end_pts[0][0] == tmp_npath_a[0][0] and sorted_end_pts[0][1] == tmp_npath_a[0][1]: - if len(tmp_npath_a) > 0 and mask[tmp_npath_a[-1][0], tmp_npath_a[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[:1], tmp_npath_a]) - if len(tmp_npath_b) > 0 and mask[tmp_npath_b[-1][0], tmp_npath_b[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[1:2], tmp_npath_b]) - elif len(tmp_npath_b) > 0 and sorted_end_pts[0][0] == tmp_npath_b[0][0] and sorted_end_pts[0][1] == tmp_npath_b[0][1]: - if len(tmp_npath_b) > 0 and mask[tmp_npath_b[-1][0], tmp_npath_b[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[:1], tmp_npath_b]) - if len(tmp_npath_a) > 0 and mask[tmp_npath_a[-1][0], tmp_npath_a[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[1:2], tmp_npath_a]) - for tmp_idx in range(len(tmp_merge)): - if len(tmp_merge[tmp_idx][1]) == 0: - continue - end_pts.append(tmp_merge[tmp_idx][0]) - ccs.append(set(tmp_merge[tmp_idx][1])) - if len(end_pt) == 1: - sub_mesh = mesh.subgraph(list(cc)).copy() - pnodes = netx.periphery(sub_mesh) - if len(end_pt) == 1: - ends = [*end_pt] - elif len(sorted_end_pt) == 1: - ends = [*sorted_end_pt] - else: - import pdb; pdb.set_trace() - try: - edge_id = global_mesh.nodes[ends[0]]['edge_id'] - except: - import pdb; pdb.set_trace() - pnodes = sorted(pnodes, - key=lambda x: np.hypot((x[0] - ends[0][0]), (x[1] - ends[0][1])), - reverse=True)[0] - npath = [*netx.shortest_path(sub_mesh, (ends[0][0], ends[0][1]), pnodes, weight='length')] - for np_node in npath: - npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends[0]].get('far') is None: - print("None far") - else: - fnodes = global_mesh.nodes[ends[0]].get('far') - dmask = mask + 0 - did = 0 - while True: - did += 1 - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - if did > 3: - break - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - if len(ffnode) == 0: - continue - fpath.append((fnode[0], fnode[1])) - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == 
next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < fpath_map.shape[0] and xx[1] >= 0 and xx[1] < fpath_map.shape[1]] - if np.all([(fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break - if npath_map[new_loc[0], new_loc[1]] != -1: - if npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - else: - continue - if valid_map[new_loc[0], new_loc[1]] == 0: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if step != len(npath) - 1: - for xx in npath[step:]: - if npath_map[xx[0], xx[1]] == edge_id: - npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - fpath_map[fp_node[0], fp_node[1]] = edge_id - fpaths[edge_id] = fpath - npaths[edge_id] = npath - fpath_map[valid_near_edge != 0] = -1 - if len(fpath) > 0: - iter_fpath = copy.deepcopy(fpaths[edge_id]) - for node in iter_fpath: - if valid_near_edge[node[0], node[1]] != 0: - fpaths[edge_id].remove(node) - - return fpath_map, npath_map, False, npaths, fpaths, invalid_edge_ids - -def plan_path_e2e(mesh, cc, end_pts, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None): - my_npath_map = np.zeros_like(input_edge) - 1 - my_fpath_map = np.zeros_like(input_edge) - 1 - sub_mesh = mesh.subgraph(list(cc)).copy() - ends_1, ends_2 = end_pts[0], end_pts[1] - edge_id = global_mesh.nodes[ends_1]['edge_id'] - npath = [*netx.shortest_path(sub_mesh, (ends_1[0], ends_1[1]), (ends_2[0], ends_2[1]), weight='length')] - for np_node in npath: - my_npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends_1].get('far') is None: - print("None far") - else: - fnodes = global_mesh.nodes[ends_1].get('far') - dmask = mask + 0 - while True: - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - e_fnodes = global_mesh.nodes[ends_2].get('far') - dmask = mask + 0 - while True: - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - e_ffnode = [e_fnode for e_fnode in e_fnodes if (dmask[e_fnode[0], e_fnode[1]] > 0 and mask[e_fnode[0], e_fnode[1]] == 0 and\ - global_mesh.nodes[e_fnode].get('inpaint_id') != inpaint_id + 1)] - if len(e_ffnode) > 0: - e_fnode = e_ffnode[0] - break - fpath.append((fnode[0], fnode[1])) - 
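        # Both end points need a far-side anchor outside the hole; if either search
        # above came up empty, give up on this component and return empty paths.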
if len(e_ffnode) == 0 or len(ffnode) == 0: - return my_npath_map, my_fpath_map, [], [] - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < my_fpath_map.shape[0] and xx[1] >= 0 and xx[1] < my_fpath_map.shape[1]] - if fpath_map is not None and np.sum([fpath_map[nlne[0], nlne[1]] for nlne in new_loc_nes]) != 0: - break_flag = True - break - if my_npath_map[new_loc[0], new_loc[1]] != -1: - continue - if npath_map is not None and npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if (e_fnode[0], e_fnode[1]) not in fpath: - fpath.append((e_fnode[0], e_fnode[1])) - if step != len(npath) - 1: - for xx in npath[step:]: - if my_npath_map[xx[0], xx[1]] == edge_id: - my_npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - my_fpath_map[fp_node[0], fp_node[1]] = edge_id - - return my_fpath_map, my_npath_map, npath, fpath - -def plan_path(mesh, info_on_pix, cc, end_pt, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=None): - my_npath_map = np.zeros_like(input_edge) - 1 - my_fpath_map = np.zeros_like(input_edge) - 1 - sub_mesh = mesh.subgraph(list(cc)).copy() - pnodes = netx.periphery(sub_mesh) - ends = [*end_pt] - edge_id = global_mesh.nodes[ends[0]]['edge_id'] - pnodes = sorted(pnodes, - key=lambda x: np.hypot((x[0] - ends[0][0]), (x[1] - ends[0][1])), - reverse=True)[0] - if npath is None: - npath = [*netx.shortest_path(sub_mesh, (ends[0][0], ends[0][1]), pnodes, weight='length')] - else: - if (ends[0][0], ends[0][1]) == npath[0]: - npath = npath - elif (ends[0][0], ends[0][1]) == npath[-1]: - npath = npath[::-1] - else: - import pdb; pdb.set_trace() - for np_node in npath: - my_npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends[0]].get('far') is None: - print("None far") - 
else: - fnodes = global_mesh.nodes[ends[0]].get('far') - dmask = mask + 0 - did = 0 - while True: - did += 1 - if did > 3: - return my_fpath_map, my_npath_map, -1 - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - - fpath.append((fnode[0], fnode[1])) - disp_diff = 0. - for n_loc in npath: - if mask[n_loc[0], n_loc[1]] != 0: - disp_diff = abs(abs(1. / info_on_pix[(n_loc[0], n_loc[1])][0]['depth']) - abs(1. / ends[0][2])) - break - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < my_fpath_map.shape[0] and xx[1] >= 0 and xx[1] < my_fpath_map.shape[1]] - if fpath_map is not None and np.all([(fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break_flag = True - break - if np.all([(my_fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break_flag = True - break - if my_npath_map[new_loc[0], new_loc[1]] != -1: - continue - if npath_map is not None and npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - if valid_map[new_loc[0], new_loc[1]] == 0: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if step != len(npath) - 1: - for xx in npath[step:]: - if my_npath_map[xx[0], xx[1]] == edge_id: - my_npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - my_fpath_map[fp_node[0], fp_node[1]] = edge_id - - return my_fpath_map, my_npath_map, disp_diff - -def refresh_node(old_node, old_feat, new_node, new_feat, mesh, stime=False): - mesh.add_node(new_node) - mesh.nodes[new_node].update(new_feat) - mesh.nodes[new_node].update(old_feat) - for ne in mesh.neighbors(old_node): - mesh.add_edge(new_node, ne) - if 
mesh.nodes[new_node].get('far') is not None: - tmp_far_nodes = mesh.nodes[new_node]['far'] - for far_node in tmp_far_nodes: - if mesh.has_node(far_node) is False: - mesh.nodes[new_node]['far'].remove(far_node) - continue - if mesh.nodes[far_node].get('near') is not None: - for idx in range(len(mesh.nodes[far_node].get('near'))): - if mesh.nodes[far_node]['near'][idx][0] == new_node[0] and mesh.nodes[far_node]['near'][idx][1] == new_node[1]: - if len(mesh.nodes[far_node]['near'][idx]) == len(old_node): - mesh.nodes[far_node]['near'][idx] = new_node - if mesh.nodes[new_node].get('near') is not None: - tmp_near_nodes = mesh.nodes[new_node]['near'] - for near_node in tmp_near_nodes: - if mesh.has_node(near_node) is False: - mesh.nodes[new_node]['near'].remove(near_node) - continue - if mesh.nodes[near_node].get('far') is not None: - for idx in range(len(mesh.nodes[near_node].get('far'))): - if mesh.nodes[near_node]['far'][idx][0] == new_node[0] and mesh.nodes[near_node]['far'][idx][1] == new_node[1]: - if len(mesh.nodes[near_node]['far'][idx]) == len(old_node): - mesh.nodes[near_node]['far'][idx] = new_node - if new_node != old_node: - mesh.remove_node(old_node) - if stime is False: - return mesh - else: - return mesh, None, None - - -def create_placeholder(context, mask, depth, fpath_map, npath_map, mesh, inpaint_id, edge_ccs, extend_edge_cc, all_edge_maps, self_edge_id): - add_node_time = 0 - add_edge_time = 0 - add_far_near_time = 0 - valid_area = context + mask - H, W = mesh.graph['H'], mesh.graph['W'] - edge_cc = edge_ccs[self_edge_id] - num_com = len(edge_cc) + len(extend_edge_cc) - hxs, hys = np.where(mask > 0) - for hx, hy in zip(hxs, hys): - mesh.add_node((hx, hy), inpaint_id=inpaint_id + 1, num_context=num_com) - for hx, hy in zip(hxs, hys): - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - 0 <= x < mesh.graph['H'] and 0 <= y < mesh.graph['W'] and valid_area[x, y] != 0] - for ne in four_nes: - if mask[ne[0], ne[1]] != 0: - if not mesh.has_edge((hx, hy), ne): - mesh.add_edge((hx, hy), ne) - elif depth[ne[0], ne[1]] != 0: - if mesh.has_node((ne[0], ne[1], depth[ne[0], ne[1]])) and\ - not mesh.has_edge((hx, hy), (ne[0], ne[1], depth[ne[0], ne[1]])): - mesh.add_edge((hx, hy), (ne[0], ne[1], depth[ne[0], ne[1]])) - else: - print("Undefined context node.") - import pdb; pdb.set_trace() - near_ids = np.unique(npath_map) - if near_ids[0] == -1: near_ids = near_ids[1:] - for near_id in near_ids: - hxs, hys = np.where((fpath_map == near_id) & (mask > 0)) - if hxs.shape[0] > 0: - mesh.graph['max_edge_id'] = mesh.graph['max_edge_id'] + 1 - else: - break - for hx, hy in zip(hxs, hys): - mesh.nodes[(hx, hy)]['edge_id'] = int(round(mesh.graph['max_edge_id'])) - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - x < mesh.graph['H'] and x >= 0 and y < mesh.graph['W'] and y >= 0 and npath_map[x, y] == near_id] - for xx in four_nes: - xx_n = copy.deepcopy(xx) - if not mesh.has_node(xx_n): - if mesh.has_node((xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]])): - xx_n = (xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]]) - if mesh.has_edge((hx, hy), xx_n): - # pass - mesh.remove_edge((hx, hy), xx_n) - if mesh.nodes[(hx, hy)].get('near') is None: - mesh.nodes[(hx, hy)]['near'] = [] - mesh.nodes[(hx, hy)]['near'].append(xx_n) - connect_point_exception = set() - hxs, hys = np.where((npath_map == near_id) & (all_edge_maps > -1)) - for hx, hy in zip(hxs, hys): - unknown_id = int(round(all_edge_maps[hx, hy])) - if unknown_id != near_id 
and unknown_id != self_edge_id: - unknown_node = set([xx for xx in edge_ccs[unknown_id] if xx[0] == hx and xx[1] == hy]) - connect_point_exception |= unknown_node - hxs, hys = np.where((npath_map == near_id) & (mask > 0)) - if hxs.shape[0] > 0: - mesh.graph['max_edge_id'] = mesh.graph['max_edge_id'] + 1 - else: - break - for hx, hy in zip(hxs, hys): - mesh.nodes[(hx, hy)]['edge_id'] = int(round(mesh.graph['max_edge_id'])) - mesh.nodes[(hx, hy)]['connect_point_id'] = int(round(near_id)) - mesh.nodes[(hx, hy)]['connect_point_exception'] = connect_point_exception - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - x < mesh.graph['H'] and x >= 0 and y < mesh.graph['W'] and y >= 0 and fpath_map[x, y] == near_id] - for xx in four_nes: - xx_n = copy.deepcopy(xx) - if not mesh.has_node(xx_n): - if mesh.has_node((xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]])): - xx_n = (xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]]) - if mesh.has_edge((hx, hy), xx_n): - mesh.remove_edge((hx, hy), xx_n) - if mesh.nodes[(hx, hy)].get('far') is None: - mesh.nodes[(hx, hy)]['far'] = [] - mesh.nodes[(hx, hy)]['far'].append(xx_n) - - return mesh, add_node_time, add_edge_time, add_far_near_time - -def clean_far_edge(mask_edge, mask_edge_with_id, context_edge, mask, info_on_pix, global_mesh, anchor): - if isinstance(mask_edge, torch.Tensor): - if mask_edge.is_cuda: - mask_edge = mask_edge.cpu() - mask_edge = mask_edge.data - mask_edge = mask_edge.numpy() - if isinstance(context_edge, torch.Tensor): - if context_edge.is_cuda: - context_edge = context_edge.cpu() - context_edge = context_edge.data - context_edge = context_edge.numpy() - if isinstance(mask, torch.Tensor): - if mask.is_cuda: - mask = mask.cpu() - mask = mask.data - mask = mask.numpy() - mask = mask.squeeze() - mask_edge = mask_edge.squeeze() - context_edge = context_edge.squeeze() - valid_near_edge = np.zeros_like(mask_edge) - far_edge = np.zeros_like(mask_edge) - far_edge_with_id = np.ones_like(mask_edge) * -1 - near_edge_with_id = np.ones_like(mask_edge) * -1 - uncleaned_far_edge = np.zeros_like(mask_edge) - # Detect if there is any valid pixel mask_edge, if not ==> return default value - if mask_edge.sum() == 0: - return far_edge, uncleaned_far_edge, far_edge_with_id, near_edge_with_id - mask_edge_ids = dict(collections.Counter(mask_edge_with_id.flatten())).keys() - for edge_id in mask_edge_ids: - if edge_id < 0: - continue - specific_edge_map = (mask_edge_with_id == edge_id).astype(np.uint8) - _, sub_specific_edge_maps = cv2.connectedComponents(specific_edge_map.astype(np.uint8), connectivity=8) - for sub_edge_id in range(1, sub_specific_edge_maps.max() + 1): - specific_edge_map = (sub_specific_edge_maps == sub_edge_id).astype(np.uint8) - edge_pxs, edge_pys = np.where(specific_edge_map > 0) - edge_mesh = netx.Graph() - for edge_px, edge_py in zip(edge_pxs, edge_pys): - edge_mesh.add_node((edge_px, edge_py)) - for ex in [edge_px-1, edge_px, edge_px+1]: - for ey in [edge_py-1, edge_py, edge_py+1]: - if edge_px == ex and edge_py == ey: - continue - if ex < 0 or ex >= specific_edge_map.shape[0] or ey < 0 or ey >= specific_edge_map.shape[1]: - continue - if specific_edge_map[ex, ey] == 1: - if edge_mesh.has_node((ex, ey)): - edge_mesh.add_edge((ex, ey), (edge_px, edge_py)) - periphery_nodes = netx.periphery(edge_mesh) - path_diameter = netx.diameter(edge_mesh) - start_near_node = None - for node_s in periphery_nodes: - for node_e in periphery_nodes: - if node_s != node_e: - if netx.shortest_path_length(edge_mesh, node_s, 
node_e) == path_diameter: - if np.any(context_edge[node_s[0]-1:node_s[0]+2, node_s[1]-1:node_s[1]+2].flatten()): - start_near_node = (node_s[0], node_s[1]) - end_near_node = (node_e[0], node_e[1]) - break - if np.any(context_edge[node_e[0]-1:node_e[0]+2, node_e[1]-1:node_e[1]+2].flatten()): - start_near_node = (node_e[0], node_e[1]) - end_near_node = (node_s[0], node_s[1]) - break - if start_near_node is not None: - break - if start_near_node is None: - continue - new_specific_edge_map = np.zeros_like(mask) - for path_node in netx.shortest_path(edge_mesh, start_near_node, end_near_node): - new_specific_edge_map[path_node[0], path_node[1]] = 1 - context_near_pxs, context_near_pys = np.where(context_edge[start_near_node[0]-1:start_near_node[0]+2, start_near_node[1]-1:start_near_node[1]+2] > 0) - distance = np.abs((context_near_pxs - 1)) + np.abs((context_near_pys - 1)) - if (np.where(distance == distance.min())[0].shape[0]) > 1: - closest_pxs = context_near_pxs[np.where(distance == distance.min())[0]] - closest_pys = context_near_pys[np.where(distance == distance.min())[0]] - closest_depths = [] - for closest_px, closest_py in zip(closest_pxs, closest_pys): - if info_on_pix.get((closest_px + start_near_node[0] - 1 + anchor[0], closest_py + start_near_node[1] - 1 + anchor[2])) is not None: - for info in info_on_pix.get((closest_px + start_near_node[0] - 1 + anchor[0], closest_py + start_near_node[1] - 1 + anchor[2])): - if info['synthesis'] is False: - closest_depths.append(abs(info['depth'])) - context_near_px, context_near_py = closest_pxs[np.array(closest_depths).argmax()], closest_pys[np.array(closest_depths).argmax()] - else: - context_near_px, context_near_py = context_near_pxs[distance.argmin()], context_near_pys[distance.argmin()] - context_near_node = (start_near_node[0]-1 + context_near_px, start_near_node[1]-1 + context_near_py) - far_node_list = [] - global_context_near_node = (context_near_node[0] + anchor[0], context_near_node[1] + anchor[2]) - if info_on_pix.get(global_context_near_node) is not None: - for info in info_on_pix[global_context_near_node]: - if info['synthesis'] is False: - context_near_node_3d = (global_context_near_node[0], global_context_near_node[1], info['depth']) - if global_mesh.nodes[context_near_node_3d].get('far') is not None: - for far_node in global_mesh.nodes[context_near_node_3d].get('far'): - far_node = (far_node[0] - anchor[0], far_node[1] - anchor[2], far_node[2]) - if mask[far_node[0], far_node[1]] == 0: - far_node_list.append([far_node[0], far_node[1]]) - if len(far_node_list) > 0: - far_nodes_dist = np.sum(np.abs(np.array(far_node_list) - np.array([[edge_px, edge_py]])), axis=1) - context_far_node = tuple(far_node_list[far_nodes_dist.argmin()]) - corresponding_far_edge = np.zeros_like(mask_edge) - corresponding_far_edge[context_far_node[0], context_far_node[1]] = 1 - surround_map = cv2.dilate(new_specific_edge_map.astype(np.uint8), - np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), - iterations=1) - specific_edge_map_wo_end_pt = new_specific_edge_map.copy() - specific_edge_map_wo_end_pt[end_near_node[0], end_near_node[1]] = 0 - surround_map_wo_end_pt = cv2.dilate(specific_edge_map_wo_end_pt.astype(np.uint8), - np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), - iterations=1) - surround_map_wo_end_pt[new_specific_edge_map > 0] = 0 - surround_map_wo_end_pt[context_near_node[0], context_near_node[1]] = 0 - surround_map = surround_map_wo_end_pt.copy() - _, far_edge_cc = cv2.connectedComponents(surround_map.astype(np.uint8), connectivity=4) 
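                # Label the 4-connected components of the one-pixel band around the
                # traced near edge; the code below picks the component reachable from
                # context_far_node (directly, via a 4-neighbour, or via a bevel pixel)
                # and marks it as the corresponding far edge.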
- start_far_node = None - accompany_far_node = None - if surround_map[context_far_node[0], context_far_node[1]] == 1: - start_far_node = context_far_node - else: - four_nes = [(context_far_node[0] - 1, context_far_node[1]), - (context_far_node[0] + 1, context_far_node[1]), - (context_far_node[0], context_far_node[1] - 1), - (context_far_node[0], context_far_node[1] + 1)] - candidate_bevel = [] - for ne in four_nes: - if surround_map[ne[0], ne[1]] == 1: - start_far_node = (ne[0], ne[1]) - break - elif (ne[0] != context_near_node[0] or ne[1] != context_near_node[1]) and \ - (ne[0] != start_near_node[0] or ne[1] != start_near_node[1]): - candidate_bevel.append((ne[0], ne[1])) - if start_far_node is None: - for ne in candidate_bevel: - if ne[0] == context_far_node[0]: - bevel_xys = [[ne[0] + 1, ne[1]], [ne[0] - 1, ne[1]]] - if ne[1] == context_far_node[1]: - bevel_xys = [[ne[0], ne[1] + 1], [ne[0], ne[1] - 1]] - for bevel_x, bevel_y in bevel_xys: - if surround_map[bevel_x, bevel_y] == 1: - start_far_node = (bevel_x, bevel_y) - accompany_far_node = (ne[0], ne[1]) - break - if start_far_node is not None: - break - if start_far_node is not None: - for far_edge_id in range(1, far_edge_cc.max() + 1): - specific_far_edge = (far_edge_cc == far_edge_id).astype(np.uint8) - if specific_far_edge[start_far_node[0], start_far_node[1]] == 1: - if accompany_far_node is not None: - specific_far_edge[accompany_far_node] = 1 - far_edge[specific_far_edge > 0] = 1 - far_edge_with_id[specific_far_edge > 0] = edge_id - end_far_candidates = np.zeros_like(far_edge) - end_far_candidates[end_near_node[0], end_near_node[1]] = 1 - end_far_candidates = cv2.dilate(end_far_candidates.astype(np.uint8), - np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), - iterations=1) - end_far_candidates[end_near_node[0], end_near_node[1]] = 0 - invalid_nodes = (((far_edge_cc != far_edge_id).astype(np.uint8) * \ - (far_edge_cc != 0).astype(np.uint8)).astype(np.uint8) + \ - (new_specific_edge_map).astype(np.uint8) + \ - (mask == 0).astype(np.uint8)).clip(0, 1) - end_far_candidates[invalid_nodes > 0] = 0 - far_edge[end_far_candidates > 0] = 1 - far_edge_with_id[end_far_candidates > 0] = edge_id - - far_edge[context_far_node[0], context_far_node[1]] = 1 - far_edge_with_id[context_far_node[0], context_far_node[1]] = edge_id - near_edge_with_id[(mask_edge_with_id == edge_id) > 0] = edge_id - uncleaned_far_edge = far_edge.copy() - far_edge[mask == 0] = 0 - - return far_edge, uncleaned_far_edge, far_edge_with_id, near_edge_with_id - -def get_MiDaS_samples(image_folder, depth_folder, config, specific=None, aft_certain=None): - lines = [os.path.splitext(os.path.basename(xx))[0] for xx in glob.glob(os.path.join(image_folder, '*' + config['img_format']))] - samples = [] - generic_pose = np.eye(4) - assert len(config['traj_types']) == len(config['x_shift_range']) ==\ - len(config['y_shift_range']) == len(config['z_shift_range']) == len(config['video_postfix']), \ - "The number of elements in 'traj_types', 'x_shift_range', 'y_shift_range', 'z_shift_range' and \ - 'video_postfix' should be equal." - tgt_pose = [[generic_pose * 1]] - tgts_poses = [] - for traj_idx in range(len(config['traj_types'])): - tgt_poses = [] - sx, sy, sz = path_planning(config['num_frames'], config['x_shift_range'][traj_idx], config['y_shift_range'][traj_idx], - config['z_shift_range'][traj_idx], path_type=config['traj_types'][traj_idx]) - for xx, yy, zz in zip(sx, sy, sz): - tgt_poses.append(generic_pose * 1.) 
- tgt_poses[-1][:3, -1] = np.array([xx, yy, zz]) - tgts_poses += [tgt_poses] - tgt_pose = generic_pose * 1 - - aft_flag = True - if aft_certain is not None and len(aft_certain) > 0: - aft_flag = False - for seq_dir in lines: - if specific is not None and len(specific) > 0: - if specific != seq_dir: - continue - if aft_certain is not None and len(aft_certain) > 0: - if aft_certain == seq_dir: - aft_flag = True - if aft_flag is False: - continue - samples.append({}) - sdict = samples[-1] - sdict['depth_fi'] = os.path.join(depth_folder, seq_dir + config['depth_format']) - sdict['ref_img_fi'] = os.path.join(image_folder, seq_dir + config['img_format']) - H, W = imageio.imread(sdict['ref_img_fi']).shape[:2] - sdict['int_mtx'] = np.array([[max(H, W), 0, W//2], [0, max(H, W), H//2], [0, 0, 1]]).astype(np.float32) - if sdict['int_mtx'].max() > 1: - sdict['int_mtx'][0, :] = sdict['int_mtx'][0, :] / float(W) - sdict['int_mtx'][1, :] = sdict['int_mtx'][1, :] / float(H) - sdict['ref_pose'] = np.eye(4) - sdict['tgt_pose'] = tgt_pose - sdict['tgts_poses'] = tgts_poses - sdict['video_postfix'] = config['video_postfix'] - sdict['tgt_name'] = [os.path.splitext(os.path.basename(sdict['depth_fi']))[0]] - sdict['src_pair_name'] = sdict['tgt_name'][0] - - return samples - -def get_valid_size(imap): - x_max = np.where(imap.sum(1).squeeze() > 0)[0].max() + 1 - x_min = np.where(imap.sum(1).squeeze() > 0)[0].min() - y_max = np.where(imap.sum(0).squeeze() > 0)[0].max() + 1 - y_min = np.where(imap.sum(0).squeeze() > 0)[0].min() - size_dict = {'x_max':x_max, 'y_max':y_max, 'x_min':x_min, 'y_min':y_min} - - return size_dict - -def dilate_valid_size(isize_dict, imap, dilate=[0, 0]): - osize_dict = copy.deepcopy(isize_dict) - osize_dict['x_min'] = max(0, osize_dict['x_min'] - dilate[0]) - osize_dict['x_max'] = min(imap.shape[0], osize_dict['x_max'] + dilate[0]) - osize_dict['y_min'] = max(0, osize_dict['y_min'] - dilate[0]) - osize_dict['y_max'] = min(imap.shape[1], osize_dict['y_max'] + dilate[1]) - - return osize_dict - -def crop_maps_by_size(size, *imaps): - omaps = [] - for imap in imaps: - omaps.append(imap[size['x_min']:size['x_max'], size['y_min']:size['y_max']].copy()) - - return omaps - -def smooth_cntsyn_gap(init_depth_map, mask_region, context_region, init_mask_region=None): - if init_mask_region is not None: - curr_mask_region = init_mask_region * 1 - else: - curr_mask_region = mask_region * 0 - depth_map = init_depth_map.copy() - for _ in range(2): - cm_mask = context_region + curr_mask_region - depth_s1 = np.roll(depth_map, 1, 0) - depth_s2 = np.roll(depth_map, -1, 0) - depth_s3 = np.roll(depth_map, 1, 1) - depth_s4 = np.roll(depth_map, -1, 1) - mask_s1 = np.roll(cm_mask, 1, 0) - mask_s2 = np.roll(cm_mask, -1, 0) - mask_s3 = np.roll(cm_mask, 1, 1) - mask_s4 = np.roll(cm_mask, -1, 1) - fluxin_depths = (depth_s1 * mask_s1 + depth_s2 * mask_s2 + depth_s3 * mask_s3 + depth_s4 * mask_s4) / \ - ((mask_s1 + mask_s2 + mask_s3 + mask_s4) + 1e-6) - fluxin_mask = (fluxin_depths != 0) * mask_region - init_mask = (fluxin_mask * (curr_mask_region >= 0).astype(np.float32) > 0).astype(np.uint8) - depth_map[init_mask > 0] = fluxin_depths[init_mask > 0] - if init_mask.shape[-1] > curr_mask_region.shape[-1]: - curr_mask_region[init_mask.sum(-1, keepdims=True) > 0] = 1 - else: - curr_mask_region[init_mask > 0] = 1 - depth_map[fluxin_mask > 0] = fluxin_depths[fluxin_mask > 0] - - return depth_map - -def read_MiDaS_depth(disp_fi, disp_rescale=10., h=None, w=None): - if 'npy' in os.path.splitext(disp_fi)[-1]: - disp = 
np.load(disp_fi) - else: - disp = imageio.imread(disp_fi).astype(np.float32) - disp = disp - disp.min() - disp = cv2.blur(disp / disp.max(), ksize=(3, 3)) * disp.max() - disp = (disp / disp.max()) * disp_rescale - if h is not None and w is not None: - disp = resize(disp / disp.max(), (h, w), order=1) * disp.max() - depth = 1. / np.maximum(disp, 0.05) - - return depth - -def follow_image_aspect_ratio(depth, image): - H, W = image.shape[:2] - image_aspect_ratio = H / W - dH, dW = depth.shape[:2] - depth_aspect_ratio = dH / dW - if depth_aspect_ratio > image_aspect_ratio: - resize_H = dH - resize_W = dH / image_aspect_ratio - else: - resize_W = dW - resize_H = dW * image_aspect_ratio - depth = resize(depth / depth.max(), - (int(resize_H), - int(resize_W)), - order=0) * depth.max() - - return depth - -def depth_resize(depth, origin_size, image_size): - if origin_size[0] is not 0: - max_depth = depth.max() - depth = depth / max_depth - depth = resize(depth, origin_size, order=1, mode='edge') - depth = depth * max_depth - else: - max_depth = depth.max() - depth = depth / max_depth - depth = resize(depth, image_size, order=1, mode='edge') - depth = depth * max_depth - - return depth - -def filter_irrelevant_edge(self_edge, other_edges, other_edges_with_id, current_edge_id, context, edge_ccs, mesh, anchor): - other_edges = other_edges.squeeze() - other_edges_with_id = other_edges_with_id.squeeze() - - self_edge = self_edge.squeeze() - dilate_self_edge = cv2.dilate(self_edge.astype(np.uint8), np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), iterations=1) - edge_ids = collections.Counter(other_edges_with_id.flatten()).keys() - other_edges_info = [] - # import ipdb - # ipdb.set_trace() - for edge_id in edge_ids: - edge_id = int(edge_id) - if edge_id >= 0: - condition = ((other_edges_with_id == edge_id) * other_edges * context).astype(np.uint8) - if dilate_self_edge[condition > 0].sum() == 0: - other_edges[other_edges_with_id == edge_id] = 0 - else: - num_condition, condition_labels = cv2.connectedComponents(condition, connectivity=8) - for condition_id in range(1, num_condition): - isolate_condition = ((condition_labels == condition_id) > 0).astype(np.uint8) - num_end_group, end_group = cv2.connectedComponents(((dilate_self_edge * isolate_condition) > 0).astype(np.uint8), connectivity=8) - if num_end_group == 1: - continue - for end_id in range(1, num_end_group): - end_pxs, end_pys = np.where((end_group == end_id)) - end_px, end_py = end_pxs[0], end_pys[0] - other_edges_info.append({}) - other_edges_info[-1]['edge_id'] = edge_id - # other_edges_info[-1]['near_depth'] = None - other_edges_info[-1]['diff'] = None - other_edges_info[-1]['edge_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['end_point_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['end_point_map'][(end_group == end_id)] = 1 - other_edges_info[-1]['forbidden_point_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['forbidden_point_map'][(end_group != end_id) * (end_group != 0)] = 1 - other_edges_info[-1]['forbidden_point_map'] = cv2.dilate(other_edges_info[-1]['forbidden_point_map'], kernel=np.array([[1,1,1],[1,1,1],[1,1,1]]), iterations=2) - for x in edge_ccs[edge_id]: - nx = x[0] - anchor[0] - ny = x[1] - anchor[1] - if nx == end_px and ny == end_py: - # other_edges_info[-1]['near_depth'] = abs(nx) - if mesh.nodes[x].get('far') is not None and len(mesh.nodes[x].get('far')) == 1: - other_edges_info[-1]['diff'] = abs(1./abs([*mesh.nodes[x].get('far')][0][2]) - 1./abs(x[2])) - else: - other_edges_info[-1]['diff'] 
= 0 - # if end_group[nx, ny] != end_id and end_group[nx, ny] > 0: - # continue - try: - if isolate_condition[nx, ny] == 1: - other_edges_info[-1]['edge_map'][nx, ny] = 1 - except: - pass - try: - other_edges_info = sorted(other_edges_info, key=lambda x : x['diff'], reverse=True) - except: - import pdb - pdb.set_trace() - # import pdb - # pdb.set_trace() - # other_edges = other_edges[..., None] - for other_edge in other_edges_info: - if other_edge['end_point_map'] is None: - import pdb - pdb.set_trace() - - other_edges = other_edges * context - - return other_edges, other_edges_info - -def require_depth_edge(context_edge, mask): - dilate_mask = cv2.dilate(mask, np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), iterations=1) - if (dilate_mask * context_edge).max() == 0: - return False - else: - return True - -def refine_color_around_edge(mesh, info_on_pix, edge_ccs, config, spdb=False): - H, W = mesh.graph['H'], mesh.graph['W'] - tmp_edge_ccs = copy.deepcopy(edge_ccs) - for edge_id, edge_cc in enumerate(edge_ccs): - if len(edge_cc) == 0: - continue - near_maps = np.zeros((H, W)).astype(np.bool) - far_maps = np.zeros((H, W)).astype(np.bool) - tmp_far_nodes = set() - far_nodes = set() - near_nodes = set() - end_nodes = set() - for i in range(5): - if i == 0: - for edge_node in edge_cc: - if mesh.nodes[edge_node].get('depth_edge_dilate_2_color_flag') is not True: - break - if mesh.nodes[edge_node].get('inpaint_id') == 1: - near_nodes.add(edge_node) - tmp_node = mesh.nodes[edge_node].get('far') - tmp_node = set(tmp_node) if tmp_node is not None else set() - tmp_far_nodes |= tmp_node - rmv_tmp_far_nodes = set() - for far_node in tmp_far_nodes: - if not(mesh.has_node(far_node) and mesh.nodes[far_node].get('inpaint_id') == 1): - rmv_tmp_far_nodes.add(far_node) - if len(tmp_far_nodes - rmv_tmp_far_nodes) == 0: - break - else: - for near_node in near_nodes: - near_maps[near_node[0], near_node[1]] = True - mesh.nodes[near_node]['refine_rgbd'] = True - mesh.nodes[near_node]['backup_depth'] = near_node[2] \ - if mesh.nodes[near_node].get('real_depth') is None else mesh.nodes[near_node]['real_depth'] - mesh.nodes[near_node]['backup_color'] = mesh.nodes[near_node]['color'] - for far_node in tmp_far_nodes: - if mesh.has_node(far_node) and mesh.nodes[far_node].get('inpaint_id') == 1: - far_nodes.add(far_node) - far_maps[far_node[0], far_node[1]] = True - mesh.nodes[far_node]['refine_rgbd'] = True - mesh.nodes[far_node]['backup_depth'] = far_node[2] \ - if mesh.nodes[far_node].get('real_depth') is None else mesh.nodes[far_node]['real_depth'] - mesh.nodes[far_node]['backup_color'] = mesh.nodes[far_node]['color'] - tmp_far_nodes = far_nodes - tmp_near_nodes = near_nodes - else: - tmp_far_nodes = new_tmp_far_nodes - tmp_near_nodes = new_tmp_near_nodes - new_tmp_far_nodes = None - new_tmp_near_nodes = None - new_tmp_far_nodes = set() - new_tmp_near_nodes = set() - for node in tmp_near_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and \ - near_maps[ne_node[0], ne_node[1]] == False: - if mesh.nodes[ne_node].get('inpaint_id') == 1: - new_tmp_near_nodes.add(ne_node) - near_maps[ne_node[0], ne_node[1]] = True - mesh.nodes[ne_node]['refine_rgbd'] = True - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - else: - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is 
None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - end_nodes.add(node) - near_nodes.update(new_tmp_near_nodes) - for node in tmp_far_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and \ - near_maps[ne_node[0], ne_node[1]] == False: - if mesh.nodes[ne_node].get('inpaint_id') == 1: - new_tmp_far_nodes.add(ne_node) - far_maps[ne_node[0], ne_node[1]] = True - mesh.nodes[ne_node]['refine_rgbd'] = True - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - else: - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - end_nodes.add(node) - far_nodes.update(new_tmp_far_nodes) - if len(far_nodes) == 0: - tmp_edge_ccs[edge_id] = set() - continue - for node in new_tmp_far_nodes | new_tmp_near_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and near_maps[ne_node[0], ne_node[1]] == False: - end_nodes.add(node) - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - tmp_end_nodes = end_nodes - - refine_nodes = near_nodes | far_nodes - remain_refine_nodes = copy.deepcopy(refine_nodes) - accum_idx = 0 - while len(remain_refine_nodes) > 0: - accum_idx += 1 - if accum_idx > 100: - break - new_tmp_end_nodes = None - new_tmp_end_nodes = set() - survive_tmp_end_nodes = set() - for node in tmp_end_nodes: - re_depth, re_color, re_count = 0, np.array([0., 0., 0.]), 0 - for ne_node in mesh.neighbors(node): - if mesh.nodes[ne_node].get('refine_rgbd') is True: - if ne_node not in tmp_end_nodes: - new_tmp_end_nodes.add(ne_node) - else: - try: - re_depth += mesh.nodes[ne_node]['backup_depth'] - re_color += mesh.nodes[ne_node]['backup_color'].astype(np.float32) - re_count += 1. 
- except: - import pdb; pdb.set_trace() - if re_count > 0: - re_depth = re_depth / re_count - re_color = re_color / re_count - mesh.nodes[node]['backup_depth'] = re_depth - mesh.nodes[node]['backup_color'] = re_color - mesh.nodes[node]['refine_rgbd'] = False - else: - survive_tmp_end_nodes.add(node) - for node in tmp_end_nodes - survive_tmp_end_nodes: - if node in remain_refine_nodes: - remain_refine_nodes.remove(node) - tmp_end_nodes = new_tmp_end_nodes - if spdb == True: - bfrd_canvas = np.zeros((H, W)) - bfrc_canvas = np.zeros((H, W, 3)).astype(np.uint8) - aftd_canvas = np.zeros((H, W)) - aftc_canvas = np.zeros((H, W, 3)).astype(np.uint8) - for node in refine_nodes: - bfrd_canvas[node[0], node[1]] = abs(node[2]) - aftd_canvas[node[0], node[1]] = abs(mesh.nodes[node]['backup_depth']) - bfrc_canvas[node[0], node[1]] = mesh.nodes[node]['color'].astype(np.uint8) - aftc_canvas[node[0], node[1]] = mesh.nodes[node]['backup_color'].astype(np.uint8) - f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, sharex=True, sharey=True); - ax1.imshow(bfrd_canvas); - ax2.imshow(aftd_canvas); - ax3.imshow(bfrc_canvas); - ax4.imshow(aftc_canvas); - plt.show() - import pdb; pdb.set_trace() - for node in refine_nodes: - if mesh.nodes[node].get('refine_rgbd') is not None: - mesh.nodes[node].pop('refine_rgbd') - mesh.nodes[node]['color'] = mesh.nodes[node]['backup_color'] - for info in info_on_pix[(node[0], node[1])]: - if info['depth'] == node[2]: - info['color'] = mesh.nodes[node]['backup_color'] - - return mesh, info_on_pix - -def refine_depth_around_edge(mask_depth, far_edge, uncleaned_far_edge, near_edge, mask, all_depth, config): - if isinstance(mask_depth, torch.Tensor): - if mask_depth.is_cuda: - mask_depth = mask_depth.cpu() - mask_depth = mask_depth.data - mask_depth = mask_depth.numpy() - if isinstance(far_edge, torch.Tensor): - if far_edge.is_cuda: - far_edge = far_edge.cpu() - far_edge = far_edge.data - far_edge = far_edge.numpy() - if isinstance(uncleaned_far_edge, torch.Tensor): - if uncleaned_far_edge.is_cuda: - uncleaned_far_edge = uncleaned_far_edge.cpu() - uncleaned_far_edge = uncleaned_far_edge.data - uncleaned_far_edge = uncleaned_far_edge.numpy() - if isinstance(near_edge, torch.Tensor): - if near_edge.is_cuda: - near_edge = near_edge.cpu() - near_edge = near_edge.data - near_edge = near_edge.numpy() - if isinstance(mask, torch.Tensor): - if mask.is_cuda: - mask = mask.cpu() - mask = mask.data - mask = mask.numpy() - mask = mask.squeeze() - uncleaned_far_edge = uncleaned_far_edge.squeeze() - far_edge = far_edge.squeeze() - near_edge = near_edge.squeeze() - mask_depth = mask_depth.squeeze() - dilate_far_edge = cv2.dilate(uncleaned_far_edge.astype(np.uint8), kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - near_edge[dilate_far_edge == 0] = 0 - dilate_near_edge = cv2.dilate(near_edge.astype(np.uint8), kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - far_edge[dilate_near_edge == 0] = 0 - init_far_edge = far_edge.copy() - init_near_edge = near_edge.copy() - for i in range(config['depth_edge_dilate_2']): - init_far_edge = cv2.dilate(init_far_edge, kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - init_far_edge[init_near_edge == 1] = 0 - init_near_edge = cv2.dilate(init_near_edge, kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - init_near_edge[init_far_edge == 1] = 0 - init_far_edge[mask == 0] = 0 - init_near_edge[mask == 0] = 0 - hole_far_edge = 1 - init_far_edge - hole_near_edge = 1 - 
init_near_edge - change = None - while True: - change = False - hole_far_edge[init_near_edge == 1] = 0 - hole_near_edge[init_far_edge == 1] = 0 - far_pxs, far_pys = np.where((hole_far_edge == 0) * (init_far_edge == 1) > 0) - current_hole_far_edge = hole_far_edge.copy() - for far_px, far_py in zip(far_pxs, far_pys): - min_px = max(far_px - 1, 0) - max_px = min(far_px + 2, mask.shape[0]-1) - min_py = max(far_py - 1, 0) - max_py = min(far_py + 2, mask.shape[1]-1) - hole_far = current_hole_far_edge[min_px: max_px, min_py: max_py] - tmp_mask = mask[min_px: max_px, min_py: max_py] - all_depth_patch = all_depth[min_px: max_px, min_py: max_py] * 0 - all_depth_mask = (all_depth_patch != 0).astype(np.uint8) - cross_element = np.array([[0,1,0],[1,1,1],[0,1,0]])[min_px - (far_px - 1): max_px - (far_px - 1), min_py - (far_py - 1): max_py - (far_py - 1)] - combine_mask = (tmp_mask + all_depth_mask).clip(0, 1) * hole_far * cross_element - tmp_patch = combine_mask * (mask_depth[min_px: max_px, min_py: max_py] + all_depth_patch) - number = np.count_nonzero(tmp_patch) - if number > 0: - mask_depth[far_px, far_py] = np.sum(tmp_patch).astype(np.float32) / max(number, 1e-6) - hole_far_edge[far_px, far_py] = 1 - change = True - near_pxs, near_pys = np.where((hole_near_edge == 0) * (init_near_edge == 1) > 0) - current_hole_near_edge = hole_near_edge.copy() - for near_px, near_py in zip(near_pxs, near_pys): - min_px = max(near_px - 1, 0) - max_px = min(near_px + 2, mask.shape[0]-1) - min_py = max(near_py - 1, 0) - max_py = min(near_py + 2, mask.shape[1]-1) - hole_near = current_hole_near_edge[min_px: max_px, min_py: max_py] - tmp_mask = mask[min_px: max_px, min_py: max_py] - all_depth_patch = all_depth[min_px: max_px, min_py: max_py] * 0 - all_depth_mask = (all_depth_patch != 0).astype(np.uint8) - cross_element = np.array([[0,1,0],[1,1,1],[0,1,0]])[min_px - near_px + 1:max_px - near_px + 1, min_py - near_py + 1:max_py - near_py + 1] - combine_mask = (tmp_mask + all_depth_mask).clip(0, 1) * hole_near * cross_element - tmp_patch = combine_mask * (mask_depth[min_px: max_px, min_py: max_py] + all_depth_patch) - number = np.count_nonzero(tmp_patch) - if number > 0: - mask_depth[near_px, near_py] = np.sum(tmp_patch) / max(number, 1e-6) - hole_near_edge[near_px, near_py] = 1 - change = True - if change is False: - break - - return mask_depth - - - -def vis_depth_edge_connectivity(depth, config): - disp = 1./depth - u_diff = (disp[1:, :] - disp[:-1, :])[:-1, 1:-1] - b_diff = (disp[:-1, :] - disp[1:, :])[1:, 1:-1] - l_diff = (disp[:, 1:] - disp[:, :-1])[1:-1, :-1] - r_diff = (disp[:, :-1] - disp[:, 1:])[1:-1, 1:] - u_over = (np.abs(u_diff) > config['depth_threshold']).astype(np.float32) - b_over = (np.abs(b_diff) > config['depth_threshold']).astype(np.float32) - l_over = (np.abs(l_diff) > config['depth_threshold']).astype(np.float32) - r_over = (np.abs(r_diff) > config['depth_threshold']).astype(np.float32) - concat_diff = np.stack([u_diff, b_diff, r_diff, l_diff], axis=-1) - concat_over = np.stack([u_over, b_over, r_over, l_over], axis=-1) - over_diff = concat_diff * concat_over - pos_over = (over_diff > 0).astype(np.float32).sum(-1).clip(0, 1) - neg_over = (over_diff < 0).astype(np.float32).sum(-1).clip(0, 1) - neg_over[(over_diff > 0).astype(np.float32).sum(-1) > 0] = 0 - _, edge_label = cv2.connectedComponents(pos_over.astype(np.uint8), connectivity=8) - T_junction_maps = np.zeros_like(pos_over) - for edge_id in range(1, edge_label.max() + 1): - edge_map = (edge_label == edge_id).astype(np.uint8) - edge_map = 
np.pad(edge_map, pad_width=((1,1),(1,1)), mode='constant') - four_direc = np.roll(edge_map, 1, 1) + np.roll(edge_map, -1, 1) + np.roll(edge_map, 1, 0) + np.roll(edge_map, -1, 0) - eight_direc = np.roll(np.roll(edge_map, 1, 1), 1, 0) + np.roll(np.roll(edge_map, 1, 1), -1, 0) + \ - np.roll(np.roll(edge_map, -1, 1), 1, 0) + np.roll(np.roll(edge_map, -1, 1), -1, 0) - eight_direc = (eight_direc + four_direc)[1:-1,1:-1] - pos_over[eight_direc > 2] = 0 - T_junction_maps[eight_direc > 2] = 1 - _, edge_label = cv2.connectedComponents(pos_over.astype(np.uint8), connectivity=8) - edge_label = np.pad(edge_label, 1, mode='constant') - - return edge_label - - - -def max_size(mat, value=0): - if not (mat and mat[0]): return (0, 0) - it = iter(mat) - prev = [(el==value) for el in next(it)] - max_size = max_rectangle_size(prev) - for row in it: - hist = [(1+h) if el == value else 0 for h, el in zip(prev, row)] - max_size = max(max_size, max_rectangle_size(hist), key=get_area) - prev = hist - return max_size - -def max_rectangle_size(histogram): - Info = namedtuple('Info', 'start height') - stack = [] - top = lambda: stack[-1] - max_size = (0, 0) # height, width of the largest rectangle - pos = 0 # current position in the histogram - for pos, height in enumerate(histogram): - start = pos # position where rectangle starts - while True: - if not stack or height > top().height: - stack.append(Info(start, height)) # push - if stack and height < top().height: - max_size = max(max_size, (top().height, (pos-top().start)), - key=get_area) - start, _ = stack.pop() - continue - break # height == top().height goes here - - pos += 1 - for start, height in stack: - max_size = max(max_size, (height, (pos-start)), - key=get_area) - - return max_size - -def get_area(size): - return reduce(mul, size) - -def find_anchors(matrix): - matrix = [[*x] for x in matrix] - mh, mw = max_size(matrix) - matrix = np.array(matrix) - # element = np.zeros((mh, mw)) - for i in range(matrix.shape[0] + 1 - mh): - for j in range(matrix.shape[1] + 1 - mw): - if matrix[i:i + mh, j:j + mw].max() == 0: - return i, i + mh, j, j + mw - -def find_largest_rect(dst_img, bg_color=(128, 128, 128)): - valid = np.any(dst_img[..., :3] != bg_color, axis=-1) - dst_h, dst_w = dst_img.shape[:2] - ret, labels = cv2.connectedComponents(np.uint8(valid == False)) - red_mat = np.zeros_like(labels) - # denoise - for i in range(1, np.max(labels)+1, 1): - x, y, w, h = cv2.boundingRect(np.uint8(labels==i)) - if x == 0 or (x+w) == dst_h or y == 0 or (y+h) == dst_w: - red_mat[labels==i] = 1 - # crop - t, b, l, r = find_anchors(red_mat) - - return t, b, l, r diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/__init__.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/morph/unsupervised_morph.py b/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/morph/unsupervised_morph.py deleted file mode 100644 index 55c70f13e0ff7d4e89726e6b9c7932649afdf068..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/morph/unsupervised_morph.py +++ /dev/null @@ -1,142 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-# - -import codecs, sys, itertools,re,os -import morfessor - -from functools import lru_cache - -from indicnlp import langinfo -from indicnlp import common -from indicnlp.tokenize import indic_tokenize - -# Unsupervised Morphological Analyser for Indian languages. -# -# @author Anoop Kunchukuttan -# - -class MorphAnalyzerI(object): - """ - Interface for Morph Analyzer - """ - - def morph_analyze(word): - pass - - def morph_analyze_document(tokens): - pass - -class UnsupervisedMorphAnalyzer(MorphAnalyzerI): - """ - Unsupervised Morphological analyser built using Morfessor 2.0 - """ - - def __init__(self,lang,add_marker=False): - self.lang=lang - self.add_marker=add_marker - - io = morfessor.MorfessorIO() - self._morfessor_model=io.read_any_model(os.path.join(common.INDIC_RESOURCES_PATH,'morph','morfessor','{}.model'.format(lang))) - - self._script_range_pat=r'^[{}-{}]+$'.format(chr(langinfo.SCRIPT_RANGES[lang][0]),chr(langinfo.SCRIPT_RANGES[lang][1])) - self._script_check_re=re.compile(self._script_range_pat) - - def _contains_number(self,text): - if self.lang in langinfo.SCRIPT_RANGES: - for c in text: - offset=ord(c)-langinfo.SCRIPT_RANGES[self.lang][0] - if offset >=langinfo.NUMERIC_OFFSET_START and offset <= langinfo.NUMERIC_OFFSET_END: - return True - return False - - def _morphanalysis_needed(self,word): - return self._script_check_re.match(word) and not self._contains_number(word) - - @lru_cache(maxsize=16384) - def morph_analyze(self,word): - """ - Morphanalyzes a single word and returns a list of component morphemes - - @param word: string input word - """ - m_list=[] - if self._morphanalysis_needed(word): - val=self._morfessor_model.viterbi_segment(word) - m_list=val[0] - if self.add_marker: - m_list= [ '{}_S_'.format(m) if i>0 else '{}_R_'.format(m) for i,m in enumerate(m_list)] - else: - if self.add_marker: - word='{}_E_'.format(word) - m_list=[word] - return m_list - - ### Older implementation - #val=self._morfessor_model.viterbi_segment(word) - #m_list=val[0] - #if self.add_marker: - # m_list= [ u'{}_S_'.format(m) if i>0 else u'{}_R_'.format(m) for i,m in enumerate(m_list)] - #return m_list - - - def morph_analyze_document(self,tokens): - """ - Morphanalyzes a document, represented as a list of tokens - Each word is morphanalyzed and result is a list of morphemes constituting the document - - @param tokens: string sequence of words - - @return list of segments in the document after morph analysis - """ - - out_tokens=[] - for token in tokens: - morphs=self.morph_analyze(token) - out_tokens.extend(morphs) - return out_tokens - - #### Older implementation - #out_tokens=[] - #for token in tokens: - # if self._morphanalysis_needed(token): - # morphs=self.morph_analyze(token) - # out_tokens.extend(morphs) - # else: - # if self.add_marker: - # token=u'{}_E_'.format(token) - # out_tokens.append(token) - #return out_tokens - - -if __name__ == '__main__': - - if len(sys.argv)<4: - print("Usage: python unsupervised_morph.py []") - sys.exit(1) - - language=sys.argv[3] - common.INDIC_RESOURCES_PATH=sys.argv[4] - - add_marker=False - - if len(sys.argv)==6: - add_marker= True if sys.argv[5] == 'True' else False - - print('Loading morph analyser for ' + language) - analyzer=UnsupervisedMorphAnalyzer(language,add_marker) - print('Loaded morph analyser for ' + language) - - with codecs.open(sys.argv[1],'r','utf-8') as ifile: - with codecs.open(sys.argv[2],'w','utf-8') as ofile: - for line in ifile.readlines(): - line=line.strip() - tokens=indic_tokenize.trivial_tokenize(line) - 
morph_tokens=analyzer.morph_analyze_document(tokens) - ofile.write(' '.join(morph_tokens)) - ofile.write('\n') - diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/maskrcnn_whu_config.py b/spaces/KyanChen/RSPrompter/configs/rsprompter/maskrcnn_whu_config.py deleted file mode 100644 index 6fc3d0b2624b1765bbd6edad80f83ee541915c40..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/configs/rsprompter/maskrcnn_whu_config.py +++ /dev/null @@ -1,349 +0,0 @@ -custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models', 'mmdet.models'], allow_failed_imports=False) - -max_epochs = 150 - -optimizer = dict( - type='AdamW', - lr=0.0005, - weight_decay=1e-4 -) - -param_scheduler = [ - # warm up learning rate scheduler - dict( - type='LinearLR', - start_factor=1e-4, - by_epoch=True, - begin=0, - end=1, - # update by iter - convert_to_iter_based=True), - # main learning rate scheduler - dict( - type='CosineAnnealingLR', - T_max=max_epochs, - by_epoch=True, - begin=1, - end=max_epochs, - ) -] - -param_scheduler_callback = dict( - type='ParamSchedulerHook' -) - -evaluator_ = dict( - type='MeanAveragePrecision', - # iou_type='segm', - iou_type='bbox', - # dist_sync_on_step=True, - # compute_on_cpu=True, -) - -evaluator_ = dict( - type='CocoPLMetric', - metric=['bbox', 'segm'], - proposal_nums=[1, 10, 100] -) - -evaluator = dict( - val_evaluator=evaluator_, -) - - -image_size = (512, 512) -data_preprocessor = dict( - type='mmdet.DetDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True, - pad_mask=True, - mask_pad_value=0, - pad_size_divisor=32 -) - -num_things_classes = 1 -num_stuff_classes = 0 -num_classes = num_things_classes + num_stuff_classes -num_queries = 90 - -# model settings -model = dict( - type='mmdet.MaskRCNN', - data_preprocessor=data_preprocessor, - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50') - ), - neck=dict( - type='mmdet.FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='mmdet.RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='mmdet.AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='mmdet.L1Loss', loss_weight=1.0)), - roi_head=dict( - type='mmdet.StandardRoIHead', - bbox_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='mmdet.Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=num_classes, - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='mmdet.L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - 
featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='mmdet.FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=num_classes, - loss_mask=dict( - type='mmdet.CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5) - ) -) - - -model_cfg = dict( - type='MMDetPLer', - hyperparameters=dict( - optimizer=optimizer, - param_scheduler=param_scheduler, - evaluator=evaluator, - ), - whole_model=model, -) - -task_name = 'whu_ins' -exp_name = 'E20230525_0' -logger = dict( - type='WandbLogger', - project=task_name, - group='maskrcnn', - name=exp_name -) -# logger = None - - -callbacks = [ - param_scheduler_callback, - dict( - type='ModelCheckpoint', - dirpath=f'results/{task_name}/{exp_name}/checkpoints', - save_last=True, - mode='max', - monitor='valmap_0', - save_top_k=2, - filename='epoch_{epoch}-map_{valmap_0:.4f}' - ), - dict( - type='LearningRateMonitor', - logging_interval='step' - ) -] - - -trainer_cfg = dict( - compiled_model=False, - accelerator="auto", - strategy="auto", - # strategy="ddp", - # strategy='ddp_find_unused_parameters_true', - # precision='32', - # precision='16-mixed', - devices=4, - default_root_dir=f'results/{task_name}/{exp_name}', - # default_root_dir='results/tmp', - max_epochs=max_epochs, - logger=logger, - callbacks=callbacks, - log_every_n_steps=20, - check_val_every_n_epoch=10, - benchmark=True, - # sync_batchnorm=True, - # fast_dev_run=True, - - # limit_train_batches=1, - # limit_val_batches=0, - # limit_test_batches=None, - # limit_predict_batches=None, - # overfit_batches=0.0, - - # val_check_interval=None, - # num_sanity_val_steps=1, - # enable_checkpointing=None, - # enable_progress_bar=None, - # enable_model_summary=None, - # accumulate_grad_batches=32, - # gradient_clip_val=15, - # gradient_clip_algorithm='norm', - # deterministic=None, - # inference_mode: bool=True, - use_distributed_sampler=True, - # profiler="simple", - # detect_anomaly=False, - # barebones=False, - # plugins=None, - # reload_dataloaders_every_n_epochs=0, -) - - -backend_args = None -train_pipeline = [ - dict(type='mmdet.LoadImageFromFile'), - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='mmdet.Resize', scale=image_size), - dict(type='mmdet.RandomFlip', prob=0.5), - dict(type='mmdet.PackDetInputs') -] - -test_pipeline = [ - dict(type='mmdet.LoadImageFromFile', backend_args=backend_args), - dict(type='mmdet.Resize', scale=image_size), - # 
If you don't have a gt annotation, delete the pipeline - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor')) -] - - -train_batch_size_per_gpu = 8 -train_num_workers = 4 -test_batch_size_per_gpu = 8 -test_num_workers = 4 -persistent_workers = True - -data_parent = '/Users/kyanchen/datasets/Building/WHU' -train_data_prefix = 'train/' -val_data_prefix = 'test/' - -dataset_type = 'WHUInsSegDataset' - -val_loader = dict( - batch_size=test_batch_size_per_gpu, - num_workers=test_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - ann_file='annotations/WHU_building_test.json', - data_prefix=dict(img_path=val_data_prefix+'/image', seg_path=val_data_prefix+'/label'), - test_mode=True, - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=test_pipeline, - backend_args=backend_args, - ) -) - -datamodule_cfg = dict( - type='PLDataModule', - train_loader=dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - ann_file='annotations/WHU_building_train.json', - data_prefix=dict(img_path=train_data_prefix+'/image', seg_path=train_data_prefix+'/label'), - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=train_pipeline, - backend_args=backend_args) - ), - val_loader=val_loader, - test_loader=val_loader, - predict_loader=val_loader -) \ No newline at end of file diff --git a/spaces/LTputin/Janitor_AI/Dockerfile b/spaces/LTputin/Janitor_AI/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/LTputin/Janitor_AI/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/LightFury9/knee_osteoarthritis_classification/README.md b/spaces/LightFury9/knee_osteoarthritis_classification/README.md deleted file mode 100644 index f2eba24681a6dec8bbe791eecb9a09ec27976324..0000000000000000000000000000000000000000 --- a/spaces/LightFury9/knee_osteoarthritis_classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Knee Osteoarthritis Classification -emoji: 🐨 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Linaqruf/Animagine-XL/style.css b/spaces/Linaqruf/Animagine-XL/style.css deleted file mode 100644 index 50214b324f2250bfff6d884837e2fc3c2242821b..0000000000000000000000000000000000000000 --- a/spaces/Linaqruf/Animagine-XL/style.css +++ /dev/null @@ -1,60 +0,0 @@ -h1 { - text-align: center; - font-size: 10vw; /* relative to the viewport width */ -} - -h2 { - text-align: center; - font-size: 10vw; /* relative to the viewport width */ -} - -#duplicate-button { - margin: auto; - color: #fff; - background: #1565c0; - border-radius: 100vh; -} - -#component-0 { - max-width: 80%; /* relative to the parent element's width */ 
- margin: auto; - padding-top: 1.5rem; -} - -/* You can also use media queries to adjust your style for different screen sizes */ -@media (max-width: 600px) { - #component-0 { - max-width: 90%; - padding-top: 1rem; - } -} - -#gallery .grid-wrap{ - min-height: 25%; -} - -#title-container { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; /* Adjust this value to position the title vertically */ - } - -#title { - font-size: 3em; - text-align: center; - color: #333; - font-family: 'Helvetica Neue', sans-serif; - text-transform: uppercase; - background: transparent; - } - -#title span { - background: -webkit-linear-gradient(45deg, #4EACEF, #28b485); - -webkit-background-clip: text; - -webkit-text-fill-color: transparent; -} - -#subtitle { - text-align: center; -} diff --git a/spaces/Manjushri/MusicGen/tests/modules/test_transformer.py b/spaces/Manjushri/MusicGen/tests/modules/test_transformer.py deleted file mode 100644 index d6092963d275ba101e016fd448fd0b456d918c27..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/tests/modules/test_transformer.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import ( - StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend) - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. - # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. - for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) 
- tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - torch.manual_seed(1234) - for backend in ['torch', 'xformers']: - set_efficient_attention_backend(backend) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), ((y - y2).norm(), backend) - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. - for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - torch.manual_seed(1234) - for backend in ['torch', 'xformers']: - set_efficient_attention_backend(backend) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly yhe same, - # but with norm_first=False, we get 2 normalization in a row - # and the epsilon value leads to a tiny change. - atol = 0. if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. 
- y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm() - - # Now let's check that streaming is working properly. - with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/Manthanx/catsdogs/README.md b/spaces/Manthanx/catsdogs/README.md deleted file mode 100644 index e5ef8cccbab7f5eaad7d8c9587a143387d8e475e..0000000000000000000000000000000000000000 --- a/spaces/Manthanx/catsdogs/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Catsdogs -emoji: 📚 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Marshalls/testmtd/training/train.py b/spaces/Marshalls/testmtd/training/train.py deleted file mode 100644 index e619c745cef5818aa2299568144c149d5674ce88..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/training/train.py +++ /dev/null @@ -1,151 +0,0 @@ -import sys -import os -THIS_DIR = os.path.dirname(os.path.abspath(__file__)) -ROOT_DIR = os.path.abspath(os.path.join(THIS_DIR, os.pardir)) -sys.path.append(ROOT_DIR) -import glob -import torch -print(torch.cuda.is_available()) -from training.datasets import create_dataset, create_dataloader -print("HIII") -from models import create_model -import pytorch_lightning as pl -from training.options.train_options import TrainOptions -from 
pytorch_lightning import Trainer -from pytorch_lightning.loggers import TensorBoardLogger -print("HIII") -from pytorch_lightning.plugins import DDPPlugin -from pytorch_lightning.plugins.training_type.deepspeed import DeepSpeedPlugin -from pytorch_lightning.callbacks import ModelCheckpoint - - -from training.utils import get_latest_checkpoint -from mpi4py import MPI -comm = MPI.COMM_WORLD -rank = comm.Get_rank() -size = comm.Get_size() -print(rank) - - -if __name__ == '__main__': - pl.seed_everything(69420) - opt = TrainOptions().parse() - #Path(opt.checkpoints_dir+"/"+opt.experiment_name).mkdir(parents=True,exist_ok=True) - if rank == 0: - if not os.path.exists(opt.checkpoints_dir+"/"+opt.experiment_name): - os.makedirs(opt.checkpoints_dir+"/"+opt.experiment_name) - print("loaded options") - print(opt.experiment_name) - model = create_model(opt) - print("loaded model") - if "tpu_cores" in vars(opt) and opt.tpu_cores is not None and opt.tpu_cores > 0: - plugins = None - elif opt.plugins is None: - print("DDPPlugin") - plugins = DDPPlugin(find_unused_parameters=opt.find_unused_parameters, num_nodes=opt.num_nodes) - elif opt.plugins == "deepspeed": - deepspeed_config = { - "zero_optimization": { - "stage": 2, - "cpu_offload":False, - }, - #'train_batch_size': opt.batch_size, - 'gradient_clipping': opt.gradient_clip_val, - 'fp16': { - 'enabled': opt.precision == 16, - 'loss_scale': 0, - 'initial_scale_power': 15, - }, - } - plugins = DeepSpeedPlugin(config=deepspeed_config) - else: - #ddpplugin = DDPPlugin(find_unused_parameters=opt.find_unused_parameters, num_nodes=opt.num_nodes) - #plugins = [ddpplugin, opt.plugins] - plugins = opt.plugins - - ##Datasets and dataloaders - train_dataset = create_dataset(opt) - train_dataset.setup() - train_dataloader = create_dataloader(train_dataset) - if opt.do_validation: - val_dataset = create_dataset(opt, split="val") - val_dataset.setup() - val_dataloader = create_dataloader(val_dataset, split="val") - if opt.do_testing: - test_dataset = create_dataset(opt, split="test") - test_dataset.setup() - test_dataloader = create_dataloader(test_dataset, split="test") - print('#training sequences = {:d}'.format(len(train_dataset))) - - default_save_path = opt.checkpoints_dir+"/"+opt.experiment_name - - logger = TensorBoardLogger(opt.checkpoints_dir, name=opt.experiment_name, default_hp_metric=False) - checkpoint_callback = ModelCheckpoint( - ##### - monitor = 'loss', - save_top_k = 5, - every_n_train_steps = 1000, - # every_n_train_steps = 10, - ) - callbacks = [checkpoint_callback] - args = Trainer.parse_argparser(opt) - - if opt.continue_train: - print("CONTINUE TRAIN") - #TODO: add option to override saved hparams when doing continue_train with an hparams file, or even make that default - logs_path = default_save_path - latest_file = get_latest_checkpoint(logs_path) - print(latest_file) - if opt.load_weights_only: - state_dict = torch.load(latest_file) - state_dict = state_dict['state_dict'] - load_strict = True - if opt.only_load_in_state_dict != "": - state_dict = {k:v for k,v in state_dict.items() if (opt.only_load_in_state_dict in k)} - load_strict = False - if opt.ignore_in_state_dict != "": - state_dict = {k:v for k,v in state_dict.items() if not (opt.ignore_in_state_dict in k)} - load_strict = False - model.load_state_dict(state_dict, strict=load_strict) - trainer = Trainer.from_argparse_args(args, logger=logger, default_root_dir=default_save_path, plugins=plugins, callbacks=callbacks) - else: - trainer = Trainer.from_argparse_args(args, logger=logger, 
default_root_dir=default_save_path, resume_from_checkpoint=latest_file, plugins=plugins, callbacks=callbacks) - else: - trainer = Trainer.from_argparse_args(args, logger=logger, default_root_dir=default_save_path, plugins=plugins, callbacks=callbacks) - - #Tuning - if opt.do_tuning: - if opt.do_validation: - trainer.tune(model, train_dataloader, val_dataloader) - else: - trainer.tune(model, train_dataloader) - - #Training - if not opt.skip_training: - if opt.do_validation: - trainer.fit(model, train_dataloader, val_dataloader) - else: - trainer.fit(model, train_dataloader) - - #evaluating on test set - if opt.do_testing: - print("TESTING") - logs_path = default_save_path - latest_file = get_latest_checkpoint(logs_path) - print(latest_file) - state_dict = torch.load(latest_file) - model.load_state_dict(state_dict['state_dict']) - trainer.test(model, test_dataloader) - - # trainer = Trainer(logger=logger) - # # trainer.test(model, train_dataloader) - # logs_path = default_save_path - # checkpoint_subdirs = [(d,int(d.split("_")[1])) for d in os.listdir(logs_path) if os.path.isdir(logs_path+"/"+d)] - # checkpoint_subdirs = sorted(checkpoint_subdirs,key=lambda t: t[1]) - # checkpoint_path=logs_path+"/"+checkpoint_subdirs[-1][0]+"/checkpoints/" - # list_of_files = glob.glob(checkpoint_path+'/*') # * means all if need specific format then *.csv - # latest_file = max(list_of_files, key=os.path.getctime) - # print(latest_file) - # trainer.test(model, test_dataloaders=test_dataloader, ckpt_path=latest_file) - # trainer.test(test_dataloaders=test_dataloader, ckpt_path=latest_file) - # trainer.test(test_dataloaders=test_dataloader) diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/hubert/hubert_model_onnx.py b/spaces/MashiroSA/sovits-emu-voice-transform/hubert/hubert_model_onnx.py deleted file mode 100644 index d18f3c2a0fc29592a573a9780308d38f059640b9..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/hubert/hubert_model_onnx.py +++ /dev/null @@ -1,217 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: 
torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - def forward(self, x): - return self.units(x) - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), 
device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. - Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/__init__.py deleted file mode 100644 index 0f33124ed23fc6f27119a37bcb5ab004d3572be0..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .activation import build_activation_layer -from .context_block import ContextBlock -from .conv import build_conv_layer -from .conv2d_adaptive_padding import Conv2dAdaptivePadding -from .conv_module import ConvModule -from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d -from .depthwise_separable_conv_module import DepthwiseSeparableConvModule -from .drop import Dropout, DropPath -from .generalized_attention import GeneralizedAttention -from .hsigmoid import HSigmoid -from .hswish import HSwish -from .non_local import NonLocal1d, NonLocal2d, NonLocal3d -from .norm import build_norm_layer, is_norm -from .padding import build_padding_layer -from .plugin import build_plugin_layer -from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS) -from .scale import Scale -from .swish import Swish -from .upsample import build_upsample_layer -from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d, - Linear, MaxPool2d, MaxPool3d) - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', 'build_padding_layer', 'build_upsample_layer', - 'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention', - 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS', - 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d', - 'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear', - 'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d', - 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath' -] diff --git a/spaces/MetaWabbit/Auto-GPT/tests/test_image_gen.py b/spaces/MetaWabbit/Auto-GPT/tests/test_image_gen.py deleted file mode 100644 index 
19c57e427d5c1b84aa7f72925733d0056ddf5268..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/tests/test_image_gen.py +++ /dev/null @@ -1,102 +0,0 @@ -import hashlib -import os -import unittest - -from PIL import Image - -from autogpt.commands.image_gen import generate_image, generate_image_with_sd_webui -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - - -def lst(txt): - return txt.split(":")[1].strip() - - -@unittest.skipIf(os.getenv("CI"), "Skipping image generation tests") -class TestImageGen(unittest.TestCase): - def setUp(self): - self.config = Config() - - def test_dalle(self): - self.config.image_provider = "dalle" - - # Test using size 256 - result = lst(generate_image("astronaut riding a horse", 256)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (256, 256)) - image_path.unlink() - - # Test using size 512 - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - def test_huggingface(self): - self.config.image_provider = "huggingface" - - # Test using SD 1.4 model and size 512 - self.config.huggingface_image_model = "CompVis/stable-diffusion-v1-4" - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - # Test using SD 2.1 768 model and size 768 - self.config.huggingface_image_model = "stabilityai/stable-diffusion-2-1" - result = lst(generate_image("astronaut riding a horse", 768)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (768, 768)) - image_path.unlink() - - def test_sd_webui(self): - self.config.image_provider = "sd_webui" - return - - # Test using size 128 - result = lst(generate_image_with_sd_webui("astronaut riding a horse", 128)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (128, 128)) - image_path.unlink() - - # Test using size 64 and negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", - negative_prompt="horse", - size=64, - extra={"seed": 123}, - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - neg_image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - # Same test as above but without the negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", image_size=64, size=1, extra={"seed": 123} - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - self.assertNotEqual(image_hash, neg_image_hash) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Miuzarte/SUI-svc-3.0/inference_main.py b/spaces/Miuzarte/SUI-svc-3.0/inference_main.py deleted file mode 100644 index 3af45834539d111eba469d7a3af3598e2b5c9e82..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/inference_main.py +++ /dev/null @@ -1,58 +0,0 @@ -import io -import logging -import time -from pathlib 
import Path - -import librosa -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - -model_path = "logs/48k/G_1M111000_sing.pth" -config_path = "configs/config.json" -svc_model = Svc(model_path, config_path) -infer_tool.mkdir(["raw", "results"]) - -# Multiple wav files are supported; place them in the "raw" folder -clean_names = [] -# Example: clean_names = ["千千阙歌_1", "千千阙歌_2", "千千阙歌_3", "千千阙歌_4", "千千阙歌_5", "千千阙歌_6", "千千阙歌_7", "千千阙歌_8"] -# Too many Chinese characters can trigger an encoding error; just split the list into two batches -trans = [0] # Pitch shift in semitones; positive and negative values are supported -spk_list = ['suiji'] # Speaker timbres to synthesize simultaneously in each run -slice_db = -40 # Default -40; use -30 for noisy audio, -50 for dry vocals where breaths should be kept -wav_format = 'wav' # Audio output format - -infer_tool.fill_a_to_b(trans, clean_names) -for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('skipping empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - - res_path = f'./results/{clean_name}_{tran}key_{spk}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) diff --git a/spaces/Mrleo/MyChatGPT/chatgpt - macOS.command b/spaces/Mrleo/MyChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/Mrleo/MyChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo "Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). To stop ChuanhuChatbot, use \"pkill -f 'ChuanhuChatbot'\" in the terminal." \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/callbacks.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/callbacks.py deleted file mode 100644 index 985d0c60cc0b866e10ad350986c004e4ea4ac161..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/callbacks.py +++ /dev/null @@ -1,258 +0,0 @@ -# Lint as: python3 -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Common modules for callbacks.""" -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import os -from typing import Any, List, MutableMapping, Text -from absl import logging -import tensorflow as tf - -from official.utils.misc import keras_utils -from official.vision.image_classification import optimizer_factory - - -def get_callbacks(model_checkpoint: bool = True, - include_tensorboard: bool = True, - time_history: bool = True, - track_lr: bool = True, - write_model_weights: bool = True, - apply_moving_average: bool = False, - initial_step: int = 0, - batch_size: int = 0, - log_steps: int = 0, - model_dir: str = None) -> List[tf.keras.callbacks.Callback]: - """Get all callbacks.""" - model_dir = model_dir or '' - callbacks = [] - if model_checkpoint: - ckpt_full_path = os.path.join(model_dir, 'model.ckpt-{epoch:04d}') - callbacks.append(tf.keras.callbacks.ModelCheckpoint( - ckpt_full_path, save_weights_only=True, verbose=1)) - if include_tensorboard: - callbacks.append( - CustomTensorBoard( - log_dir=model_dir, - track_lr=track_lr, - initial_step=initial_step, - write_images=write_model_weights)) - if time_history: - callbacks.append( - keras_utils.TimeHistory( - batch_size, - log_steps, - logdir=model_dir if include_tensorboard else None)) - if apply_moving_average: - # Save moving average model to a different file so that - # we can resume training from a checkpoint - ckpt_full_path = os.path.join( - model_dir, 'average', 'model.ckpt-{epoch:04d}') - callbacks.append(AverageModelCheckpoint( - update_weights=False, - filepath=ckpt_full_path, - save_weights_only=True, - verbose=1)) - callbacks.append(MovingAverageCallback()) - return callbacks - - -def get_scalar_from_tensor(t: tf.Tensor) -> int: - """Utility function to convert a Tensor to a scalar.""" - t = tf.keras.backend.get_value(t) - if callable(t): - return t() - else: - return t - - -class CustomTensorBoard(tf.keras.callbacks.TensorBoard): - """A customized TensorBoard callback that tracks additional datapoints. - - Metrics tracked: - - Global learning rate - - Attributes: - log_dir: the path of the directory where to save the log files to be parsed - by TensorBoard. - track_lr: `bool`, whether or not to track the global learning rate. - initial_step: the initial step, used for preemption recovery. - **kwargs: Additional arguments for backwards compatibility. Possible key is - `period`. 
- """ - - # TODO(b/146499062): track params, flops, log lr, l2 loss, - # classification loss - - def __init__(self, - log_dir: str, - track_lr: bool = False, - initial_step: int = 0, - **kwargs): - super(CustomTensorBoard, self).__init__(log_dir=log_dir, **kwargs) - self.step = initial_step - self._track_lr = track_lr - - def on_batch_begin(self, - epoch: int, - logs: MutableMapping[str, Any] = None) -> None: - self.step += 1 - if logs is None: - logs = {} - logs.update(self._calculate_metrics()) - super(CustomTensorBoard, self).on_batch_begin(epoch, logs) - - def on_epoch_begin(self, - epoch: int, - logs: MutableMapping[str, Any] = None) -> None: - if logs is None: - logs = {} - metrics = self._calculate_metrics() - logs.update(metrics) - for k, v in metrics.items(): - logging.info('Current %s: %f', k, v) - super(CustomTensorBoard, self).on_epoch_begin(epoch, logs) - - def on_epoch_end(self, - epoch: int, - logs: MutableMapping[str, Any] = None) -> None: - if logs is None: - logs = {} - metrics = self._calculate_metrics() - logs.update(metrics) - super(CustomTensorBoard, self).on_epoch_end(epoch, logs) - - def _calculate_metrics(self) -> MutableMapping[str, Any]: - logs = {} - # TODO(b/149030439): disable LR reporting. - # if self._track_lr: - # logs['learning_rate'] = self._calculate_lr() - return logs - - def _calculate_lr(self) -> int: - """Calculates the learning rate given the current step.""" - return get_scalar_from_tensor( - self._get_base_optimizer()._decayed_lr(var_dtype=tf.float32)) # pylint:disable=protected-access - - def _get_base_optimizer(self) -> tf.keras.optimizers.Optimizer: - """Get the base optimizer used by the current model.""" - - optimizer = self.model.optimizer - - # The optimizer might be wrapped by another class, so unwrap it - while hasattr(optimizer, '_optimizer'): - optimizer = optimizer._optimizer # pylint:disable=protected-access - - return optimizer - - -class MovingAverageCallback(tf.keras.callbacks.Callback): - """A Callback to be used with a `MovingAverage` optimizer. - - Applies moving average weights to the model during validation time to test - and predict on the averaged weights rather than the current model weights. - Once training is complete, the model weights will be overwritten with the - averaged weights (by default). - - Attributes: - overwrite_weights_on_train_end: Whether to overwrite the current model - weights with the averaged weights from the moving average optimizer. - **kwargs: Any additional callback arguments. - """ - - def __init__(self, - overwrite_weights_on_train_end: bool = False, - **kwargs): - super(MovingAverageCallback, self).__init__(**kwargs) - self.overwrite_weights_on_train_end = overwrite_weights_on_train_end - - def set_model(self, model: tf.keras.Model): - super(MovingAverageCallback, self).set_model(model) - assert isinstance(self.model.optimizer, - optimizer_factory.MovingAverage) - self.model.optimizer.shadow_copy(self.model) - - def on_test_begin(self, logs: MutableMapping[Text, Any] = None): - self.model.optimizer.swap_weights() - - def on_test_end(self, logs: MutableMapping[Text, Any] = None): - self.model.optimizer.swap_weights() - - def on_train_end(self, logs: MutableMapping[Text, Any] = None): - if self.overwrite_weights_on_train_end: - self.model.optimizer.assign_average_vars(self.model.variables) - - -class AverageModelCheckpoint(tf.keras.callbacks.ModelCheckpoint): - """Saves and, optionally, assigns the averaged weights. - - Taken from tfa.callbacks.AverageModelCheckpoint. 
- - Attributes: - update_weights: If True, assign the moving average weights - to the model, and save them. If False, keep the old - non-averaged weights, but the saved model uses the - average weights. - See `tf.keras.callbacks.ModelCheckpoint` for the other args. - """ - - def __init__( - self, - update_weights: bool, - filepath: str, - monitor: str = 'val_loss', - verbose: int = 0, - save_best_only: bool = False, - save_weights_only: bool = False, - mode: str = 'auto', - save_freq: str = 'epoch', - **kwargs): - self.update_weights = update_weights - super().__init__( - filepath, - monitor, - verbose, - save_best_only, - save_weights_only, - mode, - save_freq, - **kwargs) - - def set_model(self, model): - if not isinstance(model.optimizer, optimizer_factory.MovingAverage): - raise TypeError( - 'AverageModelCheckpoint is only used when training' - 'with MovingAverage') - return super().set_model(model) - - def _save_model(self, epoch, logs): - assert isinstance(self.model.optimizer, optimizer_factory.MovingAverage) - - if self.update_weights: - self.model.optimizer.assign_average_vars(self.model.variables) - return super()._save_model(epoch, logs) - else: - # Note: `model.get_weights()` gives us the weights (non-ref) - # whereas `model.variables` returns references to the variables. - non_avg_weights = self.model.get_weights() - self.model.optimizer.assign_average_vars(self.model.variables) - # result is currently None, since `super._save_model` doesn't - # return anything, but this may change in the future. - result = super()._save_model(epoch, logs) - self.model.set_weights(non_avg_weights) - return result diff --git a/spaces/NbAiLab/maken-clip-image/app.py b/spaces/NbAiLab/maken-clip-image/app.py deleted file mode 100644 index 61c016c1eea50f7305a9a09ca17a5e8f4927d3d4..0000000000000000000000000000000000000000 --- a/spaces/NbAiLab/maken-clip-image/app.py +++ /dev/null @@ -1,133 +0,0 @@ -import os - -from pathlib import Path -import pandas as pd, numpy as np -from transformers import CLIPProcessor, CLIPTextModel, CLIPModel -import torch -from torch import nn -import gradio as gr -import requests -from PIL import Image, ImageFile -from urllib.request import urlretrieve -ImageFile.LOAD_TRUNCATED_IMAGES = True - -# Download sample images -urlretrieve("https://huggingface.co/spaces/NbAiLab/maken-clip-image/resolve/main/Gibraltar_Barbary_Macaque.jpg","monkey.jpg") -urlretrieve("https://huggingface.co/spaces/NbAiLab/maken-clip-image/resolve/main/buying-a-sailboat-checklist.jpg","sailboat.jpg") -urlretrieve("https://huggingface.co/spaces/NbAiLab/maken-clip-image/resolve/main/lG5mI_9Co1obw2TiY0e-oChlXfEQY3tsRaIjpYjERqs.jpg","bicycle.jpg") -sample_images = [ - ["monkey.jpg"], - ["sailboat.jpg"], - ["bicycle.jpg"], - ] - -LABELS = Path('class_names.txt').read_text().splitlines() -class_model = nn.Sequential( - nn.Conv2d(1, 32, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Conv2d(32, 64, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Conv2d(64, 128, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Flatten(), - nn.Linear(1152, 256), - nn.ReLU(), - nn.Linear(256, len(LABELS)), -) -state_dict = torch.load('pytorch_model.bin', map_location='cpu') -class_model.load_state_dict(state_dict, strict=False) -class_model.eval() - - -model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") -processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") -df = pd.read_csv('clip.csv') -embeddings_npy = np.load('clip.npy') -embeddings = np.divide(embeddings_npy, 
np.sqrt(np.sum(embeddings_npy**2, axis=1, keepdims=True))) - - -def compute_text_embeddings(list_of_strings): - inputs = processor(text=list_of_strings, return_tensors="pt", padding=True) - return model.get_text_features(**inputs) - - -def compute_image_embeddings(list_of_images): - inputs = processor(images=list_of_images, return_tensors="pt", padding=True) - return model.get_image_features(**inputs) - - -def load_image(image, same_height=False): - # im = Image.open(path) - im = Image.fromarray(np.uint8(image)) - if im.mode != 'RGB': - im = im.convert('RGB') - if same_height: - ratio = 224/im.size[1] - return im.resize((int(im.size[0]*ratio), int(im.size[1]*ratio))) - else: - ratio = 224/min(im.size) - return im.resize((int(im.size[0]*ratio), int(im.size[1]*ratio))) - - -def download_img(identifier, url): - local_path = f"{identifier}.jpg" - if not os.path.isfile(local_path): - img_data = requests.get(url).content - with open(local_path, 'wb') as handler: - handler.write(img_data) - return local_path - - -def predict(image=None, text=None, sketch=None): - if image is not None: - input_embeddings = compute_image_embeddings([load_image(image)]).detach().numpy() - topk = {"local": 100} - else: - if text: - query = text - topk = {text: 100} - else: - x = torch.tensor(sketch, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255. - with torch.no_grad(): - out = class_model(x) - probabilities = torch.nn.functional.softmax(out[0], dim=0) - values, indices = torch.topk(probabilities, 5) - query = LABELS[indices[0]] - topk = {LABELS[i]: v.item() for i, v in zip(indices, values)} - input_embeddings = compute_text_embeddings([query]).detach().numpy() - - n_results = 3 - results = np.argsort((embeddings @ input_embeddings.T)[:, 0])[-1:-n_results - 1:-1] - outputs = [download_img(df.iloc[i]['id'], df.iloc[i]['thumbnail']) for i in results] - outputs.insert(0, topk) - print(outputs) - return outputs - - -def predict_image(image): - return predict(image, None, None) - - -def predict_text(image=None, text=None, sketch=None): - return predict(None, text, None) - - -def predict_sketch(image=None, text=None, sketch=None): - return predict(None, None, image) - - -title = "Upload an image to search in the Nasjonalbiblioteket" -description = "Find images in the Nasjonalbiblioteket image collections based on images you upload" -interface = gr.Interface( - fn=predict_image, - inputs=["image"], - outputs=[gr.outputs.Label(num_top_classes=3), gr.outputs.Image(type="file"), gr.outputs.Image(type="file"), gr.outputs.Image(type="file")], - title=title, - description=description, - examples=sample_images, - #live=True -) -interface.launch(debug=True) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/utils/fasttext_multi_filter.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/utils/fasttext_multi_filter.py deleted file mode 100644 index 41b38ba5bef20cb043921ac61820db8689189a5a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/utils/fasttext_multi_filter.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -#!/bin/python - -import fasttext -from multiprocessing import Pool -import contextlib -import sys -import argparse -from functools import partial -import io - -model = None -def init(model_path): - global model - model = fasttext.load_model(model_path) - -def pred(lines): - return lines, [model.predict(line.strip())[0][0][9:] for line in lines] - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--model", type=str, required=True, - help="model to load") - parser.add_argument("--inputs", nargs="+", default=['-'], - help="input files to filter") - parser.add_argument("--langs", nargs="+", required=True, - help="lang ids of each input file") - parser.add_argument("--outputs", nargs="+", default=['-'], - help="path to save lid filtered outputs") - parser.add_argument("--num-workers", type=int, metavar="N", default=10, - help="number of processes in parallel") - args = parser.parse_args() - - assert len(args.inputs) == len(args.langs) and len(args.inputs) == len(args.outputs) - - with contextlib.ExitStack() as stack: - inputs = [ - stack.enter_context(open(input, "r", encoding="utf-8", newline="\n", errors="replace")) - if input != "-" else io.TextIOWrapper(sys.stdin.buffer, encoding='utf-8', errors="replace") - for input in args.inputs - ] - outputs = [ - stack.enter_context(open(output, "w", encoding="utf-8", newline="\n")) - if output != "-" else sys.stdout - for output in args.outputs - ] - with Pool(args.num_workers, initializer=partial(init, args.model)) as p: - skip_cnt = 0 - for lines, preds in p.imap(pred, list(zip(*inputs)), chunksize=500): - if not all(a == b for a, b in zip(preds, args.langs)): - skip_cnt += 1 - continue - for line, output_h in zip(lines, outputs): - print(line.strip(), file=output_h) - print(f"Skipped {skip_cnt} lines.") - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/wsc/wsc_task.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/wsc/wsc_task.py deleted file mode 100644 index 602ea737ed75a33fddf44dd859e999ecfce2730d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/wsc/wsc_task.py +++ /dev/null @@ -1,401 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import json -import os -import tempfile - -import numpy as np -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.data import ( - Dictionary, - IdDataset, - ListDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PadDataset, - SortDataset, - data_utils, - encoders, -) -from fairseq.tasks import LegacyFairseqTask, register_task - -from . 
import wsc_utils - - -@register_task("wsc") -class WSCTask(LegacyFairseqTask): - """Task to finetune RoBERTa for Winograd Schemas.""" - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", metavar="DIR", help="path to data directory; we load .jsonl" - ) - parser.add_argument( - "--init-token", - type=int, - default=None, - help="add token at the beginning of each batch item", - ) - - def __init__(self, args, vocab): - super().__init__(args) - self.vocab = vocab - self.mask = vocab.add_symbol("") - - self.bpe = encoders.build_bpe(args) - self.tokenizer = encoders.build_tokenizer(args) - - # hack to handle GPT-2 BPE, which includes leading spaces - if args.bpe == "gpt2": - self.leading_space = True - self.trailing_space = False - else: - self.leading_space = False - self.trailing_space = True - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert args.criterion == "wsc", "Must set --criterion=wsc" - - # load data and label dictionaries - vocab = cls.load_dictionary(os.path.join(args.data, "dict.txt")) - print("| dictionary: {} types".format(len(vocab))) - - return cls(args, vocab) - - def binarize(self, s: str, append_eos: bool = False): - if self.tokenizer is not None: - s = self.tokenizer.encode(s) - if self.bpe is not None: - s = self.bpe.encode(s) - tokens = self.vocab.encode_line( - s, - append_eos=append_eos, - add_if_not_exist=False, - ).long() - if self.args.init_token is not None: - tokens = torch.cat([tokens.new([self.args.init_token]), tokens]) - return tokens - - def binarize_with_mask(self, txt, prefix, suffix, leading_space, trailing_space): - toks = self.binarize( - prefix + leading_space + txt + trailing_space + suffix, - append_eos=True, - ) - mask = torch.zeros_like(toks, dtype=torch.bool) - mask_start = len(self.binarize(prefix)) - mask_size = len(self.binarize(leading_space + txt)) - mask[mask_start : mask_start + mask_size] = 1 - return toks, mask - - def load_dataset( - self, split, epoch=1, combine=False, data_path=None, return_only=False, **kwargs - ): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if data_path is None: - data_path = os.path.join(self.args.data, split + ".jsonl") - if not os.path.exists(data_path): - raise FileNotFoundError("Cannot find data: {}".format(data_path)) - - query_tokens = [] - query_masks = [] - query_lengths = [] - candidate_tokens = [] - candidate_masks = [] - candidate_lengths = [] - labels = [] - - for sentence, pronoun_span, query, label in wsc_utils.jsonl_iterator(data_path): - prefix = sentence[: pronoun_span.start].text - suffix = sentence[pronoun_span.end :].text_with_ws - - # spaCy spans include trailing spaces, but we need to know about - # leading spaces for the GPT-2 BPE - leading_space = ( - " " if sentence[: pronoun_span.start].text_with_ws.endswith(" ") else "" - ) - trailing_space = " " if pronoun_span.text_with_ws.endswith(" ") else "" - - # get noun phrases, excluding pronouns and anything overlapping with the query - cand_spans = wsc_utils.filter_noun_chunks( - wsc_utils.extended_noun_chunks(sentence), - exclude_pronouns=True, - exclude_query=query, - exact_match=False, - ) - - if query is not None: - query_toks, query_mask = self.binarize_with_mask( - query, prefix, suffix, leading_space, trailing_space - ) - query_len = len(query_toks) - else: - query_toks, query_mask, query_len = None, None, 0 - - query_tokens.append(query_toks) - query_masks.append(query_mask) - query_lengths.append(query_len) - - cand_toks, cand_masks = [], [] - for cand_span in cand_spans: - toks, mask = self.binarize_with_mask( - cand_span.text, - prefix, - suffix, - leading_space, - trailing_space, - ) - cand_toks.append(toks) - cand_masks.append(mask) - - # collate candidates - cand_toks = data_utils.collate_tokens(cand_toks, pad_idx=self.vocab.pad()) - cand_masks = data_utils.collate_tokens(cand_masks, pad_idx=0) - assert cand_toks.size() == cand_masks.size() - - candidate_tokens.append(cand_toks) - candidate_masks.append(cand_masks) - candidate_lengths.append(cand_toks.size(1)) - - labels.append(label) - - query_lengths = np.array(query_lengths) - query_tokens = ListDataset(query_tokens, query_lengths) - query_masks = ListDataset(query_masks, query_lengths) - - candidate_lengths = np.array(candidate_lengths) - candidate_tokens = ListDataset(candidate_tokens, candidate_lengths) - candidate_masks = ListDataset(candidate_masks, candidate_lengths) - - labels = ListDataset(labels, [1] * len(labels)) - - dataset = { - "id": IdDataset(), - "query_tokens": query_tokens, - "query_masks": query_masks, - "candidate_tokens": candidate_tokens, - "candidate_masks": candidate_masks, - "labels": labels, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(query_tokens, reduce=True), - } - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[query_lengths], - ) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(query_tokens)) - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - if return_only: - return dataset - - self.datasets[split] = dataset - return self.datasets[split] - - def build_dataset_for_inference(self, sample_json): - with tempfile.NamedTemporaryFile(buffering=0) as h: - h.write((json.dumps(sample_json) + "\n").encode("utf-8")) - dataset = self.load_dataset( - "disambiguate_pronoun", - data_path=h.name, - return_only=True, - ) - return dataset - - def disambiguate_pronoun(self, model, sentence, use_cuda=False): - sample_json = wsc_utils.convert_sentence_to_json(sentence) - dataset = 
self.build_dataset_for_inference(sample_json) - sample = dataset.collater([dataset[0]]) - if use_cuda: - sample = utils.move_to_cuda(sample) - - def get_masked_input(tokens, mask): - masked_tokens = tokens.clone() - masked_tokens[mask.bool()] = self.mask - return masked_tokens - - def get_lprobs(tokens, mask): - logits, _ = model(src_tokens=get_masked_input(tokens, mask)) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float) - scores = lprobs.gather(2, tokens.unsqueeze(-1)).squeeze(-1) - mask = mask.type_as(scores) - scores = (scores * mask).sum(dim=-1) / mask.sum(dim=-1) - return scores - - cand_lprobs = get_lprobs( - sample["candidate_tokens"][0], - sample["candidate_masks"][0], - ) - if sample["query_tokens"][0] is not None: - query_lprobs = get_lprobs( - sample["query_tokens"][0].unsqueeze(0), - sample["query_masks"][0].unsqueeze(0), - ) - return (query_lprobs >= cand_lprobs).all().item() == 1 - else: - best_idx = cand_lprobs.argmax().item() - full_cand = sample["candidate_tokens"][0][best_idx] - mask = sample["candidate_masks"][0][best_idx] - toks = full_cand[mask.bool()] - return self.bpe.decode(self.source_dictionary.string(toks)).strip() - - @property - def source_dictionary(self): - return self.vocab - - @property - def target_dictionary(self): - return self.vocab - - -@register_task("winogrande") -class WinograndeTask(WSCTask): - """ - Task for WinoGrande dataset. Efficient implementation for Winograd schema - tasks with exactly two candidates, one of which is correct. - """ - - @classmethod - def setup_task(cls, args, **kwargs): - assert args.criterion == "winogrande", "Must set --criterion=winogrande" - - # load data and label dictionaries - vocab = cls.load_dictionary(os.path.join(args.data, "dict.txt")) - print("| dictionary: {} types".format(len(vocab))) - - return cls(args, vocab) - - def load_dataset( - self, split, epoch=1, combine=False, data_path=None, return_only=False, **kwargs - ): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if data_path is None: - data_path = os.path.join(self.args.data, split + ".jsonl") - if not os.path.exists(data_path): - raise FileNotFoundError("Cannot find data: {}".format(data_path)) - - query_tokens = [] - query_masks = [] - query_lengths = [] - candidate_tokens = [] - candidate_masks = [] - candidate_lengths = [] - - itr = wsc_utils.winogrande_jsonl_iterator(data_path, eval=(split == "test")) - - for sample in itr: - sentence, pronoun_span, query, cand_text = sample - prefix = sentence[: pronoun_span[0]].rstrip() - suffix = sentence[pronoun_span[1] :] - - leading_space = " " if sentence[: pronoun_span[0]].endswith(" ") else "" - trailing_space = "" - - if query is not None: - query_toks, query_mask = self.binarize_with_mask( - query, - prefix, - suffix, - leading_space, - trailing_space, - ) - query_len = len(query_toks) - else: - query_toks, query_mask, query_len = None, None, 0 - - query_tokens.append(query_toks) - query_masks.append(query_mask) - query_lengths.append(query_len) - - cand_toks, cand_mask = self.binarize_with_mask( - cand_text, - prefix, - suffix, - leading_space, - trailing_space, - ) - - candidate_tokens.append(cand_toks) - candidate_masks.append(cand_mask) - candidate_lengths.append(cand_toks.size(0)) - - query_lengths = np.array(query_lengths) - - def get_pad_dataset_fn(tokens, length, pad_idx): - return PadDataset( - ListDataset(tokens, length), - pad_idx=pad_idx, - left_pad=False, - ) - - query_tokens = get_pad_dataset_fn(query_tokens, query_lengths, self.vocab.pad()) - query_masks = get_pad_dataset_fn(query_masks, query_lengths, 0) - - candidate_lengths = np.array(candidate_lengths) - candidate_tokens = get_pad_dataset_fn( - candidate_tokens, candidate_lengths, self.vocab.pad() - ) - candidate_masks = get_pad_dataset_fn(candidate_masks, candidate_lengths, 0) - - dataset = { - "id": IdDataset(), - "query_tokens": query_tokens, - "query_masks": query_masks, - "candidate_tokens": candidate_tokens, - "candidate_masks": candidate_masks, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(query_tokens, reduce=True), - } - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[query_lengths], - ) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(query_tokens)) - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - if return_only: - return dataset - - self.datasets[split] = dataset - return self.datasets[split] diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/wav2vec/wav2vec.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/wav2vec/wav2vec.py deleted file mode 100644 index af6604da10f504baabff50bf14a6eb2214bffef3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/wav2vec/wav2vec.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field -import logging -import math -from typing import Optional, Tuple -from omegaconf import II -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GumbelVectorQuantizer, - KmeansVectorQuantizer, - TransposeLast, -) -from fairseq.tasks import FairseqTask -from fairseq.utils import buffered_arange - - -logger = logging.getLogger(__name__) - - -AGGREGATOR_CHOICES = ChoiceEnum(["cnn", "gru"]) -PROJECT_FEATURES_CHOICES = ChoiceEnum(["none", "same", "new"]) -ACTIVATION_CHOICES = ChoiceEnum(["relu", "gelu"]) -VQ_TYPE_CHOICES = ChoiceEnum(["none", "gumbel", "kmeans"]) - - -@dataclass -class Wav2VecConfig(FairseqDataclass): - prediction_steps: int = field( - default=12, metadata={"help": "number of steps ahead to predict"} - ) - sample_distance: Optional[int] = field( - default=None, - metadata={ - "help": "sample distance from target. does not work properly with cross-sampling" - }, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "num of cross sampled negatives"} - ) - num_negatives: int = field( - default=10, metadata={"help": "num of sampled negatives"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]", - metadata={ - "help": "convolutional feature extraction layers [(dim, kernel_size, stride), ...]" - }, - ) - conv_aggregator_layers: str = field( - default="[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]", - metadata={ - "help": "convolutional aggregator layers [(dim, kernel_size, stride), ...]" - }, - ) - dropout: float = field( - default=0.0, metadata={"help": "dropout to apply within the model"} - ) - dropout_features: float = field( - default=0.0, metadata={"help": "dropout to apply to the features"} - ) - dropout_agg: float = field( - default=0.0, metadata={"help": "dropout to apply after aggregation step"} - ) - aggregator: AGGREGATOR_CHOICES = field( - default="cnn", metadata={"help": "type of aggregator to use"} - ) - gru_dim: int = field(default=512, metadata={"help": "GRU dimensionality"}) - no_conv_bias: bool = field( - default=False, metadata={"help": "if set, does not learn bias for conv layers"} - ) - agg_zero_pad: bool = field( - default=False, - metadata={"help": "if set, zero pads in aggregator instead of repl pad"}, - ) - skip_connections_feat: bool = field( - default=False, - metadata={"help": "if set, adds skip connections to the feature extractor"}, - ) - skip_connections_agg: bool = field( - default=True, - metadata={"help": "if set, adds skip connections to the aggregator"}, - ) - residual_scale: float = field( - default=0.5, metadata={"help": "scales residual by sqrt(value)"} - ) - log_compression: bool = field( - default=True, - metadata={"help": "if set, adds a log compression to feature extractor"}, - ) - balanced_classes: bool = field( - default=False, - metadata={"help": "if set, loss is scaled to balance for number of negatives"}, - ) - project_features: PROJECT_FEATURES_CHOICES = field( - default="none", - metadata={ - "help": "if not none, features are projected using the (same or new) aggregator" - }, - ) - non_affine_group_norm: bool = field( - default=False, 
metadata={"help": "if set, group norm is not affine"} - ) - offset: str = field( - default="auto", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - activation: ACTIVATION_CHOICES = field( - default="relu", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - vq_type: VQ_TYPE_CHOICES = field( - default="none", metadata={"help": "which type of quantizer to use"} - ) - vq_vars: int = field( - default=320, - metadata={"help": "project to this many vector quantized variables per group"}, - ) - vq_groups: int = field( - default=2, metadata={"help": "number of groups of latent variables"} - ) - vq_dim: int = field( - default=0, - metadata={ - "help": "uses this dimensionality for quantized vectors. 0 to use model dim // groups" - }, - ) - vq_depth: int = field( - default=1, metadata={"help": "number of layers for vq weight projection"} - ) - combine_groups: bool = field( - default=False, metadata={"help": "if set, variables are shared among groups"} - ) - vq_temp: Tuple[float, float, float] = field( - default=(2.0, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling with gumbel softmax. should be a tuple of 3 values (start, end, decay)" - }, - ) - vq_gamma: float = field( - default=0.25, - metadata={"help": "gamma parameter for kmeans style vector quantization"}, - ) - infonce: bool = II("criterion.infonce") - - -@register_model("wav2vec", dataclass=Wav2VecConfig) -class Wav2VecModel(BaseFairseqModel): - @classmethod - def build_model(cls, cfg: Wav2VecConfig, task: FairseqTask): - """Build a new model instance.""" - - model = Wav2VecModel(cfg) - logger.info(model) - return model - - def __init__(self, cfg: Wav2VecConfig): - super().__init__() - - self.prediction_steps = cfg.prediction_steps - offset = cfg.offset - - if cfg.activation == "relu": - activation = nn.ReLU() - elif cfg.activation == "gelu": - activation = nn.GELU() - else: - raise Exception("unknown activation " + cfg.activation) - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - log_compression=cfg.log_compression, - skip_connections=cfg.skip_connections_feat, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - activation=activation, - ) - embed = feature_enc_layers[-1][0] - - self.vector_quantizer = None - if cfg.vq_type == "gumbel": - self.vector_quantizer = GumbelVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - temp=cfg.vq_temp, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - activation=activation, - weight_proj_depth=cfg.vq_depth, - weight_proj_factor=2, - ) - elif cfg.vq_type == "kmeans": - self.vector_quantizer = KmeansVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - gamma=cfg.vq_gamma, - ) - else: - assert ( - cfg.vq_type == "none" or cfg.vq_type is None - ), "Unknown quantizer type" - - if cfg.offset == "auto": - jin = 0 - rin = 0 - for _, k, stride in feature_enc_layers: - if rin == 0: - rin = k - rin = rin + (k - 1) * jin - if jin == 0: - jin = stride - else: - jin *= stride - offset = math.ceil(rin / jin) - - offset = int(offset) - - def make_aggregator(): - if 
cfg.aggregator == "cnn": - agg_layers = eval(cfg.conv_aggregator_layers) - agg_dim = agg_layers[-1][0] - feature_aggregator = ConvAggegator( - conv_layers=agg_layers, - embed=embed, - dropout=cfg.dropout, - skip_connections=cfg.skip_connections_agg, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - conv_bias=not cfg.no_conv_bias, - zero_pad=cfg.agg_zero_pad, - activation=activation, - ) - elif cfg.aggregator == "gru": - agg_dim = cfg.gru_dim - feature_aggregator = nn.Sequential( - TransposeLast(), - nn.GRU( - input_size=embed, - hidden_size=agg_dim, - num_layers=1, - dropout=cfg.dropout, - ), - TransposeLast(deconstruct_idx=0), - ) - else: - raise Exception("unknown aggregator type " + cfg.aggregator) - - return feature_aggregator, agg_dim - - self.feature_aggregator, agg_dim = make_aggregator() - - self.wav2vec_predictions = Wav2VecPredictionsModel( - in_dim=agg_dim, - out_dim=embed, - prediction_steps=cfg.prediction_steps, - n_negatives=cfg.num_negatives, - cross_sample_negatives=cfg.cross_sample_negatives, - sample_distance=cfg.sample_distance, - dropout=cfg.dropout, - offset=offset, - balanced_classes=cfg.balanced_classes, - infonce=cfg.infonce, - ) - - self.dropout_feats = nn.Dropout(p=cfg.dropout_features) - self.dropout_agg = nn.Dropout(p=cfg.dropout_agg) - - if cfg.project_features == "none": - self.project_features = None - elif cfg.project_features == "same": - self.project_features = self.feature_aggregator - elif cfg.project_features == "new": - self.project_features, _ = make_aggregator() - - def forward(self, source): - result = {} - - features = self.feature_extractor(source) - if self.vector_quantizer: - q_res = self.vector_quantizer(features) - features = q_res["x"] - for k in q_res.keys(): - if k != "x": - result[k] = q_res[k] - - x = self.dropout_feats(features) - x = self.feature_aggregator(x) - x = self.dropout_agg(x) - - if self.project_features is not None: - features = self.project_features(features) - x, targets = self.wav2vec_predictions(x, features) - result["cpc_logits"] = x - result["cpc_targets"] = targets - - return result - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - def max_positions(self): - """Maximum length supported by the model.""" - return sys.maxsize - - def get_logits(self, net_output): - logits = net_output["cpc_logits"] - return logits - - def get_targets(self, sample, net_output): - t = net_output["cpc_targets"] - if isinstance(t, tuple): - t = t[0] - return t.contiguous() - - def get_target_weights(self, targets, net_output): - targets = net_output["cpc_targets"] - if isinstance(targets, tuple) and targets[-1] is not None: - return targets[-1] - return None - - def get_extra_losses(self, net_output): - loss = None - if "prob_perplexity" in net_output: - loss = net_output["num_vars"] - net_output["prob_perplexity"] - elif "kmeans_loss" in net_output: - loss = net_output["kmeans_loss"] - - return loss - - -def norm_block(is_layer_norm, dim, affine=True): - if is_layer_norm: - mod = nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=affine), - TransposeLast(), - ) - else: - mod = Fp32GroupNorm(1, dim, affine=affine) - - return mod - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers, - dropout, - log_compression, - skip_connections, - residual_scale, - non_affine_group_norm, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - return nn.Sequential( - 
nn.Conv1d(n_in, n_out, k, stride=stride, bias=False), - nn.Dropout(p=dropout), - norm_block( - is_layer_norm=False, dim=n_out, affine=not non_affine_group_norm - ), - activation, - ) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for dim, k, stride in conv_layers: - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - - self.log_compression = log_compression - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - residual = x - x = conv(x) - if self.skip_connections and x.size(1) == residual.size(1): - tsz = x.size(2) - r_tsz = residual.size(2) - residual = residual[..., :: r_tsz // tsz][..., :tsz] - x = (x + residual) * self.residual_scale - - if self.log_compression: - x = x.abs() - x = x + 1 - x = x.log() - - return x - - -class ZeroPad1d(nn.Module): - def __init__(self, pad_left, pad_right): - super().__init__() - self.pad_left = pad_left - self.pad_right = pad_right - - def forward(self, x): - return F.pad(x, (self.pad_left, self.pad_right)) - - -class ConvAggegator(nn.Module): - def __init__( - self, - conv_layers, - embed, - dropout, - skip_connections, - residual_scale, - non_affine_group_norm, - conv_bias, - zero_pad, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - # padding dims only really make sense for stride = 1 - ka = k // 2 - kb = ka - 1 if k % 2 == 0 else ka - - pad = ( - ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0)) - ) - - return nn.Sequential( - pad, - nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias), - nn.Dropout(p=dropout), - norm_block(False, n_out, affine=not non_affine_group_norm), - activation, - ) - - in_d = embed - self.conv_layers = nn.ModuleList() - self.residual_proj = nn.ModuleList() - for dim, k, stride in conv_layers: - if in_d != dim and skip_connections: - self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False)) - else: - self.residual_proj.append(None) - - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - self.conv_layers = nn.Sequential(*self.conv_layers) - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - for rproj, conv in zip(self.residual_proj, self.conv_layers): - residual = x - x = conv(x) - if self.skip_connections: - if rproj is not None: - residual = rproj(residual) - x = (x + residual) * self.residual_scale - return x - - -class Wav2VecPredictionsModel(nn.Module): - def __init__( - self, - in_dim, - out_dim, - prediction_steps, - n_negatives, - cross_sample_negatives, - sample_distance, - dropout, - offset, - balanced_classes, - infonce, - ): - super().__init__() - - self.n_negatives = n_negatives - self.cross_sample_negatives = cross_sample_negatives - self.sample_distance = sample_distance - self.project_to_steps = nn.ConvTranspose2d( - in_dim, out_dim, (1, prediction_steps) - ) - self.dropout = nn.Dropout(p=dropout) - self.offset = offset - self.balanced_classes = balanced_classes - self.infonce = infonce - - def sample_negatives(self, y): - bsz, fsz, tsz = y.shape - - y = y.transpose(0, 1) # BCT -> CBT - y = y.contiguous().view(fsz, -1) # CBT => C(BxT) - - cross_high = tsz * bsz - high = tsz if self.sample_distance is None else min(tsz, self.sample_distance) - assert high > 1 - - neg_idxs = torch.randint(low=0, high=high, size=(bsz, self.n_negatives * tsz)) - - with torch.no_grad(): - if self.n_negatives > 0: - tszs = ( - 
buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * tsz) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * tsz), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[..., neg_idxs.view(-1)] - negs = negs.view( - fsz, bsz, self.n_negatives + self.cross_sample_negatives, tsz - ).permute( - 2, 1, 0, 3 - ) # to NxBxCxT - - return negs - - def forward(self, x, y): - - x = x.unsqueeze(-1) - x = self.project_to_steps(x) # BxCxTxS - x = self.dropout(x) - - negatives = self.sample_negatives(y) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) # Copies x B x C x T - - copies = targets.size(0) - bsz, dim, tsz, steps = x.shape - steps = min(steps, tsz - self.offset) - - predictions = x.new( - bsz * copies * (tsz - self.offset + 1) * steps - - ((steps + 1) * steps // 2) * copies * bsz - ) - if self.infonce: - labels = predictions.new_full( - (predictions.shape[0] // copies,), 0, dtype=torch.long - ) - else: - labels = torch.zeros_like(predictions) - weights = ( - torch.full_like(labels, 1 / self.n_negatives) - if self.balanced_classes and not self.infonce - else None - ) - - start = end = 0 - for i in range(steps): - offset = i + self.offset - end = start + (tsz - offset) * bsz * copies - if self.infonce: - predictions[start:end] = torch.einsum( - "bct,nbct->tbn", x[..., :-offset, i], targets[..., offset:] - ).flatten() - else: - pos_num = (end - start) // copies - predictions[start:end] = torch.einsum( - "bct,nbct->nbt", x[..., :-offset, i], targets[..., offset:] - ).flatten() - labels[start : start + pos_num] = 1.0 - if weights is not None: - weights[start : start + pos_num] = 1.0 - start = end - assert end == predictions.numel(), "{} != {}".format(end, predictions.numel()) - - if self.infonce: - predictions = predictions.view(-1, copies) - else: - if weights is not None: - labels = (labels, weights) - - return predictions, labels diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_lm.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_lm.py deleted file mode 100644 index e80948d78b02561cbd09d72c319222105f41f6bb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_score_lm.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import os - -from fairseq import options - -from examples.noisychannel import rerank_options, rerank_utils - - -def score_lm(args): - using_nbest = args.nbest_list is not None - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - if using_nbest: - print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, bpe_symbol=args.post_process, nbest=using_nbest - ) - - if args.language_model is not None: - lm_score_file = rerank_utils.rescore_file_name( - pre_gen, args.prefix_len, args.lm_name, lm_file=True - ) - - if args.language_model is not None and not os.path.isfile(lm_score_file): - print("STEP 4.5: language modeling for P(T)") - if args.lm_bpe_code is None: - bpe_status = "no bpe" - elif args.lm_bpe_code == "shared": - bpe_status = "shared" - else: - bpe_status = "different" - - rerank_utils.lm_scoring( - lm_preprocessed_dir, - bpe_status, - gen_output, - pre_gen, - args.lm_dict, - args.lm_name, - args.language_model, - args.lm_bpe_code, - 128, - lm_score_file, - args.target_lang, - args.source_lang, - prefix_len=args.prefix_len, - ) - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - score_lm(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/infer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/infer.py deleted file mode 100644 index 3fb67151e0dc425e02d090a62b1d83e6039e6ccb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/infer.py +++ /dev/null @@ -1,471 +0,0 @@ -#!/usr/bin/env python -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import ast -import hashlib -import logging -import os -import shutil -import sys -from dataclasses import dataclass, field, is_dataclass -from pathlib import Path -from typing import Any, Dict, List, Optional, Tuple, Union - -import editdistance -import torch -import torch.distributed as dist -from examples.speech_recognition.new.decoders.decoder_config import ( - DecoderConfig, - FlashlightDecoderConfig, -) -from examples.speech_recognition.new.decoders.decoder import Decoder -from fairseq import checkpoint_utils, distributed_utils, progress_bar, tasks, utils -from fairseq.data.data_utils import post_process -from fairseq.dataclass.configs import ( - CheckpointConfig, - CommonConfig, - CommonEvalConfig, - DatasetConfig, - DistributedTrainingConfig, - FairseqDataclass, -) -from fairseq.logging.meters import StopwatchMeter, TimeMeter -from fairseq.logging.progress_bar import BaseProgressBar -from fairseq.models.fairseq_model import FairseqModel -from omegaconf import OmegaConf - -import hydra -from hydra.core.config_store import ConfigStore - -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -config_path = Path(__file__).resolve().parent / "conf" - - -@dataclass -class DecodingConfig(DecoderConfig, FlashlightDecoderConfig): - unique_wer_file: bool = field( - default=False, - metadata={"help": "If set, use a unique file for storing WER"}, - ) - results_path: Optional[str] = field( - default=None, - metadata={ - "help": "If set, write hypothesis and reference sentences into this directory" - }, - ) - - -@dataclass -class InferConfig(FairseqDataclass): - task: Any = None - decoding: DecodingConfig = DecodingConfig() - common: CommonConfig = CommonConfig() - common_eval: CommonEvalConfig = CommonEvalConfig() - checkpoint: CheckpointConfig = CheckpointConfig() - distributed_training: DistributedTrainingConfig = DistributedTrainingConfig() - dataset: DatasetConfig = DatasetConfig() - is_ax: bool = field( - default=False, - metadata={ - "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume" - }, - ) - - -def reset_logging(): - root = logging.getLogger() - for handler in root.handlers: - root.removeHandler(handler) - root.setLevel(os.environ.get("LOGLEVEL", "INFO").upper()) - handler = logging.StreamHandler(sys.stdout) - handler.setFormatter( - logging.Formatter( - fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - ) - ) - root.addHandler(handler) - - -class InferenceProcessor: - cfg: InferConfig - - def __init__(self, cfg: InferConfig) -> None: - self.cfg = cfg - self.task = tasks.setup_task(cfg.task) - - models, saved_cfg = self.load_model_ensemble() - self.models = models - self.saved_cfg = saved_cfg - self.tgt_dict = self.task.target_dictionary - - self.task.load_dataset( - self.cfg.dataset.gen_subset, - task_cfg=saved_cfg.task, - ) - self.generator = Decoder(cfg.decoding, self.tgt_dict) - self.gen_timer = StopwatchMeter() - self.wps_meter = TimeMeter() - self.num_sentences = 0 - self.total_errors = 0 - self.total_length = 0 - - self.hypo_words_file = None - self.hypo_units_file = None - self.ref_words_file = None - self.ref_units_file = None - - self.progress_bar = self.build_progress_bar() - - def __enter__(self) -> "InferenceProcessor": - if self.cfg.decoding.results_path is not None: - self.hypo_words_file = self.get_res_file("hypo.word") - self.hypo_units_file = self.get_res_file("hypo.units") - self.ref_words_file = 
self.get_res_file("ref.word") - self.ref_units_file = self.get_res_file("ref.units") - return self - - def __exit__(self, *exc) -> bool: - if self.cfg.decoding.results_path is not None: - self.hypo_words_file.close() - self.hypo_units_file.close() - self.ref_words_file.close() - self.ref_units_file.close() - return False - - def __iter__(self) -> Any: - for sample in self.progress_bar: - if not self.cfg.common.cpu: - sample = utils.move_to_cuda(sample) - - # Happens on the last batch. - if "net_input" not in sample: - continue - yield sample - - def log(self, *args, **kwargs): - self.progress_bar.log(*args, **kwargs) - - def print(self, *args, **kwargs): - self.progress_bar.print(*args, **kwargs) - - def get_res_file(self, fname: str) -> None: - fname = os.path.join(self.cfg.decoding.results_path, fname) - if self.data_parallel_world_size > 1: - fname = f"{fname}.{self.data_parallel_rank}" - return open(fname, "w", buffering=1) - - def merge_shards(self) -> None: - """Merges all shard files into shard 0, then removes shard suffix.""" - - shard_id = self.data_parallel_rank - num_shards = self.data_parallel_world_size - - if self.data_parallel_world_size > 1: - - def merge_shards_with_root(fname: str) -> None: - fname = os.path.join(self.cfg.decoding.results_path, fname) - logger.info("Merging %s on shard %d", fname, shard_id) - base_fpath = Path(f"{fname}.0") - with open(base_fpath, "a") as out_file: - for s in range(1, num_shards): - shard_fpath = Path(f"{fname}.{s}") - with open(shard_fpath, "r") as in_file: - for line in in_file: - out_file.write(line) - shard_fpath.unlink() - shutil.move(f"{fname}.0", fname) - - dist.barrier() # ensure all shards finished writing - if shard_id == (0 % num_shards): - merge_shards_with_root("hypo.word") - if shard_id == (1 % num_shards): - merge_shards_with_root("hypo.units") - if shard_id == (2 % num_shards): - merge_shards_with_root("ref.word") - if shard_id == (3 % num_shards): - merge_shards_with_root("ref.units") - dist.barrier() - - def optimize_model(self, model: FairseqModel) -> None: - model.make_generation_fast_() - if self.cfg.common.fp16: - model.half() - if not self.cfg.common.cpu: - model.cuda() - - def load_model_ensemble(self) -> Tuple[List[FairseqModel], FairseqDataclass]: - arg_overrides = ast.literal_eval(self.cfg.common_eval.model_overrides) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - utils.split_paths(self.cfg.common_eval.path, separator="\\"), - arg_overrides=arg_overrides, - task=self.task, - suffix=self.cfg.checkpoint.checkpoint_suffix, - strict=(self.cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=self.cfg.checkpoint.checkpoint_shard_count, - ) - for model in models: - self.optimize_model(model) - return models, saved_cfg - - def get_dataset_itr(self, disable_iterator_cache: bool = False) -> None: - return self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.gen_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ).next_epoch_itr(shuffle=False) - - def build_progress_bar( - self, - epoch: 
Optional[int] = None, - prefix: Optional[str] = None, - default_log_format: str = "tqdm", - ) -> BaseProgressBar: - return progress_bar.progress_bar( - iterator=self.get_dataset_itr(), - log_format=self.cfg.common.log_format, - log_interval=self.cfg.common.log_interval, - epoch=epoch, - prefix=prefix, - tensorboard_logdir=self.cfg.common.tensorboard_logdir, - default_log_format=default_log_format, - ) - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - def process_sentence( - self, - sample: Dict[str, Any], - hypo: Dict[str, Any], - sid: int, - batch_id: int, - ) -> Tuple[int, int]: - speaker = None # Speaker can't be parsed from dataset. - - if "target_label" in sample: - toks = sample["target_label"] - else: - toks = sample["target"] - toks = toks[batch_id, :] - - # Processes hypothesis. - hyp_pieces = self.tgt_dict.string(hypo["tokens"].int().cpu()) - if "words" in hypo: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, self.cfg.common_eval.post_process) - - # Processes target. - target_tokens = utils.strip_pad(toks, self.tgt_dict.pad()) - tgt_pieces = self.tgt_dict.string(target_tokens.int().cpu()) - tgt_words = post_process(tgt_pieces, self.cfg.common_eval.post_process) - - if self.cfg.decoding.results_path is not None: - print(f"{hyp_pieces} ({speaker}-{sid})", file=self.hypo_units_file) - print(f"{hyp_words} ({speaker}-{sid})", file=self.hypo_words_file) - print(f"{tgt_pieces} ({speaker}-{sid})", file=self.ref_units_file) - print(f"{tgt_words} ({speaker}-{sid})", file=self.ref_words_file) - - if not self.cfg.common_eval.quiet: - logger.info(f"HYPO: {hyp_words}") - logger.info(f"REF: {tgt_words}") - logger.info("---------------------") - - hyp_words, tgt_words = hyp_words.split(), tgt_words.split() - - return editdistance.eval(hyp_words, tgt_words), len(tgt_words) - - def process_sample(self, sample: Dict[str, Any]) -> None: - self.gen_timer.start() - hypos = self.task.inference_step( - generator=self.generator, - models=self.models, - sample=sample, - ) - num_generated_tokens = sum(len(h[0]["tokens"]) for h in hypos) - self.gen_timer.stop(num_generated_tokens) - self.wps_meter.update(num_generated_tokens) - - for batch_id, sample_id in enumerate(sample["id"].tolist()): - errs, length = self.process_sentence( - sample=sample, - sid=sample_id, - batch_id=batch_id, - hypo=hypos[batch_id][0], - ) - self.total_errors += errs - self.total_length += length - - self.log({"wps": round(self.wps_meter.avg)}) - if "nsentences" in sample: - self.num_sentences += sample["nsentences"] - else: - self.num_sentences += sample["id"].numel() - - def log_generation_time(self) -> None: - logger.info( - "Processed %d sentences (%d tokens) in %.1fs %.2f " - "sentences per second, %.2f tokens per second)", - self.num_sentences, - self.gen_timer.n, - self.gen_timer.sum, - self.num_sentences / self.gen_timer.sum, - 1.0 / self.gen_timer.avg, - ) - - -def parse_wer(wer_file: Path) -> float: - with open(wer_file, "r") as f: - return float(f.readline().strip().split(" ")[1]) - - -def get_wer_file(cfg: InferConfig) -> Path: - """Hashes the decoding parameters to a unique file ID.""" - base_path = "wer" - if cfg.decoding.results_path is not None: - base_path = 
os.path.join(cfg.decoding.results_path, base_path) - - if cfg.decoding.unique_wer_file: - yaml_str = OmegaConf.to_yaml(cfg.decoding) - fid = int(hashlib.md5(yaml_str.encode("utf-8")).hexdigest(), 16) - return Path(f"{base_path}.{fid % 1000000}") - else: - return Path(base_path) - - -def main(cfg: InferConfig) -> float: - """Entry point for main processing logic. - - Args: - cfg: The inferance configuration to use. - wer: Optional shared memory pointer for returning the WER. If not None, - the final WER value will be written here instead of being returned. - - Returns: - The final WER if `wer` is None, otherwise None. - """ - - yaml_str, wer_file = OmegaConf.to_yaml(cfg.decoding), get_wer_file(cfg) - - # Validates the provided configuration. - if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None: - cfg.dataset.max_tokens = 4000000 - if not cfg.common.cpu and not torch.cuda.is_available(): - raise ValueError("CUDA not found; set `cpu=True` to run without CUDA") - - with InferenceProcessor(cfg) as processor: - for sample in processor: - processor.process_sample(sample) - - processor.log_generation_time() - - if cfg.decoding.results_path is not None: - processor.merge_shards() - - errs_t, leng_t = processor.total_errors, processor.total_length - - if cfg.common.cpu: - logger.warning("Merging WER requires CUDA.") - elif processor.data_parallel_world_size > 1: - stats = torch.LongTensor([errs_t, leng_t]).cuda() - dist.all_reduce(stats, op=dist.ReduceOp.SUM) - errs_t, leng_t = stats[0].item(), stats[1].item() - - wer = errs_t * 100.0 / leng_t - - if distributed_utils.is_master(cfg.distributed_training): - with open(wer_file, "w") as f: - f.write( - ( - f"WER: {wer}\n" - f"err / num_ref_words = {errs_t} / {leng_t}\n\n" - f"{yaml_str}" - ) - ) - - return wer - - -@hydra.main(config_path=config_path, config_name="infer") -def hydra_main(cfg: InferConfig) -> Union[float, Tuple[float, Optional[float]]]: - container = OmegaConf.to_container(cfg, resolve=True, enum_to_str=True) - cfg = OmegaConf.create(container) - OmegaConf.set_struct(cfg, True) - - if cfg.common.reset_logging: - reset_logging() - - # logger.info("Config:\n%s", OmegaConf.to_yaml(cfg)) - wer = float("inf") - - try: - if cfg.common.profile: - with torch.cuda.profiler.profile(): - with torch.autograd.profiler.emit_nvtx(): - distributed_utils.call_main(cfg, main) - else: - distributed_utils.call_main(cfg, main) - - wer = parse_wer(get_wer_file(cfg)) - except BaseException as e: # pylint: disable=broad-except - if not cfg.common.suppress_crashes: - raise - else: - logger.error("Crashed! 
%s", str(e)) - - logger.info("Word error rate: %.4f", wer) - if cfg.is_ax: - return wer, None - - return wer - - -def cli_main() -> None: - try: - from hydra._internal.utils import ( - get_args, - ) # pylint: disable=import-outside-toplevel - - cfg_name = get_args().config_name or "infer" - except ImportError: - logger.warning("Failed to get config name from hydra args") - cfg_name = "infer" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=InferConfig) - - for k in InferConfig.__dataclass_fields__: - if is_dataclass(InferConfig.__dataclass_fields__[k].type): - v = InferConfig.__dataclass_fields__[k].default - cs.store(name=k, node=v) - - hydra_main() # pylint: disable=no-value-for-parameter - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/distributed_fairseq_model.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/distributed_fairseq_model.py deleted file mode 100644 index 5eda2276404ca686be124901674ddfe36bd6dfd1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/distributed_fairseq_model.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -import torch -import torch.nn as nn -from torch.nn.parallel import DistributedDataParallel - -from fairseq.distributed import ( - DistributedTimeoutWrapper, - LegacyDistributedDataParallel, - ModuleProxyWrapper, - TPUDistributedDataParallel, -) - - -logger = logging.getLogger(__name__) - - -_GOSSIP_DISABLED = False -try: - import gossip -except ImportError: - _GOSSIP_DISABLED = True - - -def DistributedFairseqModel(args, model, process_group, device): - """ - Wrap a *model* to support distributed data parallel training. - - This is similar to the built-in DistributedDataParallel, but allows - additional configuration of the DistributedDataParallel class to - use, and also provides easier access to the wrapped model by - forwarding requests for missing attributes to the wrapped model. - - Args: - args (argparse.Namespace): fairseq args - model (BaseFairseqModel): model to wrap - process_group: the c10d process group to be used for distributed data - parallel all-reduction. 
- device: device to move model to - """ - assert isinstance(model, nn.Module) - if args.tpu: - wrapped_model = TPUDistributedDataParallel( - module=model.to(device), - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"c10d", "pytorch_ddp"}: - wrapped_model = DistributedDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - bucket_cap_mb=args.bucket_cap_mb, - process_group=process_group, - find_unused_parameters=args.find_unused_parameters, - gradient_as_bucket_view=args.gradient_as_bucket_view, - ) - if args.ddp_comm_hook == "fp16": - logger.info("enable fp16 communication hook in DDP") - try: - from torch.distributed.algorithms.ddp_comm_hooks import ( - register_ddp_comm_hook, - DDPCommHookType, - ) - except: - logger.error( - "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version" - ) - raise - - register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"no_c10d", "legacy_ddp"}: - wrapped_model = LegacyDistributedDataParallel( - module=model.to(device), - buffer_size=2 ** 28, - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "slow_mo": - if _GOSSIP_DISABLED: - raise ImportError( - "Cannot find gossip library. Please install from: " - "github.com/facebookresearch/stochastic_gradient_push" - ) - - # The values of slowmo_momentum below were obtained by tuning on the - # En-De 16 dataset by training the transformer_wmt_en_de_large model - if args.slowmo_momentum is None: - if args.distributed_world_size <= 16: - args.slowmo_momentum = 0.0 - elif args.distributed_world_size <= 32: - args.slowmo_momentum = 0.2 - elif args.distributed_world_size <= 64: - args.slowmo_momentum = 0.5 - else: - args.slowmo_momentum = 0.6 - - wrapped_model = gossip.GossipDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - nprocs_per_node=args.nprocs_per_node, - slowmo_momentum=args.slowmo_momentum, - localsgd=(args.slowmo_algorithm == "LocalSGD"), - localsgd_frequency=args.localsgd_frequency, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "fully_sharded": - try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP" - wrapped_model = model - if args.memory_efficient_fp16: - wrapped_model = wrapped_model.half() - if not args.cpu_offload: - wrapped_model = wrapped_model.to(device=device) - else: - raise ValueError("Unknown --ddp-backend: " + args.ddp_backend) - - # kill hung distributed jobs after a timeout - if getattr(args, "heartbeat_timeout", -1) > 0: - wrapped_model = DistributedTimeoutWrapper( - wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1) - ) - - return wrapped_model diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/memory.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/memory.py deleted file mode 100644 index bd494780b9dbbd1571688cd270bb9b53d113c13e..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/memory.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from contextlib import contextmanager -from functools import wraps -import torch - -__all__ = ["retry_if_cuda_oom"] - - -@contextmanager -def _ignore_torch_cuda_oom(): - """ - A context which ignores CUDA OOM exception from pytorch. - """ - try: - yield - except RuntimeError as e: - # NOTE: the string may change? - if "CUDA out of memory. " in str(e): - pass - else: - raise - - -def retry_if_cuda_oom(func): - """ - Makes a function retry itself after encountering - pytorch's CUDA OOM error. - It will first retry after calling `torch.cuda.empty_cache()`. - - If that still fails, it will then retry by trying to convert inputs to CPUs. - In this case, it expects the function to dispatch to CPU implementation. - The return values may become CPU tensors as well and it's user's - responsibility to convert it back to CUDA tensor if needed. - - Args: - func: a stateless callable that takes tensor-like objects as arguments - - Returns: - a callable which retries `func` if OOM is encountered. - - Examples: - :: - output = retry_if_cuda_oom(some_torch_function)(input1, input2) - # output may be on CPU even if inputs are on GPU - - Note: - 1. When converting inputs to CPU, it will only look at each argument and check - if it has `.device` and `.to` for conversion. Nested structures of tensors - are not supported. - - 2. Since the function might be called more than once, it has to be - stateless. - """ - - def maybe_to_cpu(x): - try: - like_gpu_tensor = x.device.type == "cuda" and hasattr(x, "to") - except AttributeError: - like_gpu_tensor = False - if like_gpu_tensor: - return x.to(device="cpu") - else: - return x - - @wraps(func) - def wrapped(*args, **kwargs): - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Clear cache and retry - torch.cuda.empty_cache() - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Try on CPU. This slows down the code significantly, therefore print a notice. 
- logger = logging.getLogger(__name__) - logger.info("Attempting to copy inputs of {} to CPU due to CUDA OOM".format(str(func))) - new_args = (maybe_to_cpu(x) for x in args) - new_kwargs = {k: maybe_to_cpu(v) for k, v in kwargs.items()} - return func(*new_args, **new_kwargs) - - return wrapped diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/records/inspection.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/records/inspection.go deleted file mode 100644 index c13b33a2bbd7557646d4096dcd1a620dcf4a963e..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/records/inspection.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/apply-templates.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/apply-templates.go deleted file mode 100644 index 2147d855574c1178b48ab41bbfdeeb98fc5688aa..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/apply-templates.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/bigscience-bloom/README.md b/spaces/PeepDaSlan9/bigscience-bloom/README.md deleted file mode 100644 index a62520078956e37e2503830c6f57427b9f63fc6c..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/bigscience-bloom/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bigscience Bloom -emoji: 📉 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: bigscience-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/logger/base.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/logger/base.py deleted file mode 100644 index f845256729458ced821762a1b8ef881e17ff9955..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/logger/base.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -from abc import ABCMeta, abstractmethod - -import numpy as np -import torch - -from ..hook import Hook - - -class LoggerHook(Hook): - """Base class for logger hooks. - - Args: - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging. - by_epoch (bool): Whether EpochBasedRunner is used. - """ - - __metaclass__ = ABCMeta - - def __init__(self, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - self.interval = interval - self.ignore_last = ignore_last - self.reset_flag = reset_flag - self.by_epoch = by_epoch - - @abstractmethod - def log(self, runner): - pass - - @staticmethod - def is_scalar(val, include_np=True, include_torch=True): - """Tell the input variable is a scalar or not. - - Args: - val: Input variable. - include_np (bool): Whether include 0-d np.ndarray as a scalar. - include_torch (bool): Whether include 0-d torch.Tensor as a scalar. - - Returns: - bool: True or False. 
- """ - if isinstance(val, numbers.Number): - return True - elif include_np and isinstance(val, np.ndarray) and val.ndim == 0: - return True - elif include_torch and isinstance(val, torch.Tensor) and len(val) == 1: - return True - else: - return False - - def get_mode(self, runner): - if runner.mode == 'train': - if 'time' in runner.log_buffer.output: - mode = 'train' - else: - mode = 'val' - elif runner.mode == 'val': - mode = 'val' - else: - raise ValueError(f"runner mode should be 'train' or 'val', " - f'but got {runner.mode}') - return mode - - def get_epoch(self, runner): - if runner.mode == 'train': - epoch = runner.epoch + 1 - elif runner.mode == 'val': - # normal val mode - # runner.epoch += 1 has been done before val workflow - epoch = runner.epoch - else: - raise ValueError(f"runner mode should be 'train' or 'val', " - f'but got {runner.mode}') - return epoch - - def get_iter(self, runner, inner_iter=False): - """Get the current training iteration step.""" - if self.by_epoch and inner_iter: - current_iter = runner.inner_iter + 1 - else: - current_iter = runner.iter + 1 - return current_iter - - def get_lr_tags(self, runner): - tags = {} - lrs = runner.current_lr() - if isinstance(lrs, dict): - for name, value in lrs.items(): - tags[f'learning_rate/{name}'] = value[0] - else: - tags['learning_rate'] = lrs[0] - return tags - - def get_momentum_tags(self, runner): - tags = {} - momentums = runner.current_momentum() - if isinstance(momentums, dict): - for name, value in momentums.items(): - tags[f'momentum/{name}'] = value[0] - else: - tags['momentum'] = momentums[0] - return tags - - def get_loggable_tags(self, - runner, - allow_scalar=True, - allow_text=False, - add_mode=True, - tags_to_skip=('time', 'data_time')): - tags = {} - for var, val in runner.log_buffer.output.items(): - if var in tags_to_skip: - continue - if self.is_scalar(val) and not allow_scalar: - continue - if isinstance(val, str) and not allow_text: - continue - if add_mode: - var = f'{self.get_mode(runner)}/{var}' - tags[var] = val - tags.update(self.get_lr_tags(runner)) - tags.update(self.get_momentum_tags(runner)) - return tags - - def before_run(self, runner): - for hook in runner.hooks[::-1]: - if isinstance(hook, LoggerHook): - hook.reset_flag = True - break - - def before_epoch(self, runner): - runner.log_buffer.clear() # clear logs of last epoch - - def after_train_iter(self, runner): - if self.by_epoch and self.every_n_inner_iters(runner, self.interval): - runner.log_buffer.average(self.interval) - elif not self.by_epoch and self.every_n_iters(runner, self.interval): - runner.log_buffer.average(self.interval) - elif self.end_of_epoch(runner) and not self.ignore_last: - # not precise but more stable - runner.log_buffer.average(self.interval) - - if runner.log_buffer.ready: - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() - - def after_train_epoch(self, runner): - if runner.log_buffer.ready: - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() - - def after_val_epoch(self, runner): - runner.log_buffer.average() - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() diff --git a/spaces/PirateHFH/IllusionDiffusion/gallery_history.py b/spaces/PirateHFH/IllusionDiffusion/gallery_history.py deleted file mode 100644 index 597229cdc89824efe68d0d2ba3f4134dafc1404a..0000000000000000000000000000000000000000 --- a/spaces/PirateHFH/IllusionDiffusion/gallery_history.py +++ /dev/null @@ -1,134 +0,0 @@ -""" -How to use: -1. 
Create a Space with a Persistent Storage attached. Filesystem will be available under `/data`. -2. Add `hf_oauth: true` to the Space metadata (README.md). Make sure to have Gradio>=3.41.0 configured. -3. Add `HISTORY_FOLDER` as a Space variable (example. `"/data/history"`). -4. Add `filelock` as dependency in `requirements.txt`. -5. Add history gallery to your Gradio app: - a. Add imports: `from gallery_history import fetch_gallery_history, show_gallery_history` - a. Add `history = show_gallery_history()` within `gr.Blocks` context. - b. Add `.then(fn=fetch_gallery_history, inputs=[prompt, result], outputs=history)` on the generate event. -""" -import json -import os -import numpy as np -import shutil -from pathlib import Path -from PIL import Image -from typing import Dict, List, Optional, Tuple -from uuid import uuid4 - -import gradio as gr -from filelock import FileLock - -_folder = os.environ.get("HISTORY_FOLDER") -if _folder is None: - print( - "'HISTORY_FOLDER' environment variable not set. User history will be saved " - "locally and will be lost when the Space instance is restarted." - ) - _folder = Path(__file__).parent / "history" -if _folder.startswith("/data") and not os.path.exists("/data"): - print( - f"'HISTORY_FOLDER' environment variable is set to '{_folder}' which doesn't exist. User history will be saved " - "locally and will be lost when the Space instance is restarted." - ) - _folder = Path(__file__).parent / "history" -HISTORY_FOLDER_PATH = Path(_folder) - -IMAGES_FOLDER_PATH = HISTORY_FOLDER_PATH / "images" -IMAGES_FOLDER_PATH.mkdir(parents=True, exist_ok=True) - - -def show_gallery_history(): - gr.Markdown( - "## Your past generations\n\n(Log in to keep a gallery of your previous generations." - " Your history will be saved and available on your next visit.)" - ) - with gr.Column(): - with gr.Row(): - gr.LoginButton(min_width=250) - gr.LogoutButton(min_width=250) - gallery = gr.Gallery( - label="Past images", - show_label=True, - elem_id="gallery", - object_fit="contain", - columns=4, - height=512, - preview=False, - show_share_button=False, - show_download_button=False, - ) - gr.Markdown( - "Make sure to save your images from time to time, this gallery may be deleted in the future." 
- ) - gallery.attach_load_event(fetch_gallery_history, every=None) - return gallery - - -def fetch_gallery_history( - prompt: Optional[str] = None, - result: Optional[np.ndarray] = None, - user: Optional[gr.OAuthProfile] = None, -): - if user is None: - return [] - try: - if prompt is not None and result is not None: # None values means no new images - new_image = Image.fromarray(result, 'RGB') - return _update_user_history(user["preferred_username"], new_image, prompt) - else: - return _read_user_history(user["preferred_username"]) - except Exception as e: - raise gr.Error(f"Error while fetching history: {e}") from e - - -#################### -# Internal helpers # -#################### - - -def _read_user_history(username: str) -> List[Tuple[str, str]]: - """Return saved history for that user.""" - with _user_lock(username): - path = _user_history_path(username) - if path.exists(): - return json.loads(path.read_text()) - return [] # No history yet - - -def _update_user_history( - username: str, new_image: Image.Image, prompt: str -) -> List[Tuple[str, str]]: - """Update history for that user and return it.""" - with _user_lock(username): - # Read existing - path = _user_history_path(username) - if path.exists(): - images = json.loads(path.read_text()) - else: - images = [] # No history yet - - # Copy image to persistent folder - images = [(_copy_image(new_image), prompt)] + images - - # Save and return - path.write_text(json.dumps(images)) - return images - - -def _user_history_path(username: str) -> Path: - return HISTORY_FOLDER_PATH / f"{username}.json" - - -def _user_lock(username: str) -> FileLock: - """Ensure history is not corrupted if concurrent calls.""" - return FileLock(f"{_user_history_path(username)}.lock") - - -def _copy_image(new_image: Image.Image) -> str: - """Copy image to the persistent storage.""" - dst = str(IMAGES_FOLDER_PATH / f"{uuid4().hex}.png") - new_image.save(dst) - return dst \ No newline at end of file diff --git a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/modules.py b/spaces/Plachta/VITS-Umamusume-voice-synthesizer/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
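# cond_layer has already projected g to 2 * hidden_channels * n_layers
# channels with a single 1x1 convolution; each loop iteration below slices
# out its own 2 * hidden_channels chunk via cond_offset rather than running
# a separate conditioning layer per block.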
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
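# The trailing dimension now carries 3 * num_bins - 1 spline parameters per
# (channel, time) position: num_bins bin widths, num_bins bin heights and
# num_bins - 1 knot derivatives for the rational-quadratic transform below.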
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/RMXK/RVC_HFF/julius/__init__.py b/spaces/RMXK/RVC_HFF/julius/__init__.py deleted file mode 100644 index 69811b0415a291ca1beb845531785ba03c57099a..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/julius/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 - -# flake8: noqa -""" -.. image:: ../logo.png - -Julius contains different Digital Signal Processing algorithms implemented -with PyTorch, so that they are differentiable and available on CUDA. -Note that all the modules implemented here can be used with TorchScript. - -For now, I have implemented: - -- `julius.resample`: fast sinc resampling. -- `julius.fftconv`: FFT based convolutions. -- `julius.lowpass`: FIR low pass filter banks. -- `julius.filters`: FIR high pass and band pass filters. -- `julius.bands`: Decomposition of a waveform signal over mel-scale frequency bands. - -Along that, you might found useful utilities in: - -- `julius.core`: DSP related functions. -- `julius.utils`: Generic utilities. - - -Please checkout [the Github repository](https://github.com/adefossez/julius) for other informations. -For a verification of the speed and correctness of Julius, check the benchmark module `bench`. - - -This package is named in this honor of -[Julius O. Smith](https://ccrma.stanford.edu/~jos/), -whose books and website were a gold mine of information for me to learn about DSP. Go checkout his website if you want -to learn more about DSP. 
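Example (a minimal sketch; the tensor shape and sample rates below are
assumptions, not taken from this package's documentation):

    import torch
    import julius

    signal = torch.randn(1, 44100)  # one second of audio at 44.1 kHz
    resampled = julius.resample_frac(signal, 44100, 16000)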
-""" - -from .bands import SplitBands, split_bands -from .fftconv import fft_conv1d, FFTConv1d -from .filters import bandpass_filter, BandPassFilter -from .filters import highpass_filter, highpass_filters, HighPassFilter, HighPassFilters -from .lowpass import lowpass_filter, lowpass_filters, LowPassFilters, LowPassFilter -from .resample import resample_frac, ResampleFrac diff --git a/spaces/RMXK/RVC_HFF/lib/infer_pack/onnx_inference.py b/spaces/RMXK/RVC_HFF/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - 
self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/Ramse/TTS_Hindi/modules/hifigan/utils/audio_processing.py b/spaces/Ramse/TTS_Hindi/modules/hifigan/utils/audio_processing.py deleted file mode 100644 index a0d63bd9c7a8cdab326ab47e9014611a7733931f..0000000000000000000000000000000000000000 --- a/spaces/Ramse/TTS_Hindi/modules/hifigan/utils/audio_processing.py +++ /dev/null @@ -1,85 +0,0 @@ -import torch -import numpy as np -from scipy.signal import get_window -import librosa.util as librosa_util - - -def window_sumsquare(window, n_frames, hop_length=200, win_length=800, - n_fft=800, dtype=np.float32, norm=None): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - n_frames : int > 0 - The number of analysis frames - hop_length : int > 0 - The number of samples to advance between frames - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - n_fft : int > 0 - The length of each analysis frame. 
- dtype : np.dtype - The data type of the output - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm)**2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample:min(n, sample + n_fft)] += win_sq[:max(0, min(n_fft, n - sample))] - return x - - -def griffin_lim(magnitudes, stft_fn, n_iters=30): - """ - PARAMS - ------ - magnitudes: spectrogram magnitudes - stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods - """ - - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size()))) - angles = angles.astype(np.float32) - angles = torch.autograd.Variable(torch.from_numpy(angles)) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - - for i in range(n_iters): - _, angles = stft_fn.transform(signal) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - return signal - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/dkm.py b/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/dkm.py deleted file mode 100644 index 58462e5d14cf9cac6e1fa551298f9fc82f93fcab..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/dkm.py +++ /dev/null @@ -1,802 +0,0 @@ -import math -import os -import numpy as np -from PIL import Image -import torch -import torch.nn as nn -import torch.nn.functional as F -from ..utils import get_tuple_transform_ops -from einops import rearrange -from ..utils.local_correlation import local_correlation - - -class ConvRefiner(nn.Module): - def __init__( - self, - in_dim=6, - hidden_dim=16, - out_dim=2, - dw=False, - kernel_size=5, - hidden_blocks=3, - displacement_emb=None, - displacement_emb_dim=None, - local_corr_radius=None, - corr_in_other=None, - no_support_fm=False, - ): - super().__init__() - self.block1 = self.create_block( - in_dim, hidden_dim, dw=dw, kernel_size=kernel_size - ) - self.hidden_blocks = nn.Sequential( - *[ - self.create_block( - hidden_dim, - hidden_dim, - dw=dw, - kernel_size=kernel_size, - ) - for hb in range(hidden_blocks) - ] - ) - self.out_conv = nn.Conv2d(hidden_dim, out_dim, 1, 1, 0) - if displacement_emb: - self.has_displacement_emb = True - self.disp_emb = nn.Conv2d(2, displacement_emb_dim, 1, 1, 0) - else: - self.has_displacement_emb = False - self.local_corr_radius = local_corr_radius - self.corr_in_other = corr_in_other - self.no_support_fm = no_support_fm - - def create_block( - self, - in_dim, - out_dim, - dw=False, - kernel_size=5, - ): - num_groups = 1 if not dw else in_dim - if dw: - assert ( - out_dim % in_dim == 0 - ), "outdim must be divisible by indim for depthwise" - conv1 = nn.Conv2d( - in_dim, - out_dim, - kernel_size=kernel_size, - stride=1, - padding=kernel_size // 2, - groups=num_groups, - ) - norm = nn.BatchNorm2d(out_dim) - relu 
= nn.ReLU(inplace=True) - conv2 = nn.Conv2d(out_dim, out_dim, 1, 1, 0) - return nn.Sequential(conv1, norm, relu, conv2) - - def forward(self, x, y, flow): - """Computes the relative refining displacement in pixels for a given image x,y and a coarse flow-field between them - - Args: - x ([type]): [description] - y ([type]): [description] - flow ([type]): [description] - - Returns: - [type]: [description] - """ - device = x.device - b, c, hs, ws = x.shape - with torch.no_grad(): - x_hat = F.grid_sample(y, flow.permute(0, 2, 3, 1), align_corners=False) - if self.has_displacement_emb: - query_coords = torch.meshgrid( - ( - torch.linspace(-1 + 1 / hs, 1 - 1 / hs, hs, device=device), - torch.linspace(-1 + 1 / ws, 1 - 1 / ws, ws, device=device), - ) - ) - query_coords = torch.stack((query_coords[1], query_coords[0])) - query_coords = query_coords[None].expand(b, 2, hs, ws) - in_displacement = flow - query_coords - emb_in_displacement = self.disp_emb(in_displacement) - if self.local_corr_radius: - # TODO: should corr have gradient? - if self.corr_in_other: - # Corr in other means take a kxk grid around the predicted coordinate in other image - local_corr = local_correlation( - x, y, local_radius=self.local_corr_radius, flow=flow - ) - else: - # Otherwise we use the warp to sample in the first image - # This is actually different operations, especially for large viewpoint changes - local_corr = local_correlation( - x, - x_hat, - local_radius=self.local_corr_radius, - ) - if self.no_support_fm: - x_hat = torch.zeros_like(x) - d = torch.cat((x, x_hat, emb_in_displacement, local_corr), dim=1) - else: - d = torch.cat((x, x_hat, emb_in_displacement), dim=1) - else: - if self.no_support_fm: - x_hat = torch.zeros_like(x) - d = torch.cat((x, x_hat), dim=1) - d = self.block1(d) - d = self.hidden_blocks(d) - d = self.out_conv(d) - certainty, displacement = d[:, :-2], d[:, -2:] - return certainty, displacement - - -class CosKernel(nn.Module): # similar to softmax kernel - def __init__(self, T, learn_temperature=False): - super().__init__() - self.learn_temperature = learn_temperature - if self.learn_temperature: - self.T = nn.Parameter(torch.tensor(T)) - else: - self.T = T - - def __call__(self, x, y, eps=1e-6): - c = torch.einsum("bnd,bmd->bnm", x, y) / ( - x.norm(dim=-1)[..., None] * y.norm(dim=-1)[:, None] + eps - ) - if self.learn_temperature: - T = self.T.abs() + 0.01 - else: - T = torch.tensor(self.T, device=c.device) - K = ((c - 1.0) / T).exp() - return K - - -class CAB(nn.Module): - def __init__(self, in_channels, out_channels): - super(CAB, self).__init__() - self.global_pooling = nn.AdaptiveAvgPool2d(1) - self.conv1 = nn.Conv2d( - in_channels, out_channels, kernel_size=1, stride=1, padding=0 - ) - self.relu = nn.ReLU() - self.conv2 = nn.Conv2d( - out_channels, out_channels, kernel_size=1, stride=1, padding=0 - ) - self.sigmod = nn.Sigmoid() - - def forward(self, x): - x1, x2 = x # high, low (old, new) - x = torch.cat([x1, x2], dim=1) - x = self.global_pooling(x) - x = self.conv1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.sigmod(x) - x2 = x * x2 - res = x2 + x1 - return res - - -class RRB(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size=3): - super(RRB, self).__init__() - self.conv1 = nn.Conv2d( - in_channels, out_channels, kernel_size=1, stride=1, padding=0 - ) - self.conv2 = nn.Conv2d( - out_channels, - out_channels, - kernel_size=kernel_size, - stride=1, - padding=kernel_size // 2, - ) - self.relu = nn.ReLU() - self.bn = nn.BatchNorm2d(out_channels) - self.conv3 = 
nn.Conv2d( - out_channels, - out_channels, - kernel_size=kernel_size, - stride=1, - padding=kernel_size // 2, - ) - - def forward(self, x): - x = self.conv1(x) - res = self.conv2(x) - res = self.bn(res) - res = self.relu(res) - res = self.conv3(res) - return self.relu(x + res) - - -class DFN(nn.Module): - def __init__( - self, - internal_dim, - feat_input_modules, - pred_input_modules, - rrb_d_dict, - cab_dict, - rrb_u_dict, - use_global_context=False, - global_dim=None, - terminal_module=None, - upsample_mode="bilinear", - align_corners=False, - ): - super().__init__() - if use_global_context: - assert ( - global_dim is not None - ), "Global dim must be provided when using global context" - self.align_corners = align_corners - self.internal_dim = internal_dim - self.feat_input_modules = feat_input_modules - self.pred_input_modules = pred_input_modules - self.rrb_d = rrb_d_dict - self.cab = cab_dict - self.rrb_u = rrb_u_dict - self.use_global_context = use_global_context - if use_global_context: - self.global_to_internal = nn.Conv2d(global_dim, self.internal_dim, 1, 1, 0) - self.global_pooling = nn.AdaptiveAvgPool2d(1) - self.terminal_module = ( - terminal_module if terminal_module is not None else nn.Identity() - ) - self.upsample_mode = upsample_mode - self._scales = [int(key) for key in self.terminal_module.keys()] - - def scales(self): - return self._scales.copy() - - def forward(self, embeddings, feats, context, key): - feats = self.feat_input_modules[str(key)](feats) - embeddings = torch.cat([feats, embeddings], dim=1) - embeddings = self.rrb_d[str(key)](embeddings) - context = self.cab[str(key)]([context, embeddings]) - context = self.rrb_u[str(key)](context) - preds = self.terminal_module[str(key)](context) - pred_coord = preds[:, -2:] - pred_certainty = preds[:, :-2] - return pred_coord, pred_certainty, context - - -class GP(nn.Module): - def __init__( - self, - kernel, - T=1, - learn_temperature=False, - only_attention=False, - gp_dim=64, - basis="fourier", - covar_size=5, - only_nearest_neighbour=False, - sigma_noise=0.1, - no_cov=False, - predict_features=False, - ): - super().__init__() - self.K = kernel(T=T, learn_temperature=learn_temperature) - self.sigma_noise = sigma_noise - self.covar_size = covar_size - self.pos_conv = torch.nn.Conv2d(2, gp_dim, 1, 1) - self.only_attention = only_attention - self.only_nearest_neighbour = only_nearest_neighbour - self.basis = basis - self.no_cov = no_cov - self.dim = gp_dim - self.predict_features = predict_features - - def get_local_cov(self, cov): - K = self.covar_size - b, h, w, h, w = cov.shape - hw = h * w - cov = F.pad(cov, 4 * (K // 2,)) # pad v_q - delta = torch.stack( - torch.meshgrid( - torch.arange(-(K // 2), K // 2 + 1), torch.arange(-(K // 2), K // 2 + 1) - ), - dim=-1, - ) - positions = torch.stack( - torch.meshgrid( - torch.arange(K // 2, h + K // 2), torch.arange(K // 2, w + K // 2) - ), - dim=-1, - ) - neighbours = positions[:, :, None, None, :] + delta[None, :, :] - points = torch.arange(hw)[:, None].expand(hw, K**2) - local_cov = cov.reshape(b, hw, h + K - 1, w + K - 1)[ - :, - points.flatten(), - neighbours[..., 0].flatten(), - neighbours[..., 1].flatten(), - ].reshape(b, h, w, K**2) - return local_cov - - def reshape(self, x): - return rearrange(x, "b d h w -> b (h w) d") - - def project_to_basis(self, x): - if self.basis == "fourier": - return torch.cos(8 * math.pi * self.pos_conv(x)) - elif self.basis == "linear": - return self.pos_conv(x) - else: - raise ValueError( - "No other bases other than fourier and linear 
currently supported in public release" - ) - - def get_pos_enc(self, y): - b, c, h, w = y.shape - coarse_coords = torch.meshgrid( - ( - torch.linspace(-1 + 1 / h, 1 - 1 / h, h, device=y.device), - torch.linspace(-1 + 1 / w, 1 - 1 / w, w, device=y.device), - ) - ) - - coarse_coords = torch.stack((coarse_coords[1], coarse_coords[0]), dim=-1)[ - None - ].expand(b, h, w, 2) - coarse_coords = rearrange(coarse_coords, "b h w d -> b d h w") - coarse_embedded_coords = self.project_to_basis(coarse_coords) - return coarse_embedded_coords - - def forward(self, x, y, **kwargs): - b, c, h1, w1 = x.shape - b, c, h2, w2 = y.shape - f = self.get_pos_enc(y) - if self.predict_features: - f = f + y[:, : self.dim] # Stupid way to predict features - b, d, h2, w2 = f.shape - # assert x.shape == y.shape - x, y, f = self.reshape(x), self.reshape(y), self.reshape(f) - K_xx = self.K(x, x) - K_yy = self.K(y, y) - K_xy = self.K(x, y) - K_yx = K_xy.permute(0, 2, 1) - sigma_noise = self.sigma_noise * torch.eye(h2 * w2, device=x.device)[None, :, :] - # Due to https://github.com/pytorch/pytorch/issues/16963 annoying warnings, remove batch if N large - if len(K_yy[0]) > 2000: - K_yy_inv = torch.cat( - [ - torch.linalg.inv(K_yy[k : k + 1] + sigma_noise[k : k + 1]) - for k in range(b) - ] - ) - else: - K_yy_inv = torch.linalg.inv(K_yy + sigma_noise) - - mu_x = K_xy.matmul(K_yy_inv.matmul(f)) - mu_x = rearrange(mu_x, "b (h w) d -> b d h w", h=h1, w=w1) - if not self.no_cov: - cov_x = K_xx - K_xy.matmul(K_yy_inv.matmul(K_yx)) - cov_x = rearrange( - cov_x, "b (h w) (r c) -> b h w r c", h=h1, w=w1, r=h1, c=w1 - ) - local_cov_x = self.get_local_cov(cov_x) - local_cov_x = rearrange(local_cov_x, "b h w K -> b K h w") - gp_feats = torch.cat((mu_x, local_cov_x), dim=1) - else: - gp_feats = mu_x - return gp_feats - - -class Encoder(nn.Module): - def __init__(self, resnet): - super().__init__() - self.resnet = resnet - - def forward(self, x): - x0 = x - b, c, h, w = x.shape - x = self.resnet.conv1(x) - x = self.resnet.bn1(x) - x1 = self.resnet.relu(x) - - x = self.resnet.maxpool(x1) - x2 = self.resnet.layer1(x) - - x3 = self.resnet.layer2(x2) - - x4 = self.resnet.layer3(x3) - - x5 = self.resnet.layer4(x4) - feats = {32: x5, 16: x4, 8: x3, 4: x2, 2: x1, 1: x0} - return feats - - def train(self, mode=True): - super().train(mode) - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - pass - - -class Decoder(nn.Module): - def __init__( - self, - embedding_decoder, - gps, - proj, - conv_refiner, - transformers=None, - detach=False, - scales="all", - pos_embeddings=None, - ): - super().__init__() - self.embedding_decoder = embedding_decoder - self.gps = gps - self.proj = proj - self.conv_refiner = conv_refiner - self.detach = detach - if scales == "all": - self.scales = ["32", "16", "8", "4", "2", "1"] - else: - self.scales = scales - - def upsample_preds(self, flow, certainty, query, support): - b, hs, ws, d = flow.shape - b, c, h, w = query.shape - flow = flow.permute(0, 3, 1, 2) - certainty = F.interpolate( - certainty, size=(h, w), align_corners=False, mode="bilinear" - ) - flow = F.interpolate(flow, size=(h, w), align_corners=False, mode="bilinear") - delta_certainty, delta_flow = self.conv_refiner["1"](query, support, flow) - flow = torch.stack( - ( - flow[:, 0] + delta_flow[:, 0] / (4 * w), - flow[:, 1] + delta_flow[:, 1] / (4 * h), - ), - dim=1, - ) - flow = flow.permute(0, 2, 3, 1) - certainty = certainty + delta_certainty - return flow, certainty - - def get_placeholder_flow(self, b, h, w, device): - 
coarse_coords = torch.meshgrid( - ( - torch.linspace(-1 + 1 / h, 1 - 1 / h, h, device=device), - torch.linspace(-1 + 1 / w, 1 - 1 / w, w, device=device), - ) - ) - coarse_coords = torch.stack((coarse_coords[1], coarse_coords[0]), dim=-1)[ - None - ].expand(b, h, w, 2) - coarse_coords = rearrange(coarse_coords, "b h w d -> b d h w") - return coarse_coords - - def forward(self, f1, f2, upsample=False, dense_flow=None, dense_certainty=None): - coarse_scales = self.embedding_decoder.scales() - all_scales = self.scales if not upsample else ["8", "4", "2", "1"] - sizes = {scale: f1[scale].shape[-2:] for scale in f1} - h, w = sizes[1] - b = f1[1].shape[0] - device = f1[1].device - coarsest_scale = int(all_scales[0]) - old_stuff = torch.zeros( - b, - self.embedding_decoder.internal_dim, - *sizes[coarsest_scale], - device=f1[coarsest_scale].device - ) - dense_corresps = {} - if not upsample: - dense_flow = self.get_placeholder_flow(b, *sizes[coarsest_scale], device) - dense_certainty = 0.0 - else: - dense_flow = F.interpolate( - dense_flow, - size=sizes[coarsest_scale], - align_corners=False, - mode="bilinear", - ) - dense_certainty = F.interpolate( - dense_certainty, - size=sizes[coarsest_scale], - align_corners=False, - mode="bilinear", - ) - for new_scale in all_scales: - ins = int(new_scale) - f1_s, f2_s = f1[ins], f2[ins] - if new_scale in self.proj: - f1_s, f2_s = self.proj[new_scale](f1_s), self.proj[new_scale](f2_s) - b, c, hs, ws = f1_s.shape - if ins in coarse_scales: - old_stuff = F.interpolate( - old_stuff, size=sizes[ins], mode="bilinear", align_corners=False - ) - new_stuff = self.gps[new_scale](f1_s, f2_s, dense_flow=dense_flow) - dense_flow, dense_certainty, old_stuff = self.embedding_decoder( - new_stuff, f1_s, old_stuff, new_scale - ) - - if new_scale in self.conv_refiner: - delta_certainty, displacement = self.conv_refiner[new_scale]( - f1_s, f2_s, dense_flow - ) - dense_flow = torch.stack( - ( - dense_flow[:, 0] + ins * displacement[:, 0] / (4 * w), - dense_flow[:, 1] + ins * displacement[:, 1] / (4 * h), - ), - dim=1, - ) - dense_certainty = ( - dense_certainty + delta_certainty - ) # predict both certainty and displacement - - dense_corresps[ins] = { - "dense_flow": dense_flow, - "dense_certainty": dense_certainty, - } - - if new_scale != "1": - dense_flow = F.interpolate( - dense_flow, - size=sizes[ins // 2], - align_corners=False, - mode="bilinear", - ) - - dense_certainty = F.interpolate( - dense_certainty, - size=sizes[ins // 2], - align_corners=False, - mode="bilinear", - ) - if self.detach: - dense_flow = dense_flow.detach() - dense_certainty = dense_certainty.detach() - return dense_corresps - - -class RegressionMatcher(nn.Module): - def __init__( - self, - encoder, - decoder, - h=384, - w=512, - use_contrastive_loss=False, - alpha=1, - beta=0, - sample_mode="threshold", - upsample_preds=False, - symmetric=False, - name=None, - use_soft_mutual_nearest_neighbours=False, - ): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.w_resized = w - self.h_resized = h - self.og_transforms = get_tuple_transform_ops(resize=None, normalize=True) - self.use_contrastive_loss = use_contrastive_loss - self.alpha = alpha - self.beta = beta - self.sample_mode = sample_mode - self.upsample_preds = upsample_preds - self.symmetric = symmetric - self.name = name - self.sample_thresh = 0.05 - self.upsample_res = (864, 1152) - if use_soft_mutual_nearest_neighbours: - assert symmetric, "MNS requires symmetric inference" - self.use_soft_mutual_nearest_neighbours = 
use_soft_mutual_nearest_neighbours - - def extract_backbone_features(self, batch, batched=True, upsample=True): - # TODO: only extract stride [1,2,4,8] for upsample = True - x_q = batch["query"] - x_s = batch["support"] - if batched: - X = torch.cat((x_q, x_s)) - feature_pyramid = self.encoder(X) - else: - feature_pyramid = self.encoder(x_q), self.encoder(x_s) - return feature_pyramid - - def sample( - self, - dense_matches, - dense_certainty, - num=10000, - ): - if "threshold" in self.sample_mode: - upper_thresh = self.sample_thresh - dense_certainty = dense_certainty.clone() - dense_certainty[dense_certainty > upper_thresh] = 1 - elif "pow" in self.sample_mode: - dense_certainty = dense_certainty ** (1 / 3) - elif "naive" in self.sample_mode: - dense_certainty = torch.ones_like(dense_certainty) - matches, certainty = ( - dense_matches.reshape(-1, 4), - dense_certainty.reshape(-1), - ) - expansion_factor = 4 if "balanced" in self.sample_mode else 1 - good_samples = torch.multinomial( - certainty, - num_samples=min(expansion_factor * num, len(certainty)), - replacement=False, - ) - good_matches, good_certainty = matches[good_samples], certainty[good_samples] - if "balanced" not in self.sample_mode: - return good_matches, good_certainty - - from ..utils.kde import kde - - density = kde(good_matches, std=0.1) - p = 1 / (density + 1) - p[ - density < 10 - ] = 1e-7 # Basically should have at least 10 perfect neighbours, or around 100 ok ones - balanced_samples = torch.multinomial( - p, num_samples=min(num, len(good_certainty)), replacement=False - ) - return good_matches[balanced_samples], good_certainty[balanced_samples] - - def forward(self, batch, batched=True): - feature_pyramid = self.extract_backbone_features(batch, batched=batched) - if batched: - f_q_pyramid = { - scale: f_scale.chunk(2)[0] for scale, f_scale in feature_pyramid.items() - } - f_s_pyramid = { - scale: f_scale.chunk(2)[1] for scale, f_scale in feature_pyramid.items() - } - else: - f_q_pyramid, f_s_pyramid = feature_pyramid - dense_corresps = self.decoder(f_q_pyramid, f_s_pyramid) - if self.training and self.use_contrastive_loss: - return dense_corresps, (f_q_pyramid, f_s_pyramid) - else: - return dense_corresps - - def forward_symmetric(self, batch, upsample=False, batched=True): - feature_pyramid = self.extract_backbone_features( - batch, upsample=upsample, batched=batched - ) - f_q_pyramid = feature_pyramid - f_s_pyramid = { - scale: torch.cat((f_scale.chunk(2)[1], f_scale.chunk(2)[0])) - for scale, f_scale in feature_pyramid.items() - } - dense_corresps = self.decoder( - f_q_pyramid, - f_s_pyramid, - upsample=upsample, - **(batch["corresps"] if "corresps" in batch else {}) - ) - return dense_corresps - - def to_pixel_coordinates(self, matches, H_A, W_A, H_B, W_B): - kpts_A, kpts_B = matches[..., :2], matches[..., 2:] - kpts_A = torch.stack( - (W_A / 2 * (kpts_A[..., 0] + 1), H_A / 2 * (kpts_A[..., 1] + 1)), axis=-1 - ) - kpts_B = torch.stack( - (W_B / 2 * (kpts_B[..., 0] + 1), H_B / 2 * (kpts_B[..., 1] + 1)), axis=-1 - ) - return kpts_A, kpts_B - - def match(self, im1_path, im2_path, *args, batched=False, device=None): - assert not ( - batched and self.upsample_preds - ), "Cannot upsample preds if in batchmode (as we don't have access to high res images). 
You can turn off upsample_preds by model.upsample_preds = False " - if isinstance(im1_path, (str, os.PathLike)): - im1, im2 = Image.open(im1_path), Image.open(im2_path) - else: # assume it is a PIL Image - im1, im2 = im1_path, im2_path - if device is None: - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - symmetric = self.symmetric - self.train(False) - with torch.no_grad(): - if not batched: - b = 1 - w, h = im1.size - w2, h2 = im2.size - # Get images in good format - ws = self.w_resized - hs = self.h_resized - - test_transform = get_tuple_transform_ops( - resize=(hs, ws), normalize=True - ) - query, support = test_transform((im1, im2)) - batch = { - "query": query[None].to(device), - "support": support[None].to(device), - } - else: - b, c, h, w = im1.shape - b, c, h2, w2 = im2.shape - assert w == w2 and h == h2, "For batched images we assume same size" - batch = {"query": im1.to(device), "support": im2.to(device)} - hs, ws = self.h_resized, self.w_resized - finest_scale = 1 - # Run matcher - if symmetric: - dense_corresps = self.forward_symmetric(batch, batched=True) - else: - dense_corresps = self.forward(batch, batched=True) - - if self.upsample_preds: - hs, ws = self.upsample_res - low_res_certainty = F.interpolate( - dense_corresps[16]["dense_certainty"], - size=(hs, ws), - align_corners=False, - mode="bilinear", - ) - cert_clamp = 0 - factor = 0.5 - low_res_certainty = ( - factor * low_res_certainty * (low_res_certainty < cert_clamp) - ) - - if self.upsample_preds: - test_transform = get_tuple_transform_ops( - resize=(hs, ws), normalize=True - ) - query, support = test_transform((im1, im2)) - query, support = query[None].to(device), support[None].to(device) - batch = { - "query": query, - "support": support, - "corresps": dense_corresps[finest_scale], - } - if symmetric: - dense_corresps = self.forward_symmetric( - batch, upsample=True, batched=True - ) - else: - dense_corresps = self.forward(batch, batched=True, upsample=True) - query_to_support = dense_corresps[finest_scale]["dense_flow"] - dense_certainty = dense_corresps[finest_scale]["dense_certainty"] - - # Get certainty interpolation - dense_certainty = dense_certainty - low_res_certainty - query_to_support = query_to_support.permute(0, 2, 3, 1) - # Create im1 meshgrid - query_coords = torch.meshgrid( - ( - torch.linspace(-1 + 1 / hs, 1 - 1 / hs, hs, device=device), - torch.linspace(-1 + 1 / ws, 1 - 1 / ws, ws, device=device), - ) - ) - query_coords = torch.stack((query_coords[1], query_coords[0])) - query_coords = query_coords[None].expand(b, 2, hs, ws) - dense_certainty = dense_certainty.sigmoid() # logits -> probs - query_coords = query_coords.permute(0, 2, 3, 1) - if (query_to_support.abs() > 1).any() and True: - wrong = (query_to_support.abs() > 1).sum(dim=-1) > 0 - dense_certainty[wrong[:, None]] = 0 - - query_to_support = torch.clamp(query_to_support, -1, 1) - if symmetric: - support_coords = query_coords - qts, stq = query_to_support.chunk(2) - q_warp = torch.cat((query_coords, qts), dim=-1) - s_warp = torch.cat((stq, support_coords), dim=-1) - warp = torch.cat((q_warp, s_warp), dim=2) - dense_certainty = torch.cat(dense_certainty.chunk(2), dim=3)[:, 0] - else: - warp = torch.cat((query_coords, query_to_support), dim=-1) - if batched: - return (warp, dense_certainty) - else: - return ( - warp[0], - dense_certainty[0], - ) diff --git a/spaces/Realcat/image-matching-webui/third_party/SuperGluePretrainedNetwork/README.md 
b/spaces/Realcat/image-matching-webui/third_party/SuperGluePretrainedNetwork/README.md deleted file mode 100644 index ab08335ce7bb237fd8108470d53b0aac11acc01f..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SuperGluePretrainedNetwork/README.md +++ /dev/null @@ -1,388 +0,0 @@ - - -### Research @ Magic Leap (CVPR 2020, Oral) - -# SuperGlue Inference and Evaluation Demo Script - -## Introduction -SuperGlue is a CVPR 2020 research project done at Magic Leap. The SuperGlue network is a Graph Neural Network combined with an Optimal Matching layer that is trained to perform matching on two sets of sparse image features. This repo includes PyTorch code and pretrained weights for running the SuperGlue matching network on top of [SuperPoint](https://arxiv.org/abs/1712.07629) keypoints and descriptors. Given a pair of images, you can use this repo to extract matching features across the image pair. - -

- -

- -SuperGlue operates as a "middle-end," performing context aggregation, matching, and filtering in a single end-to-end architecture. For more details, please see: - -* Full paper PDF: [SuperGlue: Learning Feature Matching with Graph Neural Networks](https://arxiv.org/abs/1911.11763). - -* Authors: *Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich* - -* Website: [psarlin.com/superglue](https://psarlin.com/superglue) for videos, slides, recent updates, and more visualizations. - -* `hloc`: a new toolbox for visual localization and SfM with SuperGlue, available at [cvg/Hierarchical-Localization](https://github.com/cvg/Hierarchical-Localization/). Winner of 3 CVPR 2020 competitions on localization and image matching! - -We provide two pre-trained weights files: an indoor model trained on ScanNet data, and an outdoor model trained on MegaDepth data. Both models are inside the [weights directory](./models/weights). By default, the demo will run the **indoor** model. - -## Dependencies -* Python 3 >= 3.5 -* PyTorch >= 1.1 -* OpenCV >= 3.4 (4.1.2.30 recommended for best GUI keyboard interaction, see this [note](#additional-notes)) -* Matplotlib >= 3.1 -* NumPy >= 1.18 - -Simply run the following command: `pip3 install numpy opencv-python torch matplotlib` - -## Contents -There are two main top-level scripts in this repo: - -1. `demo_superglue.py` : runs a live demo on a webcam, IP camera, image directory or movie file -2. `match_pairs.py`: reads image pairs from files and dumps matches to disk (also runs evaluation if ground truth relative poses are provided) - -## Live Matching Demo Script (`demo_superglue.py`) -This demo runs SuperPoint + SuperGlue feature matching on an anchor image and live image. You can update the anchor image by pressing the `n` key. The demo can read image streams from a USB or IP camera, a directory containing images, or a video file. You can pass all of these inputs using the `--input` flag. - -### Run the demo on a live webcam - -Run the demo on the default USB webcam (ID #0), running on a CUDA GPU if one is found: - -```sh -./demo_superglue.py -``` - -Keyboard control: - -* `n`: select the current frame as the anchor -* `e`/`r`: increase/decrease the keypoint confidence threshold -* `d`/`f`: increase/decrease the match filtering threshold -* `k`: toggle the visualization of keypoints -* `q`: quit - -Run the demo on 320x240 images running on the CPU: - -```sh -./demo_superglue.py --resize 320 240 --force_cpu -``` - -The `--resize` flag can be used to resize the input image in three ways: - -1. `--resize` `width` `height` : will resize to exact `width` x `height` dimensions -2. `--resize` `max_dimension` : will resize largest input image dimension to `max_dimension` -3. `--resize` `-1` : will not resize (i.e. use original image dimensions) - -The default will resize images to `640x480`. - -### Run the demo on a directory of images - -The `--input` flag also accepts a path to a directory. We provide a directory of sample images from a sequence. 
To run the demo on the directory of images in `freiburg_sequence/` on a headless server (will not display to the screen) and write the output visualization images to `dump_demo_sequence/`: - -```sh -./demo_superglue.py --input assets/freiburg_sequence/ --output_dir dump_demo_sequence --resize 320 240 --no_display -``` - -You should see this output on the sample Freiburg-TUM RGBD sequence: - - - -The matches are colored by their predicted confidence in a jet colormap (Red: more confident, Blue: less confident). - -### Additional useful command line parameters -* Use `--image_glob` to change the image file extension (default: `*.png`, `*.jpg`, `*.jpeg`). -* Use `--skip` to skip intermediate frames (default: `1`). -* Use `--max_length` to cap the total number of frames processed (default: `1000000`). -* Use `--show_keypoints` to visualize the detected keypoints (default: `False`). - -## Run Matching+Evaluation (`match_pairs.py`) - -This repo also contains a script `match_pairs.py` that runs the matching from a list of image pairs. With this script, you can: - -* Run the matcher on a set of image pairs (no ground truth needed) -* Visualize the keypoints and matches, based on their confidence -* Evaluate and visualize the match correctness, if the ground truth relative poses and intrinsics are provided -* Save the keypoints, matches, and evaluation results for further processing -* Collate evaluation results over many pairs and generate result tables - -### Matches only mode - -The simplest usage of this script will process the image pairs listed in a given text file and dump the keypoints and matches to compressed numpy `npz` files. We provide the challenging ScanNet pairs from the main paper in `assets/example_indoor_pairs/`. Running the following will run SuperPoint + SuperGlue on each image pair, and dump the results to `dump_match_pairs/`: - -```sh -./match_pairs.py -``` - -The resulting `.npz` files can be read from Python as follows: - -```python ->>> import numpy as np ->>> path = 'dump_match_pairs/scene0711_00_frame-001680_scene0711_00_frame-001995_matches.npz' ->>> npz = np.load(path) ->>> npz.files -['keypoints0', 'keypoints1', 'matches', 'match_confidence'] ->>> npz['keypoints0'].shape -(382, 2) ->>> npz['keypoints1'].shape -(391, 2) ->>> npz['matches'].shape -(382,) ->>> np.sum(npz['matches']>-1) -115 ->>> npz['match_confidence'].shape -(382,) -``` - -For each keypoint in `keypoints0`, the `matches` array indicates the index of the matching keypoint in `keypoints1`, or `-1` if the keypoint is unmatched. - -### Visualization mode - -You can add the flag `--viz` to dump image outputs which visualize the matches: - -```sh -./match_pairs.py --viz -``` - -You should see images like this inside of `dump_match_pairs/` (or something very close to it, see this [note](#a-note-on-reproducibility)): - - - -The matches are colored by their predicted confidence in a jet colormap (Red: more confident, Blue: less confident). - -### Evaluation mode - -You can also estimate the pose using RANSAC + Essential Matrix decomposition and evaluate it if the ground truth relative poses and intrinsics are provided in the input `.txt` files. Each `.txt` file contains three key ground truth matrices: a 3x3 intrinsics matrix of image0: `K0`, a 3x3 intrinsics matrix of image1: `K1` , and a 4x4 matrix of the relative pose extrinsics `T_0to1`. 
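The `match_pairs.py --eval` mode described next computes these pose metrics for you. Purely as an illustration of how the dumped matches and these ground-truth matrices fit together, here is a minimal sketch of the same idea. The placeholder `K0`/`K1`/`T_0to1` values and the RANSAC threshold are assumptions for the example (in practice they come from the ground-truth pairs file described further below), and the translation error is only defined up to sign because the essential matrix does not recover scale.

```python
import numpy as np
import cv2

# Hypothetical output file, following the naming scheme shown earlier in this README.
npz = np.load('dump_match_pairs/scene0711_00_frame-001680_scene0711_00_frame-001995_matches.npz')
kpts0, kpts1, matches = npz['keypoints0'], npz['keypoints1'], npz['matches']

# Keep only matched keypoints: matches[i] is an index into keypoints1, or -1 if unmatched.
valid = matches > -1
mkpts0 = kpts0[valid]
mkpts1 = kpts1[matches[valid]]

# Placeholders: in practice K0, K1 and T_0to1 are read from the ground-truth .txt file.
K0 = np.eye(3)      # 3x3 intrinsics of image0 (placeholder)
K1 = np.eye(3)      # 3x3 intrinsics of image1 (placeholder)
T_0to1 = np.eye(4)  # 4x4 relative pose from image0 to image1 (placeholder)

def normalize(kpts, K):
    # Convert pixel coordinates to normalized camera coordinates.
    return (kpts - K[[0, 1], [2, 2]]) / K[[0, 1], [0, 1]]

pts0, pts1 = normalize(mkpts0, K0), normalize(mkpts1, K1)

# Estimate the essential matrix with RANSAC and decompose it into a relative pose.
E, inliers = cv2.findEssentialMat(pts0, pts1, np.eye(3), method=cv2.RANSAC,
                                  prob=0.999, threshold=1e-3)  # threshold is an assumption
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, np.eye(3), mask=inliers)

# Angular errors against the ground truth (translation only up to sign/scale).
R_gt, t_gt = T_0to1[:3, :3], T_0to1[:3, 3]
err_R = np.rad2deg(np.arccos(np.clip((np.trace(R_gt.T @ R) - 1) / 2, -1.0, 1.0)))
cos_t = abs(t_gt @ t.squeeze()) / (np.linalg.norm(t_gt) * np.linalg.norm(t) + 1e-8)
err_t = np.rad2deg(np.arccos(np.clip(cos_t, 0.0, 1.0)))
print(f'rotation error: {err_R:.2f} deg, translation error: {err_t:.2f} deg')
```

This is not the repository's evaluation code, only a sketch of the computation that `--eval` reports in aggregate.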
- -To run the evaluation on the sample set of images (by default reading `assets/scannet_sample_pairs_with_gt.txt`), you can run: - -```sh -./match_pairs.py --eval -``` - - -Since you enabled `--eval`, you should see collated results printed to the terminal. For the example images provided, you should get the following numbers (or something very close to it, see this [note](#a-note-on-reproducibility)): - -```txt -Evaluation Results (mean over 15 pairs): -AUC@5 AUC@10 AUC@20 Prec MScore -26.99 48.40 64.47 73.52 19.60 -``` - -The resulting `.npz` files in `dump_match_pairs/` will now contain scalar values related to the evaluation, computed on the sample images provided. Here is what you should find in one of the generated evaluation files: - -```python ->>> import numpy as np ->>> path = 'dump_match_pairs/scene0711_00_frame-001680_scene0711_00_frame-001995_evaluation.npz' ->>> npz = np.load(path) ->>> print(npz.files) -['error_t', 'error_R', 'precision', 'matching_score', 'num_correct', 'epipolar_errors'] -``` - -You can also visualize the evaluation metrics by running the following command: - -```sh -./match_pairs.py --eval --viz -``` - -You should also now see additional images in `dump_match_pairs/` which visualize the evaluation numbers (or something very close to it, see this [note](#a-note-on-reproducibility)): - - - -The top left corner of the image shows the pose error and number of inliers, while the lines are colored by their epipolar error computed with the ground truth relative pose (red: higher error, green: lower error). - -### Running on sample outdoor pairs - -
- [Click to expand] - -In this repo, we also provide a few challenging Phototourism pairs, so that you can re-create some of the figures from the paper. Run the following command to perform matching and visualization (no ground truth is provided, see this [note](#reproducing-outdoor-evaluation-final-table)) on the provided pairs: - -```sh -./match_pairs.py --resize 1600 --superglue outdoor --max_keypoints 2048 --nms_radius 3 --resize_float --input_dir assets/phototourism_sample_images/ --input_pairs assets/phototourism_sample_pairs.txt --output_dir dump_match_pairs_outdoor --viz -``` - -You should now see image pairs such as these in `dump_match_pairs_outdoor/` (or something very close to it, see this [note](#a-note-on-reproducibility)): - - - -
- -### Recommended settings for indoor / outdoor - -
- [Click to expand] - -For **indoor** images, we recommend the following settings (these are the defaults): - -```sh -./match_pairs.py --resize 640 --superglue indoor --max_keypoints 1024 --nms_radius 4 -``` - -For **outdoor** images, we recommend the following settings: - -```sh -./match_pairs.py --resize 1600 --superglue outdoor --max_keypoints 2048 --nms_radius 3 --resize_float -``` - -You can provide your own list of pairs `--input_pairs` for images contained in `--input_dir`. Images can be resized before network inference with `--resize`. If you are re-running the same evaluation many times, you can use the `--cache` flag to reuse old computation. -
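To apply these settings to your own images, `match_pairs.py` also needs a pairs list passed with `--input_pairs`. As a minimal sketch (assuming a pairs file that simply lists the two image filenames per line, as in the sample Phototourism pairs, and a hypothetical `my_images/` directory), consecutive images could be paired like this:

```python
import os

# Hypothetical locations; adjust to your own data.
image_dir = 'my_images/'
pairs_path = 'my_pairs.txt'

# Collect image filenames (paths in the pairs file are interpreted relative to --input_dir).
exts = ('.png', '.jpg', '.jpeg')
names = sorted(f for f in os.listdir(image_dir) if f.lower().endswith(exts))

# Pair each image with the next one; any pairing strategy can be used here.
with open(pairs_path, 'w') as f:
    for name0, name1 in zip(names[:-1], names[1:]):
        f.write(f'{name0} {name1}\n')
```

The resulting file could then be matched with the outdoor settings above, e.g. `./match_pairs.py --input_dir my_images/ --input_pairs my_pairs.txt --output_dir my_matches --resize 1600 --superglue outdoor --max_keypoints 2048 --nms_radius 3 --resize_float`.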
- -### Test set pair file format explained - -
- [Click to expand] - -We provide the list of ScanNet test pairs in `assets/scannet_test_pairs_with_gt.txt` (with ground truth) and the Phototourism test pairs in `assets/phototourism_test_pairs.txt` (without ground truth) used to evaluate the matching from the paper. Each line corresponds to one pair and is structured as follows: - -``` -path_image_A path_image_B exif_rotationA exif_rotationB [KA_0 ... KA_8] [KB_0 ... KB_8] [T_AB_0 ... T_AB_15] -``` - -The `path_image_A` and `path_image_B` entries are paths to image A and B, respectively. The `exif_rotation` is an integer in the range [0, 3] that comes from the original EXIF metadata associated with the image, where 0: no rotation, 1: 90 degrees clockwise, 2: 180 degrees clockwise, 3: 270 degrees clockwise. If the EXIF data is not known, you can just provide a zero here and no rotation will be performed. `KA` and `KB` are the flattened `3x3` matrices of image A and image B intrinsics. `T_AB` is a flattened `4x4` matrix of the extrinsics between the pair. -
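Each line therefore has 38 whitespace-separated fields: 2 paths, 2 EXIF rotations, 9 + 9 intrinsics entries and 16 extrinsics entries. As an illustration only, here is a short sketch of how one such line could be parsed; the helper name is ours, `match_pairs.py` has its own reader:

```python
import numpy as np

def parse_pair_line(line):
    """Split one ground-truth pair line (38 whitespace-separated fields:
    2 paths + 2 EXIF rotations + 9 + 9 + 16 matrix entries) into its parts."""
    fields = line.split()
    path_A, path_B = fields[0], fields[1]
    rot_A, rot_B = int(fields[2]), int(fields[3])
    K_A = np.array(fields[4:13], dtype=float).reshape(3, 3)
    K_B = np.array(fields[13:22], dtype=float).reshape(3, 3)
    T_AB = np.array(fields[22:38], dtype=float).reshape(4, 4)
    return path_A, path_B, rot_A, rot_B, K_A, K_B, T_AB

# Example: parse the first pair of the ScanNet sample file mentioned above.
with open('assets/scannet_sample_pairs_with_gt.txt') as f:
    path_A, path_B, rot_A, rot_B, K_A, K_B, T_AB = parse_pair_line(f.readline())
```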
- -### Reproducing the indoor evaluation on ScanNet - -
- [Click to expand] - -We provide the groundtruth for ScanNet in our format in the file `assets/scannet_test_pairs_with_gt.txt` for convenience. In order to reproduce similar tables to what was in the paper, you will need to download the dataset (we do not provide the raw test images). To download the ScanNet dataset, do the following: - -1. Head to the [ScanNet](https://github.com/ScanNet/ScanNet) github repo to download the ScanNet test set (100 scenes). -2. You will need to extract the raw sensor data from the 100 `.sens` files in each scene in the test set using the [SensReader](https://github.com/ScanNet/ScanNet/tree/master/SensReader) tool. - -Once the ScanNet dataset is downloaded in `~/data/scannet`, you can run the following: - -```sh -./match_pairs.py --input_dir ~/data/scannet --input_pairs assets/scannet_test_pairs_with_gt.txt --output_dir dump_scannet_test_results --eval -``` - -You should get the following table for ScanNet (or something very close to it, see this [note](#a-note-on-reproducibility)): - -```txt -Evaluation Results (mean over 1500 pairs): -AUC@5 AUC@10 AUC@20 Prec MScore -16.12 33.76 51.79 84.37 31.14 -``` - -
- -### Reproducing the outdoor evaluation on YFCC - -
- [Click to expand] - -We provide the groundtruth for YFCC in our format in the file `assets/yfcc_test_pairs_with_gt.txt` for convenience. In order to reproduce similar tables to what was in the paper, you will need to download the dataset (we do not provide the raw test images). To download the YFCC dataset, you can use the [OANet](https://github.com/zjhthu/OANet) repo: - -```sh -git clone https://github.com/zjhthu/OANet -cd OANet -bash download_data.sh raw_data raw_data_yfcc.tar.gz 0 8 -tar -xvf raw_data_yfcc.tar.gz -mv raw_data/yfcc100m ~/data -``` - -Once the YFCC dataset is downloaded in `~/data/yfcc100m`, you can run the following: - -```sh -./match_pairs.py --input_dir ~/data/yfcc100m --input_pairs assets/yfcc_test_pairs_with_gt.txt --output_dir dump_yfcc_test_results --eval --resize 1600 --superglue outdoor --max_keypoints 2048 --nms_radius 3 --resize_float -``` - -You should get the following table for YFCC (or something very close to it, see this [note](#a-note-on-reproducibility)): - -```txt -Evaluation Results (mean over 4000 pairs): -AUC@5 AUC@10 AUC@20 Prec MScore -39.02 59.51 75.72 98.72 23.61 -``` - -
- -### Reproducing outdoor evaluation on Phototourism - -
- [Click to expand] - -The Phototourism results shown in the paper were produced using data similar to the test set from the [Image Matching Challenge 2020](https://vision.uvic.ca/image-matching-challenge/), which holds the ground truth data private for the test set. We list the pairs we used in `assets/phototourism_test_pairs.txt`. To reproduce similar numbers on this test set, please submit to the challenge benchmark. While the challenge is still live, we cannot share the test set publicly since we want to help maintain the integrity of the challenge. - -
- -### Correcting EXIF rotation data in YFCC and Phototourism - -
- [Click to expand] - -In this repo, we provide manually corrected EXIF rotation data for the outdoor evaluations on YFCC and Phototourism. For the YFCC dataset we found 7 images with incorrect EXIF rotation flags, resulting in 148 pairs out of 4000 being corrected. For Phototourism, we found 36 images with incorrect EXIF rotation flags, resulting in 212 out of 2200 pairs being corrected. - -The SuperGlue paper reports the results of SuperGlue **without** the corrected rotations, while the numbers in this README are reported **with** the corrected rotations. We found that our final conclusions from the evaluation still hold with or without the corrected rotations. For backwards compatibility, we included the original, uncorrected EXIF rotation data in `assets/phototourism_test_pairs_original.txt` and `assets/yfcc_test_pairs_with_gt_original.txt` respectively. - -
- -### Outdoor training / validation scene splits of MegaDepth - -
- [Click to expand] - -For training and validation of the outdoor model, we used scenes from the [MegaDepth dataset](http://www.cs.cornell.edu/projects/megadepth/). We provide the list of scenes used to train the outdoor model in the `assets/` directory: - -* Training set: `assets/megadepth_train_scenes.txt` -* Validation set: `assets/megadepth_validation_scenes.txt` - -
- -### A note on reproducibility - -
- [Click to expand] - -After simplifying the model code and evaluation code and preparing it for release, we made some improvements and tweaks that result in slightly different numbers than what was reported in the paper. The numbers and figures reported in the README were done using Ubuntu 16.04, OpenCV 3.4.5, and PyTorch 1.1.0. Even with matching the library versions, we observed some slight differences across Mac and Ubuntu, which we believe are due to differences in OpenCV's image resize function implementation and randomization of RANSAC. -
- -### Creating high-quality PDF visualizations and faster visualization with --fast_viz - -
- [Click to expand] - -When generating output images with `match_pairs.py`, the default `--viz` flag uses a Matplotlib renderer which allows for the generation of camera-ready PDF visualizations if you additionally use `--viz_extension pdf` instead of the default png extension. - -``` -./match_pairs.py --viz --viz_extension pdf -``` - -Alternatively, you might want to save visualization images but have the generation be much faster. You can use the `--fast_viz` flag to use an OpenCV-based image renderer as follows: - -``` -./match_pairs.py --viz --fast_viz -``` - -If you would also like an OpenCV display window to preview the results (you must use non-PDF output and the `--fast_viz` flag), simply run: - -``` -./match_pairs.py --viz --fast_viz --opencv_display -``` - -
- - -## BibTeX Citation -If you use any ideas from the paper or code from this repo, please consider citing: - -```txt -@inproceedings{sarlin20superglue, - author = {Paul-Edouard Sarlin and - Daniel DeTone and - Tomasz Malisiewicz and - Andrew Rabinovich}, - title = {{SuperGlue}: Learning Feature Matching with Graph Neural Networks}, - booktitle = {CVPR}, - year = {2020}, - url = {https://arxiv.org/abs/1911.11763} -} -``` - -## Additional Notes -* For the demo, we found that the keyboard interaction works well with OpenCV 4.1.2.30, older versions were less responsive and the newest version had a [OpenCV bug on Mac](https://stackoverflow.com/questions/60032540/opencv-cv2-imshow-is-not-working-because-of-the-qt) -* We generally do not recommend to run SuperPoint+SuperGlue below 160x120 resolution (QQVGA) and above 2000x1500 -* We do not intend to release the SuperGlue training code. -* We do not intend to release the SIFT-based or homography SuperGlue models. - -## Legal Disclaimer -Magic Leap is proud to provide its latest samples, toolkits, and research projects on Github to foster development and gather feedback from the spatial computing community. Use of the resources within this repo is subject to (a) the license(s) included herein, or (b) if no license is included, Magic Leap's [Developer Agreement](https://id.magicleap.com/terms/developer), which is available on our [Developer Portal](https://developer.magicleap.com/). -If you need more, just ask on the [forums](https://forum.magicleap.com/hc/en-us/community/topics)! -We're thrilled to be part of a well-meaning, friendly and welcoming community of millions. diff --git a/spaces/Riksarkivet/htr_demo/tabs/overview_tab.py b/spaces/Riksarkivet/htr_demo/tabs/overview_tab.py deleted file mode 100644 index a5b321f08d9ed69796e9b28fc23ce4611ab4c430..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/tabs/overview_tab.py +++ /dev/null @@ -1,69 +0,0 @@ -import gradio as gr - -from helper.text.text_overview import TextOverview - -with gr.Blocks() as overview: - with gr.Tabs(): - with gr.Tab("About"): - with gr.Row(): - with gr.Column(): - gr.Markdown(TextOverview.htrflow_col1) - with gr.Column(): - gr.Markdown(TextOverview.htrflow_col2) - with gr.Row(): - gr.Markdown(TextOverview.htrflow_row1) - with gr.Row(): - with gr.Tabs(): - with gr.Tab("Binarization"): - gr.Markdown(TextOverview.htrflow_tab1) - with gr.Tab("Region segmentation"): - gr.Markdown(TextOverview.htrflow_tab2) - with gr.Tab("Line segmentation"): - gr.Markdown(TextOverview.htrflow_tab3) - with gr.Tab("Text recognition"): - gr.Markdown(TextOverview.htrflow_tab4) - - with gr.Tab("Contributions"): - with gr.Row(): - with gr.Column(): - gr.Markdown(TextOverview.contributions) - gr.Markdown(TextOverview.huminfra_image) - - with gr.Tab("Duplicating for own use & API"): - with gr.Row(): - with gr.Column(): - gr.Markdown(TextOverview.duplicate) - - with gr.Column(): - gr.Markdown(TextOverview.api1) - gr.Code( - value=TextOverview.api_code1, - language="python", - interactive=False, - show_label=False, - ) - - gr.Markdown(TextOverview.api2) - - gr.Code( - value=TextOverview.api_code2, - language=None, - interactive=False, - show_label=False, - ) - - with gr.Tab("Changelog & Roadmap"): - with gr.Row(): - with gr.Column(): - gr.Markdown(TextOverview.changelog) - with gr.Accordion("Previous changes", open=False): - gr.Markdown(TextOverview.old_changelog) - with gr.Column(): - gr.Markdown(TextOverview.roadmap) - - with gr.Tab("FAQ & Contact"): - with gr.Row(): - with 
gr.Column(): - gr.Markdown(TextOverview.text_faq) - with gr.Column(): - gr.Markdown(TextOverview.text_discussion) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/sync_bn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/sync_bn.py deleted file mode 100644 index c9b016fcbe860989c56cd1040034bcfa60e146d2..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/sync_bn.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.distributed as dist -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.module import Module -from torch.nn.parameter import Parameter - -from annotator.uniformer.mmcv.cnn import NORM_LAYERS -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sync_bn_forward_mean', 'sync_bn_forward_var', 'sync_bn_forward_output', - 'sync_bn_backward_param', 'sync_bn_backward_data' -]) - - -class SyncBatchNormFunction(Function): - - @staticmethod - def symbolic(g, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - return g.op( - 'mmcv::MMCVSyncBatchNorm', - input, - running_mean, - running_var, - weight, - bias, - momentum_f=momentum, - eps_f=eps, - group_i=group, - group_size_i=group_size, - stats_mode=stats_mode) - - @staticmethod - def forward(self, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - self.momentum = momentum - self.eps = eps - self.group = group - self.group_size = group_size - self.stats_mode = stats_mode - - assert isinstance( - input, (torch.HalfTensor, torch.FloatTensor, - torch.cuda.HalfTensor, torch.cuda.FloatTensor)), \ - f'only support Half or Float Tensor, but {input.type()}' - output = torch.zeros_like(input) - input3d = input.flatten(start_dim=2) - output3d = output.view_as(input3d) - num_channels = input3d.size(1) - - # ensure mean/var/norm/std are initialized as zeros - # ``torch.empty()`` does not guarantee that - mean = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - var = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - norm = torch.zeros_like( - input3d, dtype=torch.float, device=input3d.device) - std = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - - batch_size = input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_forward_mean(input3d, mean) - batch_flag = torch.ones([1], device=mean.device, dtype=mean.dtype) - else: - # skip updating mean and leave it as zeros when the input is empty - batch_flag = torch.zeros([1], device=mean.device, dtype=mean.dtype) - - # synchronize mean and the batch flag - vec = torch.cat([mean, batch_flag]) - if self.stats_mode == 'N': - vec *= batch_size - if self.group_size > 1: - dist.all_reduce(vec, group=self.group) - total_batch = vec[-1].detach() - mean = vec[:num_channels] - - if self.stats_mode == 'default': - mean = mean / self.group_size - elif self.stats_mode == 'N': - mean = mean / total_batch.clamp(min=1) - else: - raise NotImplementedError - - # leave var as zeros when the input is empty - if batch_size > 0: - ext_module.sync_bn_forward_var(input3d, mean, var) - - if self.stats_mode == 'N': - var *= batch_size - if self.group_size > 1: - dist.all_reduce(var, group=self.group) - - if self.stats_mode == 'default': - var /= self.group_size - elif self.stats_mode == 'N': 
- var /= total_batch.clamp(min=1) - else: - raise NotImplementedError - - # if the total batch size over all the ranks is zero, - # we should not update the statistics in the current batch - update_flag = total_batch.clamp(max=1) - momentum = update_flag * self.momentum - ext_module.sync_bn_forward_output( - input3d, - mean, - var, - weight, - bias, - running_mean, - running_var, - norm, - std, - output3d, - eps=self.eps, - momentum=momentum, - group_size=self.group_size) - self.save_for_backward(norm, std, weight) - return output - - @staticmethod - @once_differentiable - def backward(self, grad_output): - norm, std, weight = self.saved_tensors - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(weight) - grad_input = torch.zeros_like(grad_output) - grad_output3d = grad_output.flatten(start_dim=2) - grad_input3d = grad_input.view_as(grad_output3d) - - batch_size = grad_input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_backward_param(grad_output3d, norm, grad_weight, - grad_bias) - - # all reduce - if self.group_size > 1: - dist.all_reduce(grad_weight, group=self.group) - dist.all_reduce(grad_bias, group=self.group) - grad_weight /= self.group_size - grad_bias /= self.group_size - - if batch_size > 0: - ext_module.sync_bn_backward_data(grad_output3d, weight, - grad_weight, grad_bias, norm, std, - grad_input3d) - - return grad_input, None, None, grad_weight, grad_bias, \ - None, None, None, None, None - - -@NORM_LAYERS.register_module(name='MMSyncBN') -class SyncBatchNorm(Module): - """Synchronized Batch Normalization. - - Args: - num_features (int): number of features/chennels in input tensor - eps (float, optional): a value added to the denominator for numerical - stability. Defaults to 1e-5. - momentum (float, optional): the value used for the running_mean and - running_var computation. Defaults to 0.1. - affine (bool, optional): whether to use learnable affine parameters. - Defaults to True. - track_running_stats (bool, optional): whether to track the running - mean and variance during training. When set to False, this - module does not track such statistics, and initializes statistics - buffers ``running_mean`` and ``running_var`` as ``None``. When - these buffers are ``None``, this module always uses batch - statistics in both training and eval modes. Defaults to True. - group (int, optional): synchronization of stats happen within - each process group individually. By default it is synchronization - across the whole world. Defaults to None. - stats_mode (str, optional): The statistical mode. Available options - includes ``'default'`` and ``'N'``. Defaults to 'default'. - When ``stats_mode=='default'``, it computes the overall statistics - using those from each worker with equal weight, i.e., the - statistics are synchronized and simply divied by ``group``. This - mode will produce inaccurate statistics when empty tensors occur. - When ``stats_mode=='N'``, it compute the overall statistics using - the total number of batches in each worker ignoring the number of - group, i.e., the statistics are synchronized and then divied by - the total batch ``N``. This mode is beneficial when empty tensors - occur during training, as it average the total mean by the real - number of batch. 
- """ - - def __init__(self, - num_features, - eps=1e-5, - momentum=0.1, - affine=True, - track_running_stats=True, - group=None, - stats_mode='default'): - super(SyncBatchNorm, self).__init__() - self.num_features = num_features - self.eps = eps - self.momentum = momentum - self.affine = affine - self.track_running_stats = track_running_stats - group = dist.group.WORLD if group is None else group - self.group = group - self.group_size = dist.get_world_size(group) - assert stats_mode in ['default', 'N'], \ - f'"stats_mode" only accepts "default" and "N", got "{stats_mode}"' - self.stats_mode = stats_mode - if self.affine: - self.weight = Parameter(torch.Tensor(num_features)) - self.bias = Parameter(torch.Tensor(num_features)) - else: - self.register_parameter('weight', None) - self.register_parameter('bias', None) - if self.track_running_stats: - self.register_buffer('running_mean', torch.zeros(num_features)) - self.register_buffer('running_var', torch.ones(num_features)) - self.register_buffer('num_batches_tracked', - torch.tensor(0, dtype=torch.long)) - else: - self.register_buffer('running_mean', None) - self.register_buffer('running_var', None) - self.register_buffer('num_batches_tracked', None) - self.reset_parameters() - - def reset_running_stats(self): - if self.track_running_stats: - self.running_mean.zero_() - self.running_var.fill_(1) - self.num_batches_tracked.zero_() - - def reset_parameters(self): - self.reset_running_stats() - if self.affine: - self.weight.data.uniform_() # pytorch use ones_() - self.bias.data.zero_() - - def forward(self, input): - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input, got {input.dim()}D input') - if self.momentum is None: - exponential_average_factor = 0.0 - else: - exponential_average_factor = self.momentum - - if self.training and self.track_running_stats: - if self.num_batches_tracked is not None: - self.num_batches_tracked += 1 - if self.momentum is None: # use cumulative moving average - exponential_average_factor = 1.0 / float( - self.num_batches_tracked) - else: # use exponential moving average - exponential_average_factor = self.momentum - - if self.training or not self.track_running_stats: - return SyncBatchNormFunction.apply( - input, self.running_mean, self.running_var, self.weight, - self.bias, exponential_average_factor, self.eps, self.group, - self.group_size, self.stats_mode) - else: - return F.batch_norm(input, self.running_mean, self.running_var, - self.weight, self.bias, False, - exponential_average_factor, self.eps) - - def __repr__(self): - s = self.__class__.__name__ - s += f'({self.num_features}, ' - s += f'eps={self.eps}, ' - s += f'momentum={self.momentum}, ' - s += f'affine={self.affine}, ' - s += f'track_running_stats={self.track_running_stats}, ' - s += f'group_size={self.group_size},' - s += f'stats_mode={self.stats_mode})' - return s diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/loading.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/loading.py deleted file mode 100644 index cfae701da3dd48c9a02e11b6a6f7cc627221fede..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/loading.py +++ /dev/null @@ -1,458 +0,0 @@ -import os.path as osp - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils - -from annotator.uniformer.mmdet.core import BitmapMasks, PolygonMasks -from ..builder import PIPELINES - - 
-@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes(img_bytes, flag=self.color_type) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromWebcam(LoadImageFromFile): - """Load an image from webcam. - - Similar with :obj:`LoadImageFromFile`, but the image read from webcam is in - ``results['img']``. - """ - - def __call__(self, results): - """Call functions to add image meta information. - - Args: - results (dict): Result dict with Webcam read image in - ``results['img']``. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - img = results['img'] - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = None - results['ori_filename'] = None - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - -@PIPELINES.register_module() -class LoadMultiChannelImageFromFiles(object): - """Load multi-channel images from a list of separate channel files. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename", which is expected to be a list of filenames). - Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. 
If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='unchanged', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load multiple images and get images meta - information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded images and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = [ - osp.join(results['img_prefix'], fname) - for fname in results['img_info']['filename'] - ] - else: - filename = results['img_info']['filename'] - - img = [] - for name in filename: - img_bytes = self.file_client.get(name) - img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type)) - img = np.stack(img, axis=-1) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load mutiple types of annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: False. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: False. - poly2mask (bool): Whether to convert the instance masks from polygons - to bitmaps. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=False, - with_seg=False, - poly2mask=True, - file_client_args=dict(backend='disk')): - self.with_bbox = with_bbox - self.with_label = with_label - self.with_mask = with_mask - self.with_seg = with_seg - self.poly2mask = poly2mask - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_bboxes(self, results): - """Private function to load bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box annotations. 
- """ - - ann_info = results['ann_info'] - results['gt_bboxes'] = ann_info['bboxes'].copy() - - gt_bboxes_ignore = ann_info.get('bboxes_ignore', None) - if gt_bboxes_ignore is not None: - results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy() - results['bbox_fields'].append('gt_bboxes_ignore') - results['bbox_fields'].append('gt_bboxes') - return results - - def _load_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded label annotations. - """ - - results['gt_labels'] = results['ann_info']['labels'].copy() - return results - - def _poly2mask(self, mask_ann, img_h, img_w): - """Private function to convert masks represented with polygon to - bitmaps. - - Args: - mask_ann (list | dict): Polygon mask annotation input. - img_h (int): The height of output mask. - img_w (int): The width of output mask. - - Returns: - numpy.ndarray: The decode bitmap mask of shape (img_h, img_w). - """ - - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - def process_polygons(self, polygons): - """Convert polygons to list of ndarray and filter invalid polygons. - - Args: - polygons (list[list]): Polygons of one instance. - - Returns: - list[numpy.ndarray]: Processed polygons. - """ - - polygons = [np.array(p) for p in polygons] - valid_polygons = [] - for polygon in polygons: - if len(polygon) % 2 == 0 and len(polygon) >= 6: - valid_polygons.append(polygon) - return valid_polygons - - def _load_masks(self, results): - """Private function to load mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask annotations. - If ``self.poly2mask`` is set ``True``, `gt_mask` will contain - :obj:`PolygonMasks`. Otherwise, :obj:`BitmapMasks` is used. - """ - - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = results['ann_info']['masks'] - if self.poly2mask: - gt_masks = BitmapMasks( - [self._poly2mask(mask, h, w) for mask in gt_masks], h, w) - else: - gt_masks = PolygonMasks( - [self.process_polygons(polygons) for polygons in gt_masks], h, - w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - return results - - def _load_semantic_seg(self, results): - """Private function to load semantic segmentation annotations. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - results['gt_semantic_seg'] = mmcv.imfrombytes( - img_bytes, flag='unchanged').squeeze() - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. 
- - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask: - results = self._load_masks(results) - if self.with_seg: - results = self._load_semantic_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(with_bbox={self.with_bbox}, ' - repr_str += f'with_label={self.with_label}, ' - repr_str += f'with_mask={self.with_mask}, ' - repr_str += f'with_seg={self.with_seg}, ' - repr_str += f'poly2mask={self.poly2mask}, ' - repr_str += f'poly2mask={self.file_client_args})' - return repr_str - - -@PIPELINES.register_module() -class LoadProposals(object): - """Load proposal pipeline. - - Required key is "proposals". Updated keys are "proposals", "bbox_fields". - - Args: - num_max_proposals (int, optional): Maximum number of proposals to load. - If not specified, all proposals will be loaded. - """ - - def __init__(self, num_max_proposals=None): - self.num_max_proposals = num_max_proposals - - def __call__(self, results): - """Call function to load proposals from file. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded proposal annotations. - """ - - proposals = results['proposals'] - if proposals.shape[1] not in (4, 5): - raise AssertionError( - 'proposals should have shapes (n, 4) or (n, 5), ' - f'but found {proposals.shape}') - proposals = proposals[:, :4] - - if self.num_max_proposals is not None: - proposals = proposals[:self.num_max_proposals] - - if len(proposals) == 0: - proposals = np.array([[0, 0, 0, 0]], dtype=np.float32) - results['proposals'] = proposals - results['bbox_fields'].append('proposals') - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(num_max_proposals={self.num_max_proposals})' - - -@PIPELINES.register_module() -class FilterAnnotations(object): - """Filter invalid annotations. - - Args: - min_gt_bbox_wh (tuple[int]): Minimum width and height of ground truth - boxes. 
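# The loader transforms above (LoadImageFromFile, LoadAnnotations, LoadProposals)
# are normally not called directly but composed into a dataset pipeline. The sketch
# below is a hypothetical mmdet-style train pipeline; the downstream transform names
# and all parameter values are illustrative assumptions, not settings taken from
# this repository.
train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=False, color_type='color'),
    dict(type='LoadAnnotations', with_bbox=True, with_label=True, with_mask=False),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize',
         mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]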
- """ - - def __init__(self, min_gt_bbox_wh): - # TODO: add more filter options - self.min_gt_bbox_wh = min_gt_bbox_wh - - def __call__(self, results): - assert 'gt_bboxes' in results - gt_bboxes = results['gt_bboxes'] - w = gt_bboxes[:, 2] - gt_bboxes[:, 0] - h = gt_bboxes[:, 3] - gt_bboxes[:, 1] - keep = (w > self.min_gt_bbox_wh[0]) & (h > self.min_gt_bbox_wh[1]) - if not keep.any(): - return None - else: - keys = ('gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg') - for key in keys: - if key in results: - results[key] = results[key][keep] - return results diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/ui/__init__.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/ui/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ShapeNet/shapenet-explorer/app.py b/spaces/ShapeNet/shapenet-explorer/app.py deleted file mode 100644 index 117353d7726ab43287f0e24b865a83afa5dc8646..0000000000000000000000000000000000000000 --- a/spaces/ShapeNet/shapenet-explorer/app.py +++ /dev/null @@ -1,81 +0,0 @@ -import gradio as gr -from collections import defaultdict -import os -from huggingface_hub import HfApi, hf_hub_download - - -HUB_TOKEN = os.getenv("HUB_TOKEN") -REPO_ID = "ShapeNet/shapenetcore-gltf" - - -def get_dataset_classes(): - hf_api = HfApi() - info = hf_api.dataset_info(repo_id=REPO_ID, token=HUB_TOKEN) - dataset_classes = defaultdict(list) - - for file in info.siblings: - if ".gltf" in file.rfilename: - class_name = file.rfilename.split("/")[0] - dataset_classes[class_name].append(file.rfilename) - - print(dataset_classes) - return dataset_classes - - -dataset_dict = get_dataset_classes() -dataset_classes = list(dataset_dict.keys()) -default_models = dataset_dict[dataset_classes[0]] - - -def load_mesh(mesh_file_name): - return mesh_file_name, mesh_file_name - - -def update(asset_name): - split_model_path = asset_name.split("/") - asset_path = hf_hub_download( - repo_id=REPO_ID, - filename=split_model_path[1], - subfolder=split_model_path[0], - repo_type="dataset", - use_auth_token=HUB_TOKEN, - ) - print(asset_name) - return asset_path - - -def update_models(class_name): - model_choices = dataset_dict[class_name] - return gr.Dropdown.update(choices=model_choices) - - -def update_model_list(class_name): - model_choices = dataset_dict[class_name] - return gr.Dropdown.update(choices=model_choices, value=model_choices[0]) - - -def update_asset_path(model_name): - return REPO_ID + "/" + model_name - - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - inp = gr.Dropdown(choices=dataset_classes, interactive=True, label="3D Model Class", value=dataset_classes[0]) - out1 = gr.Dropdown(choices=default_models, interactive=True, label="3D Model", value=default_models[0]) - out3 = gr.Textbox(value=REPO_ID + "/" + out1.value, label="Asset Path") - out2 = gr.Model3D(clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model") - - # Update second dropdown - inp.change(fn=update_model_list, inputs=inp, outputs=out1) - - # Update 3D model view - inp.change(fn=update, inputs=out1, outputs=out2) - out1.change(fn=update, inputs=out1, outputs=out2) - - # Update path to asset - inp.change(fn=update_asset_path, inputs=out1, outputs=out3) - out1.change(fn=update_asset_path, inputs=out1, outputs=out3) - - -demo.launch() \ No newline at end of file diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/ms_deform_attn.py 
b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/ms_deform_attn.py deleted file mode 100644 index 489d501bef364020212306d81e9b85c8daa27491..0000000000000000000000000000000000000000 --- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/ms_deform_attn.py +++ /dev/null @@ -1,413 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from: -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py -# ------------------------------------------------------------------------------------------------ - -import math -import warnings -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.init import constant_, xavier_uniform_ - -try: - from groundingdino import _C -except: - warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!") - - -# helpers -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - -class MultiScaleDeformableAttnFunction(Function): - @staticmethod - def forward( - ctx, - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step, - ): - ctx.im2col_step = im2col_step - output = _C.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ctx.im2col_step, - ) - ctx.save_for_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - ( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output, - ctx.im2col_step, - ) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch( - value: torch.Tensor, - value_spatial_shapes: torch.Tensor, - sampling_locations: torch.Tensor, - attention_weights: torch.Tensor, -) -> torch.Tensor: - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in 
enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = ( - value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_) - ) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False - ) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points - ) - output = ( - (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights) - .sum(-1) - .view(bs, num_heads * embed_dims, num_queries) - ) - return output.transpose(1, 2).contiguous() - - -class MultiScaleDeformableAttention(nn.Module): - """Multi-Scale Deformable Attention Module used in Deformable-DETR - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dim (int): The embedding dimension of Attention. Default: 256. - num_heads (int): The number of attention heads. Default: 8. - num_levels (int): The number of feature map used in Attention. Default: 4. - num_points (int): The number of sampling points for each query - in each head. Default: 4. - img2col_steps (int): The step used in image_to_column. Defualt: 64. - dropout (float): Dropout layer used in output. Default: 0.1. - batch_first (bool): if ``True``, then the input and output tensor will be - provided as `(bs, n, embed_dim)`. Default: False. `(n, bs, embed_dim)` - """ - - def __init__( - self, - embed_dim: int = 256, - num_heads: int = 8, - num_levels: int = 4, - num_points: int = 4, - img2col_step: int = 64, - batch_first: bool = False, - ): - super().__init__() - if embed_dim % num_heads != 0: - raise ValueError( - "embed_dim must be divisible by num_heads, but got {} and {}".format( - embed_dim, num_heads - ) - ) - head_dim = embed_dim // num_heads - - self.batch_first = batch_first - - if not _is_power_of_2(head_dim): - warnings.warn( - """ - You'd better set d_model in MSDeformAttn to make sure that - each dim of the attention head a power of 2, which is more efficient. - """ - ) - - self.im2col_step = img2col_step - self.embed_dim = embed_dim - self.num_heads = num_heads - self.num_levels = num_levels - self.num_points = num_points - self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dim, embed_dim) - self.output_proj = nn.Linear(embed_dim, embed_dim) - - self.init_weights() - - def _reset_parameters(self): - return self.init_weights() - - def init_weights(self): - """ - Default initialization for Parameters of Module. 
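# A quick sanity check of the pure-PyTorch fallback multi_scale_deformable_attn_pytorch
# defined above, assuming this module's definitions are importable. All sizes below are
# arbitrary assumptions; the point is only that the tensor shapes match what the
# implementation expects.
import torch

bs, num_heads, embed_dims = 2, 8, 32                 # embed_dims is the per-head dim
spatial_shapes = torch.tensor([[32, 32], [16, 16]])  # two feature levels (H, W)
num_value = int((spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum())
num_queries, num_levels, num_points = 100, 2, 4

value = torch.rand(bs, num_value, num_heads, embed_dims)
sampling_locations = torch.rand(bs, num_queries, num_heads, num_levels, num_points, 2)
attention_weights = torch.rand(bs, num_queries, num_heads, num_levels, num_points)
attention_weights = attention_weights / attention_weights.sum(dim=(-2, -1), keepdim=True)

out = multi_scale_deformable_attn_pytorch(
    value, spatial_shapes, sampling_locations, attention_weights)
print(out.shape)  # torch.Size([2, 100, 256]), i.e. (bs, num_queries, num_heads * embed_dims)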
- """ - constant_(self.sampling_offsets.weight.data, 0.0) - thetas = torch.arange(self.num_heads, dtype=torch.float32) * ( - 2.0 * math.pi / self.num_heads - ) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = ( - (grid_init / grid_init.abs().max(-1, keepdim=True)[0]) - .view(self.num_heads, 1, 1, 2) - .repeat(1, self.num_levels, self.num_points, 1) - ) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.0) - constant_(self.attention_weights.bias.data, 0.0) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.0) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.0) - - def freeze_sampling_offsets(self): - print("Freeze sampling offsets") - self.sampling_offsets.weight.requires_grad = False - self.sampling_offsets.bias.requires_grad = False - - def freeze_attention_weights(self): - print("Freeze attention weights") - self.attention_weights.weight.requires_grad = False - self.attention_weights.bias.requires_grad = False - - def forward( - self, - query: torch.Tensor, - key: Optional[torch.Tensor] = None, - value: Optional[torch.Tensor] = None, - query_pos: Optional[torch.Tensor] = None, - key_padding_mask: Optional[torch.Tensor] = None, - reference_points: Optional[torch.Tensor] = None, - spatial_shapes: Optional[torch.Tensor] = None, - level_start_index: Optional[torch.Tensor] = None, - **kwargs - ) -> torch.Tensor: - - """Forward Function of MultiScaleDeformableAttention - - Args: - query (torch.Tensor): Query embeddings with shape - `(num_query, bs, embed_dim)` - key (torch.Tensor): Key embeddings with shape - `(num_key, bs, embed_dim)` - value (torch.Tensor): Value embeddings with shape - `(num_key, bs, embed_dim)` - query_pos (torch.Tensor): The position embedding for `query`. Default: None. - key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`, - indicating which elements within `key` to be ignored in attention. - reference_points (torch.Tensor): The normalized reference points - with shape `(bs, num_query, num_levels, 2)`, - all elements is range in [0, 1], top-left (0, 0), - bottom-right (1, 1), including padding are. - or `(N, Length_{query}, num_levels, 4)`, add additional - two dimensions `(h, w)` to form reference boxes. - spatial_shapes (torch.Tensor): Spatial shape of features in different levels. - With shape `(num_levels, 2)`, last dimension represents `(h, w)`. - level_start_index (torch.Tensor): The start index of each level. A tensor with - shape `(num_levels, )` which can be represented as - `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`. 
- - Returns: - torch.Tensor: forward results with shape `(num_query, bs, embed_dim)` - """ - - if value is None: - value = query - - if query_pos is not None: - query = query + query_pos - - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], float(0)) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2 - ) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points - ) - attention_weights = attention_weights.softmax(-1) - attention_weights = attention_weights.view( - bs, - num_query, - self.num_heads, - self.num_levels, - self.num_points, - ) - - # bs, num_query, num_heads, num_levels, num_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = ( - reference_points[:, :, None, :, None, :] - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - ) - elif reference_points.shape[-1] == 4: - sampling_locations = ( - reference_points[:, :, None, :, None, :2] - + sampling_offsets - / self.num_points - * reference_points[:, :, None, :, None, 2:] - * 0.5 - ) - else: - raise ValueError( - "Last dim of reference_points must be 2 or 4, but get {} instead.".format( - reference_points.shape[-1] - ) - ) - - if torch.cuda.is_available() and value.is_cuda: - halffloat = False - if value.dtype == torch.float16: - halffloat = True - value = value.float() - sampling_locations = sampling_locations.float() - attention_weights = attention_weights.float() - - output = MultiScaleDeformableAttnFunction.apply( - value, - spatial_shapes, - level_start_index, - sampling_locations, - attention_weights, - self.im2col_step, - ) - - if halffloat: - output = output.half() - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights - ) - - output = self.output_proj(output) - - if not self.batch_first: - output = output.permute(1, 0, 2) - - return output - - -def create_dummy_class(klass, dependency, message=""): - """ - When a dependency of a class is not available, create a dummy class which throws ImportError - when used. - - Args: - klass (str): name of the class. - dependency (str): name of the dependency. - message: extra message to print - Returns: - class: a class object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass) - if message: - err = err + " " + message - - class _DummyMetaClass(type): - # throw error on class attribute access - def __getattr__(_, __): # noqa: B902 - raise ImportError(err) - - class _Dummy(object, metaclass=_DummyMetaClass): - # throw error on constructor - def __init__(self, *args, **kwargs): - raise ImportError(err) - - return _Dummy - - -def create_dummy_func(func, dependency, message=""): - """ - When a dependency of a function is not available, create a dummy function which throws - ImportError when used. - - Args: - func (str): name of the function. - dependency (str or list[str]): name(s) of the dependency. 
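# A hypothetical smoke test for MultiScaleDeformableAttention as defined above (again
# assuming the module is importable). Shapes follow the forward() docstring with the
# default batch_first=False layout; on a CPU-only machine the call falls through to the
# pure-PyTorch branch rather than the custom CUDA kernel.
import torch

embed_dim, num_heads, num_levels, num_points = 256, 8, 2, 4
attn = MultiScaleDeformableAttention(embed_dim, num_heads, num_levels, num_points)

spatial_shapes = torch.tensor([[32, 32], [16, 16]])
num_value = int((spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum())
level_start_index = torch.cat([
    spatial_shapes.new_zeros(1),
    (spatial_shapes[:, 0] * spatial_shapes[:, 1]).cumsum(0)[:-1],
])

bs, num_query = 2, 50
query = torch.rand(num_query, bs, embed_dim)   # (num_query, bs, embed_dim)
value = torch.rand(num_value, bs, embed_dim)   # (num_key, bs, embed_dim)
reference_points = torch.rand(bs, num_query, num_levels, 2)

out = attn(query=query, value=value, reference_points=reference_points,
           spatial_shapes=spatial_shapes, level_start_index=level_start_index)
print(out.shape)  # torch.Size([50, 2, 256])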
- message: extra message to print - Returns: - function: a function object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func) - if message: - err = err + " " + message - - if isinstance(dependency, (list, tuple)): - dependency = ",".join(dependency) - - def _dummy(*args, **kwargs): - raise ImportError(err) - - return _dummy diff --git a/spaces/SpacesExamples/secret-example/Dockerfile b/spaces/SpacesExamples/secret-example/Dockerfile deleted file mode 100644 index 8f8ade48624fb1d93601d3da2fb033ae0ae31e58..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/secret-example/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY ./main.py /code/app/main.py - -# Get secret EXAMPLE and output it to /test at buildtime -RUN --mount=type=secret,id=EXAMPLE,mode=0444,required=true \ - cat /run/secrets/EXAMPLE > /test - -# Get secret SECRET_EXAMPLE and clone it as repo at buildtime -RUN --mount=type=secret,id=SECRET_EXAMPLE,mode=0444,required=true \ - git clone $(cat /run/secrets/SECRET_EXAMPLE) - -CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/utils/__init__.py b/spaces/SuYuanS/AudioCraft_Plus/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/eventful.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/eventful.py deleted file mode 100644 index 837c6e034429e51fd51b52c17090481177670e2b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/eventful.py +++ /dev/null @@ -1,5 +0,0 @@ -from warnings import warn - -warn("IPython.utils.eventful has moved to traitlets.eventful", stacklevel=2) - -from traitlets.eventful import * diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/utils/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/sessions.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/sessions.py deleted file mode 100644 index 0abebcc8c359d4d2fbca4cb9d6cbf77b8282a5b0..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/sessions.py +++ /dev/null @@ -1,284 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. 
- -import itertools -import os -import signal -import threading -import time - -from debugpy import common -from debugpy.common import log, util -from debugpy.adapter import components, launchers, servers - - -_lock = threading.RLock() -_sessions = set() -_sessions_changed = threading.Event() - - -class Session(util.Observable): - """A debug session involving a client, an adapter, a launcher, and a debug server. - - The client and the adapter are always present, and at least one of launcher and debug - server is present, depending on the scenario. - """ - - _counter = itertools.count(1) - - def __init__(self): - from debugpy.adapter import clients - - super().__init__() - - self.lock = threading.RLock() - self.id = next(self._counter) - self._changed_condition = threading.Condition(self.lock) - - self.client = components.missing(self, clients.Client) - """The client component. Always present.""" - - self.launcher = components.missing(self, launchers.Launcher) - """The launcher componet. Always present in "launch" sessions, and never - present in "attach" sessions. - """ - - self.server = components.missing(self, servers.Server) - """The debug server component. Always present, unless this is a "launch" - session with "noDebug". - """ - - self.no_debug = None - """Whether this is a "noDebug" session.""" - - self.pid = None - """Process ID of the debuggee process.""" - - self.debug_options = {} - """Debug options as specified by "launch" or "attach" request.""" - - self.is_finalizing = False - """Whether finalize() has been invoked.""" - - self.observers += [lambda *_: self.notify_changed()] - - def __str__(self): - return f"Session[{self.id}]" - - def __enter__(self): - """Lock the session for exclusive access.""" - self.lock.acquire() - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - """Unlock the session.""" - self.lock.release() - - def register(self): - with _lock: - _sessions.add(self) - _sessions_changed.set() - - def notify_changed(self): - with self: - self._changed_condition.notify_all() - - # A session is considered ended once all components disconnect, and there - # are no further incoming messages from anything to handle. - components = self.client, self.launcher, self.server - if all(not com or not com.is_connected for com in components): - with _lock: - if self in _sessions: - log.info("{0} has ended.", self) - _sessions.remove(self) - _sessions_changed.set() - - def wait_for(self, predicate, timeout=None): - """Waits until predicate() becomes true. - - The predicate is invoked with the session locked. If satisfied, the method - returns immediately. Otherwise, the lock is released (even if it was held - at entry), and the method blocks waiting for some attribute of either self, - self.client, self.server, or self.launcher to change. On every change, session - is re-locked and predicate is re-evaluated, until it is satisfied. - - While the session is unlocked, message handlers for components other than - the one that is waiting can run, but message handlers for that one are still - blocked. - - If timeout is not None, the method will unblock and return after that many - seconds regardless of whether the predicate was satisfied. The method returns - False if it timed out, and True otherwise. 
- """ - - def wait_for_timeout(): - time.sleep(timeout) - wait_for_timeout.timed_out = True - self.notify_changed() - - wait_for_timeout.timed_out = False - if timeout is not None: - thread = threading.Thread( - target=wait_for_timeout, name="Session.wait_for() timeout" - ) - thread.daemon = True - thread.start() - - with self: - while not predicate(): - if wait_for_timeout.timed_out: - return False - self._changed_condition.wait() - return True - - def finalize(self, why, terminate_debuggee=None): - """Finalizes the debug session. - - If the server is present, sends "disconnect" request with "terminateDebuggee" - set as specified request to it; waits for it to disconnect, allowing any - remaining messages from it to be handled; and closes the server channel. - - If the launcher is present, sends "terminate" request to it, regardless of the - value of terminate; waits for it to disconnect, allowing any remaining messages - from it to be handled; and closes the launcher channel. - - If the client is present, sends "terminated" event to it. - - If terminate_debuggee=None, it is treated as True if the session has a Launcher - component, and False otherwise. - """ - - if self.is_finalizing: - return - self.is_finalizing = True - log.info("{0}; finalizing {1}.", why, self) - - if terminate_debuggee is None: - terminate_debuggee = bool(self.launcher) - - try: - self._finalize(why, terminate_debuggee) - except Exception: - # Finalization should never fail, and if it does, the session is in an - # indeterminate and likely unrecoverable state, so just fail fast. - log.swallow_exception("Fatal error while finalizing {0}", self) - os._exit(1) - - log.info("{0} finalized.", self) - - def _finalize(self, why, terminate_debuggee): - # If the client started a session, and then disconnected before issuing "launch" - # or "attach", the main thread will be blocked waiting for the first server - # connection to come in - unblock it, so that we can exit. - servers.dont_wait_for_first_connection() - - if self.server: - if self.server.is_connected: - if terminate_debuggee and self.launcher and self.launcher.is_connected: - # If we were specifically asked to terminate the debuggee, and we - # can ask the launcher to kill it, do so instead of disconnecting - # from the server to prevent debuggee from running any more code. - self.launcher.terminate_debuggee() - else: - # Otherwise, let the server handle it the best it can. - try: - self.server.channel.request( - "disconnect", {"terminateDebuggee": terminate_debuggee} - ) - except Exception: - pass - self.server.detach_from_session() - - if self.launcher and self.launcher.is_connected: - # If there was a server, we just disconnected from it above, which should - # cause the debuggee process to exit, unless it is being replaced in situ - - # so let's wait for that first. - if self.server and not self.server.connection.process_replaced: - log.info('{0} waiting for "exited" event...', self) - if not self.wait_for( - lambda: self.launcher.exit_code is not None, - timeout=common.PROCESS_EXIT_TIMEOUT, - ): - log.warning('{0} timed out waiting for "exited" event.', self) - - # Terminate the debuggee process if it's still alive for any reason - - # whether it's because there was no server to handle graceful shutdown, - # or because the server couldn't handle it for some reason - unless the - # process is being replaced in situ. 
- if not (self.server and self.server.connection.process_replaced): - self.launcher.terminate_debuggee() - - # Wait until the launcher message queue fully drains. There is no timeout - # here, because the final "terminated" event will only come after reading - # user input in wait-on-exit scenarios. In addition, if the process was - # replaced in situ, the launcher might still have more output to capture - # from its replacement. - log.info("{0} waiting for {1} to disconnect...", self, self.launcher) - self.wait_for(lambda: not self.launcher.is_connected) - - try: - self.launcher.channel.close() - except Exception: - log.swallow_exception() - - if self.client: - if self.client.is_connected: - # Tell the client that debugging is over, but don't close the channel until it - # tells us to, via the "disconnect" request. - body = {} - if self.client.restart_requested: - body["restart"] = True - try: - self.client.channel.send_event("terminated", body) - except Exception: - pass - - if ( - self.client.start_request is not None - and self.client.start_request.command == "launch" - and not (self.server and self.server.connection.process_replaced) - ): - servers.stop_serving() - log.info( - '"launch" session ended - killing remaining debuggee processes.' - ) - - pids_killed = set() - if self.launcher and self.launcher.pid is not None: - # Already killed above. - pids_killed.add(self.launcher.pid) - - while True: - conns = [ - conn - for conn in servers.connections() - if conn.pid not in pids_killed - ] - if not len(conns): - break - for conn in conns: - log.info("Killing {0}", conn) - try: - os.kill(conn.pid, signal.SIGTERM) - except Exception: - log.swallow_exception("Failed to kill {0}", conn) - pids_killed.add(conn.pid) - - -def get(pid): - with _lock: - return next((session for session in _sessions if session.pid == pid), None) - - -def wait_until_ended(): - """Blocks until all sessions have ended. - - A session ends when all components that it manages disconnect from it. 
- """ - while True: - with _lock: - if not len(_sessions): - return - _sessions_changed.clear() - _sessions_changed.wait() diff --git a/spaces/Superlang/ImageProcessor/annotator/leres/leres/Resnet.py b/spaces/Superlang/ImageProcessor/annotator/leres/leres/Resnet.py deleted file mode 100644 index f12c9975c1aa05401269be3ca3dbaa56bde55581..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/leres/leres/Resnet.py +++ /dev/null @@ -1,199 +0,0 @@ -import torch.nn as nn -import torch.nn as NN - -__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', - 'resnet152'] - - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth', - 'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = NN.BatchNorm2d(planes) #NN.BatchNorm2d - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = NN.BatchNorm2d(planes) #NN.BatchNorm2d - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = NN.BatchNorm2d(planes) #NN.BatchNorm2d - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=1, bias=False) - self.bn2 = NN.BatchNorm2d(planes) #NN.BatchNorm2d - self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = NN.BatchNorm2d(planes * self.expansion) #NN.BatchNorm2d - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, block, layers, num_classes=1000): - self.inplanes = 64 - super(ResNet, self).__init__() - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = NN.BatchNorm2d(64) #NN.BatchNorm2d - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, 
layers[3], stride=2) - #self.avgpool = nn.AvgPool2d(7, stride=1) - #self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - NN.BatchNorm2d(planes * block.expansion), #NN.BatchNorm2d - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - features = [] - - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - features.append(x) - x = self.layer2(x) - features.append(x) - x = self.layer3(x) - features.append(x) - x = self.layer4(x) - features.append(x) - - return features - - -def resnet18(pretrained=True, **kwargs): - """Constructs a ResNet-18 model. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs) - return model - - -def resnet34(pretrained=True, **kwargs): - """Constructs a ResNet-34 model. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs) - return model - - -def resnet50(pretrained=True, **kwargs): - """Constructs a ResNet-50 model. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) - - return model - - -def resnet101(pretrained=True, **kwargs): - """Constructs a ResNet-101 model. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs) - - return model - - -def resnet152(pretrained=True, **kwargs): - """Constructs a ResNet-152 model. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs) - return model diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/checkpoint/catalog.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/checkpoint/catalog.py deleted file mode 100644 index b5641858fea4936ad10b07a4237faba78dda77ff..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/checkpoint/catalog.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging - -from annotator.oneformer.detectron2.utils.file_io import PathHandler, PathManager - - -class ModelCatalog(object): - """ - Store mappings from names to third-party models. - """ - - S3_C2_DETECTRON_PREFIX = "https://dl.fbaipublicfiles.com/detectron" - - # MSRA models have STRIDE_IN_1X1=True. False otherwise. - # NOTE: all BN models here have fused BN into an affine layer. - # As a result, you should only load them to a model with "FrozenBN". - # Loading them to a model with regular BN or SyncBN is wrong. - # Even when loaded to FrozenBN, it is still different from affine by an epsilon, - # which should be negligible for training. 
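# Unlike the torchvision classification ResNets, the backbone defined in Resnet.py above
# removes the average-pool and fully-connected head and returns the four stage outputs
# instead. A hypothetical smoke test, assuming the factory functions above are importable
# (the pretrained flag is accepted but no weights are loaded here):
import torch

backbone = resnet50(pretrained=False)
feats = backbone(torch.rand(1, 3, 224, 224))
for f in feats:
    print(f.shape)
# Strides 4/8/16/32: (1, 256, 56, 56), (1, 512, 28, 28), (1, 1024, 14, 14), (1, 2048, 7, 7)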
- # NOTE: all models here uses PIXEL_STD=[1,1,1] - # NOTE: Most of the BN models here are no longer used. We use the - # re-converted pre-trained models under detectron2 model zoo instead. - C2_IMAGENET_MODELS = { - "MSRA/R-50": "ImageNetPretrained/MSRA/R-50.pkl", - "MSRA/R-101": "ImageNetPretrained/MSRA/R-101.pkl", - "FAIR/R-50-GN": "ImageNetPretrained/47261647/R-50-GN.pkl", - "FAIR/R-101-GN": "ImageNetPretrained/47592356/R-101-GN.pkl", - "FAIR/X-101-32x8d": "ImageNetPretrained/20171220/X-101-32x8d.pkl", - "FAIR/X-101-64x4d": "ImageNetPretrained/FBResNeXt/X-101-64x4d.pkl", - "FAIR/X-152-32x8d-IN5k": "ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl", - } - - C2_DETECTRON_PATH_FORMAT = ( - "{prefix}/{url}/output/train/{dataset}/{type}/model_final.pkl" # noqa B950 - ) - - C2_DATASET_COCO = "coco_2014_train%3Acoco_2014_valminusminival" - C2_DATASET_COCO_KEYPOINTS = "keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival" - - # format: {model_name} -> part of the url - C2_DETECTRON_MODELS = { - "35857197/e2e_faster_rcnn_R-50-C4_1x": "35857197/12_2017_baselines/e2e_faster_rcnn_R-50-C4_1x.yaml.01_33_49.iAX0mXvW", # noqa B950 - "35857345/e2e_faster_rcnn_R-50-FPN_1x": "35857345/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_1x.yaml.01_36_30.cUF7QR7I", # noqa B950 - "35857890/e2e_faster_rcnn_R-101-FPN_1x": "35857890/12_2017_baselines/e2e_faster_rcnn_R-101-FPN_1x.yaml.01_38_50.sNxI7sX7", # noqa B950 - "36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x": "36761737/12_2017_baselines/e2e_faster_rcnn_X-101-32x8d-FPN_1x.yaml.06_31_39.5MIHi1fZ", # noqa B950 - "35858791/e2e_mask_rcnn_R-50-C4_1x": "35858791/12_2017_baselines/e2e_mask_rcnn_R-50-C4_1x.yaml.01_45_57.ZgkA7hPB", # noqa B950 - "35858933/e2e_mask_rcnn_R-50-FPN_1x": "35858933/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml.01_48_14.DzEQe4wC", # noqa B950 - "35861795/e2e_mask_rcnn_R-101-FPN_1x": "35861795/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_1x.yaml.02_31_37.KqyEK4tT", # noqa B950 - "36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x": "36761843/12_2017_baselines/e2e_mask_rcnn_X-101-32x8d-FPN_1x.yaml.06_35_59.RZotkLKI", # noqa B950 - "48616381/e2e_mask_rcnn_R-50-FPN_2x_gn": "GN/48616381/04_2018_gn_baselines/e2e_mask_rcnn_R-50-FPN_2x_gn_0416.13_23_38.bTlTI97Q", # noqa B950 - "37697547/e2e_keypoint_rcnn_R-50-FPN_1x": "37697547/12_2017_baselines/e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao", # noqa B950 - "35998355/rpn_R-50-C4_1x": "35998355/12_2017_baselines/rpn_R-50-C4_1x.yaml.08_00_43.njH5oD9L", # noqa B950 - "35998814/rpn_R-50-FPN_1x": "35998814/12_2017_baselines/rpn_R-50-FPN_1x.yaml.08_06_03.Axg0r179", # noqa B950 - "36225147/fast_R-50-FPN_1x": "36225147/12_2017_baselines/fast_rcnn_R-50-FPN_1x.yaml.08_39_09.L3obSdQ2", # noqa B950 - } - - @staticmethod - def get(name): - if name.startswith("Caffe2Detectron/COCO"): - return ModelCatalog._get_c2_detectron_baseline(name) - if name.startswith("ImageNetPretrained/"): - return ModelCatalog._get_c2_imagenet_pretrained(name) - raise RuntimeError("model not present in the catalog: {}".format(name)) - - @staticmethod - def _get_c2_imagenet_pretrained(name): - prefix = ModelCatalog.S3_C2_DETECTRON_PREFIX - name = name[len("ImageNetPretrained/") :] - name = ModelCatalog.C2_IMAGENET_MODELS[name] - url = "/".join([prefix, name]) - return url - - @staticmethod - def _get_c2_detectron_baseline(name): - name = name[len("Caffe2Detectron/COCO/") :] - url = ModelCatalog.C2_DETECTRON_MODELS[name] - if "keypoint_rcnn" in name: - dataset = ModelCatalog.C2_DATASET_COCO_KEYPOINTS - else: - dataset = 
ModelCatalog.C2_DATASET_COCO - - if "35998355/rpn_R-50-C4_1x" in name: - # this one model is somehow different from others .. - type = "rpn" - else: - type = "generalized_rcnn" - - # Detectron C2 models are stored in the structure defined in `C2_DETECTRON_PATH_FORMAT`. - url = ModelCatalog.C2_DETECTRON_PATH_FORMAT.format( - prefix=ModelCatalog.S3_C2_DETECTRON_PREFIX, url=url, type=type, dataset=dataset - ) - return url - - -class ModelCatalogHandler(PathHandler): - """ - Resolve URL like catalog://. - """ - - PREFIX = "catalog://" - - def _get_supported_prefixes(self): - return [self.PREFIX] - - def _get_local_path(self, path, **kwargs): - logger = logging.getLogger(__name__) - catalog_path = ModelCatalog.get(path[len(self.PREFIX) :]) - logger.info("Catalog entry {} points to {}".format(path, catalog_path)) - return PathManager.get_local_path(catalog_path, **kwargs) - - def _open(self, path, mode="r", **kwargs): - return PathManager.open(self._get_local_path(path), mode, **kwargs) - - -PathManager.register_handler(ModelCatalogHandler()) diff --git a/spaces/TNR-5/AI-WebTV/Dockerfile b/spaces/TNR-5/AI-WebTV/Dockerfile deleted file mode 100644 index 8e670518dbe8c6f90ca81d1aafdda840ebdff7b4..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/AI-WebTV/Dockerfile +++ /dev/null @@ -1,35 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -RUN apt update - -RUN apt --yes install ffmpeg - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user - -# Switch to the "user" user -USER user - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app - -EXPOSE 7860 1935 8000 - -CMD [ "npm", "run", "start" ] \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/adapter.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/adapter.py deleted file mode 100644 index 94c75e1a05b47922945c5233e90e9f936b108b66..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/adapter.py +++ /dev/null @@ -1,137 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -import types -import functools -import zlib - -from pip._vendor.requests.adapters import HTTPAdapter - -from .controller import CacheController, PERMANENT_REDIRECT_STATUSES -from .cache import DictCache -from .filewrapper import CallbackFileWrapper - - -class CacheControlAdapter(HTTPAdapter): - invalidating_methods = {"PUT", "PATCH", "DELETE"} - - def __init__( - self, - cache=None, - cache_etags=True, - controller_class=None, - serializer=None, - heuristic=None, - cacheable_methods=None, - *args, - **kw - ): - super(CacheControlAdapter, self).__init__(*args, **kw) - self.cache = DictCache() if cache is None else cache - self.heuristic = heuristic - self.cacheable_methods = cacheable_methods or ("GET",) - - controller_factory = controller_class or CacheController - self.controller = controller_factory( - self.cache, cache_etags=cache_etags, serializer=serializer - ) - - def send(self, request, cacheable_methods=None, **kw): - """ - Send a request. Use the request information to see if it - exists in the cache and cache the response if we need to and can. - """ - cacheable = cacheable_methods or self.cacheable_methods - if request.method in cacheable: - try: - cached_response = self.controller.cached_request(request) - except zlib.error: - cached_response = None - if cached_response: - return self.build_response(request, cached_response, from_cache=True) - - # check for etags and add headers if appropriate - request.headers.update(self.controller.conditional_headers(request)) - - resp = super(CacheControlAdapter, self).send(request, **kw) - - return resp - - def build_response( - self, request, response, from_cache=False, cacheable_methods=None - ): - """ - Build a response by making a request or using the cache. - - This will end up calling send and returning a potentially - cached response - """ - cacheable = cacheable_methods or self.cacheable_methods - if not from_cache and request.method in cacheable: - # Check for any heuristics that might update headers - # before trying to cache. - if self.heuristic: - response = self.heuristic.apply(response) - - # apply any expiration heuristics - if response.status == 304: - # We must have sent an ETag request. This could mean - # that we've been expired already or that we simply - # have an etag. In either case, we want to try and - # update the cache if that is the case. - cached_response = self.controller.update_cached_response( - request, response - ) - - if cached_response is not response: - from_cache = True - - # We are done with the server response, read a - # possible response body (compliant servers will - # not return one, but we cannot be 100% sure) and - # release the connection back to the pool. 
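# The adapter above is pip's vendored copy of the standalone cachecontrol package.
# Assuming the public package is installed, the usual pattern is to mount the adapter on
# a requests session; whether the second request is actually served from the cache depends
# on the Cache-Control headers the server sends. The URL is a placeholder.
import requests
from cachecontrol import CacheControlAdapter

sess = requests.Session()
sess.mount("https://", CacheControlAdapter())  # defaults to an in-memory DictCache

resp = sess.get("https://example.com/")        # first request hits the network
resp = sess.get("https://example.com/")        # may now be answered from the cache
print(getattr(resp, "from_cache", False))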
- response.read(decode_content=False) - response.release_conn() - - response = cached_response - - # We always cache the 301 responses - elif int(response.status) in PERMANENT_REDIRECT_STATUSES: - self.controller.cache_response(request, response) - else: - # Wrap the response file with a wrapper that will cache the - # response when the stream has been consumed. - response._fp = CallbackFileWrapper( - response._fp, - functools.partial( - self.controller.cache_response, request, response - ), - ) - if response.chunked: - super_update_chunk_length = response._update_chunk_length - - def _update_chunk_length(self): - super_update_chunk_length() - if self.chunk_left == 0: - self._fp._close() - - response._update_chunk_length = types.MethodType( - _update_chunk_length, response - ) - - resp = super(CacheControlAdapter, self).build_response(request, response) - - # See if we should invalidate the cache. - if request.method in self.invalidating_methods and resp.ok: - cache_url = self.controller.cache_url(request.url) - self.cache.delete(cache_url) - - # Give the request a from_cache attr to let people use it - resp.from_cache = from_cache - - return resp - - def close(self): - self.cache.close() - super(CacheControlAdapter, self).close() diff --git a/spaces/TexR6/AttentionMaps/utils.py b/spaces/TexR6/AttentionMaps/utils.py deleted file mode 100644 index 8f36206ae837337842da55a9b1506c982caa6db0..0000000000000000000000000000000000000000 --- a/spaces/TexR6/AttentionMaps/utils.py +++ /dev/null @@ -1,152 +0,0 @@ -import re -import math -import torch -import collections - -from torch import nn -from functools import partial -from torch.utils import model_zoo -from torch.nn import functional as F - -from resnet import resnet50 - -################################################################################ -### Help functions for model architecture -################################################################################ - -# Params: namedtuple -# get_width_and_height_from_size and calculate_output_image_size - -# Parameters for the entire model (stem, all blocks, and head) -Params = collections.namedtuple('Params', [ - 'image_size', 'patch_size', 'emb_dim', 'mlp_dim', 'num_heads', 'num_layers', - 'num_classes', 'attn_dropout_rate', 'dropout_rate', 'resnet' -]) - -# Set Params and BlockArgs's defaults -Params.__new__.__defaults__ = (None, ) * len(Params._fields) - - -def get_width_and_height_from_size(x): - """Obtain height and width from x. - Args: - x (int, tuple or list): Data size. - Returns: - size: A tuple or list (H,W). - """ - if isinstance(x, int): - return x, x - if isinstance(x, list) or isinstance(x, tuple): - return x - else: - raise TypeError() - - -################################################################################ -### Helper functions for loading model params -################################################################################ - -# get_model_params and efficientnet: -# Functions to get BlockArgs and GlobalParams for efficientnet -# url_map and url_map_advprop: Dicts of url_map for pretrained weights -# load_pretrained_weights: A function to load pretrained weights - - -def vision_transformer(model_name): - """Create Params for vision transformer model. - Args: - model_name (str): Model name to be queried. 
- Returns: - Params(params_dict[model_name]) - """ - - params_dict = { - 'ViT-B_16': (384, 16, 768, 3072, 12, 12, 1000, 0.0, 0.1, None), - 'ViT-B_32': (384, 32, 768, 3072, 12, 12, 1000, 0.0, 0.1, None), - 'ViT-L_16': (384, 16, 1024, 4096, 16, 24, 1000, 0.0, 0.1, None), - 'ViT-L_32': (384, 32, 1024, 4096, 16, 24, 1000, 0.0, 0.1, None), - 'R50+ViT-B_16': (384, 1, 768, 3072, 12, 12, 1000, 0.0, 0.1, resnet50), - } - image_size, patch_size, emb_dim, mlp_dim, num_heads, num_layers, num_classes, attn_dropout_rate, dropout_rate, resnet = params_dict[ - model_name] - params = Params(image_size=image_size, - patch_size=patch_size, - emb_dim=emb_dim, - mlp_dim=mlp_dim, - num_heads=num_heads, - num_layers=num_layers, - num_classes=num_classes, - attn_dropout_rate=attn_dropout_rate, - dropout_rate=dropout_rate, - resnet=resnet) - - return params - - -def get_model_params(model_name, override_params): - """Get the block args and global params for a given model name. - Args: - model_name (str): Model's name. - override_params (dict): A dict to modify params. - Returns: - params - """ - params = vision_transformer(model_name) - - if override_params: - # ValueError will be raised here if override_params has fields not included in params. - params = params._replace(**override_params) - return params - - -# train with Standard methods -# check more details in paper(An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale) -url_map = { - 'ViT-B_16': - 'https://github.com/tczhangzhi/VisionTransformer-PyTorch/releases/download/1.0.1/ViT-B_16_imagenet21k_imagenet2012.pth', - 'ViT-B_32': - 'https://github.com/tczhangzhi/VisionTransformer-PyTorch/releases/download/1.0.1/ViT-B_32_imagenet21k_imagenet2012.pth', - 'ViT-L_16': - 'https://github.com/tczhangzhi/VisionTransformer-PyTorch/releases/download/1.0.1/ViT-L_16_imagenet21k_imagenet2012.pth', - 'ViT-L_32': - 'https://github.com/tczhangzhi/VisionTransformer-PyTorch/releases/download/1.0.1/ViT-L_32_imagenet21k_imagenet2012.pth', - 'R50+ViT-B_16': - 'https://github.com/tczhangzhi/VisionTransformer-PyTorch/releases/download/1.0.1/R50+ViT-B_16_imagenet21k_imagenet2012.pth', -} - - -def load_pretrained_weights(model, - model_name, - weights_path=None, - load_fc=True, - advprop=False): - """Loads pretrained weights from weights path or download using url. - Args: - model (Module): The whole model of vision transformer. - model_name (str): Model name of vision transformer. - weights_path (None or str): - str: path to pretrained weights file on the local disk. - None: use pretrained weights downloaded from the Internet. - load_fc (bool): Whether to load pretrained weights for fc layer at the end of the model. 
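# A small, hypothetical use of the helpers above: look up the configuration tuple for one
# ViT variant and override a single field. This is a pure dictionary lookup; no weights
# are downloaded by this call.
params = get_model_params('ViT-B_16', override_params={'num_classes': 10})
print(params.image_size, params.patch_size, params.emb_dim, params.num_classes)
# 384 16 768 10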
- """ - if isinstance(weights_path, str): - state_dict = torch.load(weights_path) - else: - state_dict = model_zoo.load_url(url_map[model_name]) - - if load_fc: - ret = model.load_state_dict(state_dict, strict=False) - assert not ret.missing_keys, 'Missing keys when loading pretrained weights: {}'.format( - ret.missing_keys) - else: - state_dict.pop('classifier.weight') - state_dict.pop('classifier.bias') - ret = model.load_state_dict(state_dict, strict=False) - assert set(ret.missing_keys) == set([ - 'classifier.weight', 'classifier.bias' - ]), 'Missing keys when loading pretrained weights: {}'.format( - ret.missing_keys) - assert not ret.unexpected_keys, 'Missing keys when loading pretrained weights: {}'.format( - ret.unexpected_keys) - - print('Loaded pretrained weights for {}'.format(model_name)) diff --git a/spaces/Thafx/sdrv40/README.md b/spaces/Thafx/sdrv40/README.md deleted file mode 100644 index ac86311ed41636a90f3586da3d53686a7f0b3c81..0000000000000000000000000000000000000000 --- a/spaces/Thafx/sdrv40/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Realistic Vision v4.0 -emoji: 📷 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: true -duplicated_from: Thafx/sdrv30 -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- realistic-vision -models: -- SG161222/Realistic_Vision_V4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/onnx_inference.py b/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - 
hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/Usaki108/VoiceChange/infer_pack/attentions.py b/spaces/Usaki108/VoiceChange/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/Usaki108/VoiceChange/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() 
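A minimal end-to-end sketch of the ONNX voice-conversion pipeline above (ContentVec, get_f0_predictor and OnnxRVC); the file paths and speaker id are placeholders, and a matching RVC ONNX export plus the pretrained/vec-768-layer-12.onnx encoder are assumed to exist locally:

model = OnnxRVC("rvc_model.onnx", sr=40000, hop_size=512,
                vec_path="vec-768-layer-12", device="cpu")
out_wav = model.inference("input.wav", sid=0, f0_method="dio", f0_up_key=0)
soundfile.write("output.wav", out_wav, 40000)   # int16 samples at the model sampling rate

Note that inference() raises RuntimeError ("Reached Max Length") for clips longer than 50 seconds, so longer inputs have to be chunked by the caller.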
- self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - 
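A minimal shape sketch for the relative-position Encoder above; the hyperparameters are illustrative (VITS-style values) and the infer_pack.commons / infer_pack.modules helpers imported at the top of this file are assumed to be importable:

enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
              n_layers=6, kernel_size=3, p_dropout=0.1, window_size=10)
x = torch.randn(4, 192, 100)       # [batch, channels, frames]
x_mask = torch.ones(4, 1, 100)     # 1 = valid frame, 0 = padding
y = enc(x, x_mask)                 # -> [4, 192, 100], masked frames are zeroed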
super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
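A quick shape check for the relative/absolute index helpers above; the sizes are illustrative, and any instance whose channels are divisible by n_heads works:

mha = MultiHeadAttention(channels=192, out_channels=192, n_heads=2, window_size=10)
rel_logits = torch.randn(1, 2, 5, 2 * 5 - 1)                      # [b, h, l, 2l-1]
abs_scores = mha._relative_position_to_absolute_position(rel_logits)
print(abs_scores.shape)                                           # torch.Size([1, 2, 5, 5])
rel_weights = mha._absolute_position_to_relative_position(abs_scores)
print(rel_weights.shape)                                          # torch.Size([1, 2, 5, 9])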
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Better.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Better.py deleted file mode 100644 index bee52870eb3300f25c9762ab204968791a2a30a9..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Better.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -import json -import requests -from typing import Dict, get_type_hints - -url = 'https://openai-proxy-api.vercel.app/v1/' -model = [ - 'gpt-3.5-turbo', - 'gpt-3.5-turbo-0613', - 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', - 'gpt-4', -] - -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.58', - 'Referer': 'https://chat.ylokh.xyz/', - 'Origin': 'https://chat.ylokh.xyz', - 'Connection': 'keep-alive', - } - - json_data = { - 'messages': messages, - 'temperature': 1.0, - 'model': model, - 'stream': stream, - } - - response = requests.post( - 'https://openai-proxy-api.vercel.app/v1/chat/completions', headers=headers, json=json_data, stream=True - ) - - for token in response.iter_lines(): - decoded = token.decode('utf-8') - if decoded.startswith('data: '): - data_str = decoded.replace('data: ', '') - data = json.loads(data_str) - if 'choices' in data and 'delta' in data['choices'][0]: - delta = data['choices'][0]['delta'] - content = delta.get('content', '') - finish_reason = delta.get('finish_reason', '') - - if finish_reason == 'stop': - break - if content: - yield content - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in 
_create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git "a/spaces/WhyLIM/ChatGPT-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/WhyLIM/ChatGPT-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index bbaa545820ce927915578aafbaec77a2dbe56378..0000000000000000000000000000000000000000 --- "a/spaces/WhyLIM/ChatGPT-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,17 +0,0 @@ -from predict import predict_no_ui -from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down -fast_debug = False - -@CatchException -def 高阶功能模板函数(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - for i in range(5): - i_say = f'我给出一个数字,你给出该数字的平方。我给出数字:{i}' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' # 由于请求gpt需要一段时间,我们先及时地做一次状态显示 - - gpt_say = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature) # 请求gpt,需要一段时间 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield chatbot, history, '正常' # 显示 \ No newline at end of file diff --git a/spaces/Wings77/ChatGPT4/README.md b/spaces/Wings77/ChatGPT4/README.md deleted file mode 100644 index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000 --- a/spaces/Wings77/ChatGPT4/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT4 -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Xenova/semantic-image-search-client/_next/static/chunks/webpack-a260e913a8d312ed.js b/spaces/Xenova/semantic-image-search-client/_next/static/chunks/webpack-a260e913a8d312ed.js deleted file mode 100644 index a3eb5ff8f5ee3fef2b4720a3a0e62bffaad1af8a..0000000000000000000000000000000000000000 --- a/spaces/Xenova/semantic-image-search-client/_next/static/chunks/webpack-a260e913a8d312ed.js +++ /dev/null @@ -1 +0,0 @@ -!function(){"use strict";var e,t,n,r,o,u,i,c,f,a={},l={};function s(e){var t=l[e];if(void 0!==t)return t.exports;var n=l[e]={exports:{}},r=!0;try{a[e](n,n.exports,s),r=!1}finally{r&&delete l[e]}return n.exports}s.m=a,e=[],s.O=function(t,n,r,o){if(n){o=o||0;for(var u=e.length;u>0&&e[u-1][2]>o;u--)e[u]=e[u-1];e[u]=[n,r,o];return}for(var i=1/0,u=0;u=o&&Object.keys(s.O).every(function(e){return s.O[e](n[f])})?n.splice(f--,1):(c=!1,o any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
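A minimal usage sketch for this sampler (implemented below); train_dataset, collate_fn, num_epochs and the boundary values are placeholders, and the wrapped dataset is assumed to expose the lengths list this class reads:

sampler = DistributedBucketSampler(train_dataset, batch_size=32,
                                   boundaries=[32, 300, 400, 500, 600, 700, 800, 900, 1000],
                                   num_replicas=1, rank=0, shuffle=True)
loader = torch.utils.data.DataLoader(train_dataset, batch_sampler=sampler,
                                     collate_fn=collate_fn, num_workers=4)
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)   # re-seeds the deterministic per-epoch shuffle used in __iter__
    for batch in loader:
        ...                    # each batch holds utterances of similar length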
- """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/Yntec/DucHaiten-Webui-CPU/README.md b/spaces/Yntec/DucHaiten-Webui-CPU/README.md deleted file mode 100644 index cd7f17f5f146820d37932d42d402a63990be2b2e..0000000000000000000000000000000000000000 --- a/spaces/Yntec/DucHaiten-Webui-CPU/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: DucHaiten Webui on Cpu -emoji: 👌👌 -colorFrom: pink -colorTo: teal -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: true -python_version: 3.10.6 -duplicated_from: hehysh/stable-diffusion-webui-cpu-the-best ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/YuAnthony/Audio-Caption/processes/__init__.py 
b/spaces/YuAnthony/Audio-Caption/processes/__init__.py deleted file mode 100644 index fb42eb10c371430372d76b02ef9f16a603f54647..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Audio-Caption/processes/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from . import dataset -from . import method - -__author__ = 'Konstantinos Drossos -- Tampere University' -__docformat__ = 'reStructuredText' -__all__ = ['dataset', 'method'] - -# EOF diff --git a/spaces/Yuliang/ECON/lib/net/__init__.py b/spaces/Yuliang/ECON/lib/net/__init__.py deleted file mode 100644 index 3a6f1d279143173e7dac5f626817a2fe90371bea..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/net/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .NormalNet import NormalNet diff --git a/spaces/ZilliaxOfficial/nyaru-svc-3.0/hubert/__init__.py b/spaces/ZilliaxOfficial/nyaru-svc-3.0/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/abidlabs/whisper-large-v2/README.md b/spaces/abidlabs/whisper-large-v2/README.md deleted file mode 100644 index d2bbb47cf2419ea658b34c2f5d3844752382adbb..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/whisper-large-v2/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Whisper Large V2 -emoji: 🤫 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: sanchit-gandhi/whisper-large-v2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aidealab/interior-ai/segmentation.py b/spaces/aidealab/interior-ai/segmentation.py deleted file mode 100644 index 6b4c9b0982778a26f873355323d25af04845e9ef..0000000000000000000000000000000000000000 --- a/spaces/aidealab/interior-ai/segmentation.py +++ /dev/null @@ -1,55 +0,0 @@ -import logging -from typing import List, Tuple, Dict - -import streamlit as st -import torch -import gc -import numpy as np -from PIL import Image - -from transformers import AutoImageProcessor, UperNetForSemanticSegmentation - -from palette import ade_palette - -LOGGING = logging.getLogger(__name__) - - -def flush(): - gc.collect() - torch.cuda.empty_cache() - -@st.experimental_singleton(max_entries=5) -def get_segmentation_pipeline() -> Tuple[AutoImageProcessor, UperNetForSemanticSegmentation]: - """Method to load the segmentation pipeline - Returns: - Tuple[AutoImageProcessor, UperNetForSemanticSegmentation]: segmentation pipeline - """ - image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small") - image_segmentor = UperNetForSemanticSegmentation.from_pretrained( - "openmmlab/upernet-convnext-small") - return image_processor, image_segmentor - - -@torch.inference_mode() -@torch.autocast('cuda') -def segment_image(image: Image) -> Image: - """Method to segment image - Args: - image (Image): input image - Returns: - Image: segmented image - """ - image_processor, image_segmentor = get_segmentation_pipeline() - pixel_values = image_processor(image, return_tensors="pt").pixel_values - with torch.no_grad(): - outputs = image_segmentor(pixel_values) - - seg = image_processor.post_process_semantic_segmentation( - outputs, target_sizes=[image.size[::-1]])[0] - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - palette = np.array(ade_palette()) - for label, color in enumerate(palette): - color_seg[seg == label, :] = 
color - color_seg = color_seg.astype(np.uint8) - seg_image = Image.fromarray(color_seg).convert('RGB') - return seg_image \ No newline at end of file diff --git a/spaces/akhaliq/JoJoGAN/op/fused_bias_act.cpp b/spaces/akhaliq/JoJoGAN/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/template_single_spk/voc1/cmd.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/template_single_spk/voc1/cmd.sh deleted file mode 100644 index 19f342102fc4f3389157c48f1196b16b68eb1cf1..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/template_single_spk/voc1/cmd.sh +++ /dev/null @@ -1,91 +0,0 @@ -# ====== About run.pl, queue.pl, slurm.pl, and ssh.pl ====== -# Usage: .pl [options] JOB=1: -# e.g. -# run.pl --mem 4G JOB=1:10 echo.JOB.log echo JOB -# -# Options: -# --time
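The fused_bias_act.cpp binding above only declares the C++/pybind11 entry point; in practice it is JIT-compiled together with its CUDA kernel and called from Python. A minimal sketch, assuming a CUDA device, a companion kernel source named fused_bias_act_kernel.cu in the same op directory, and illustrative act/grad/alpha/scale values in the style of a fused leaky-ReLU wrapper:

import torch
from torch.utils.cpp_extension import load

fused = load(name="fused",
             sources=["op/fused_bias_act.cpp", "op/fused_bias_act_kernel.cu"])

x = torch.randn(4, 512, 8, 8, device="cuda")
bias = torch.zeros(512, device="cuda")
empty = torch.empty(0, device="cuda")   # placeholder refer tensor; an empty tensor is passed in the forward pass
out = fused.fused_bias_act(x, bias, empty, 3, 0, 0.2, 2 ** 0.5)

Downstream code normally hides this call behind a small autograd Function or nn.Module wrapper rather than invoking the raw op directly.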